More stories

  • Expanding the conversation about sustainability

    Stacy Godfreey-Igwe sat in her dorm room at MIT, staring frantically at her phone. An unprecedented snowstorm had hit her hometown of Richardson, Texas, and she was having difficulty contacting her family. She felt worried and frustrated, aware that nearby neighborhoods hadn’t lost power during the storm but that her family home had suffered significant damage. She finally got a hold of her parents, who had taken refuge in a nearby office building, but the experience left her shaken and more determined than ever to devote herself to addressing climate injustice.

    Godfreey-Igwe, the daughter of Nigerian immigrants, has long been concerned about how marginalized communities can shoulder a disproportionately heavy environmental burden. At MIT, she chose a double major in mechanical engineering with a concentration in global and sustainable development, and in African and African diaspora studies, a major she helped establish and became the first student to declare. Initially seeing the two fields as separate, she now embraces their intersectionality in her work in and out of the classroom.

    Through an Undergraduate Research Opportunity Program (UROP) project with Amah Edoh, the Homer A. Burnell Assistant Professor of Anthropology and African Studies at MIT, Godfreey-Igwe has learned more about her Igbo cultural heritage and hopes to understand what climate change will mean for the culture’s sustainability. Godfreey-Igwe herself is the “Ada” – or eldest child – in her family, a role that carries a responsibility for keeping her family’s culture alive. That sense of responsibility, to her community and to future generations, has stayed with her at MIT.

    For Independent Activities Period during her first year at the Institute, Godfreey-Igwe traveled to Kazakhstan through MIT’s Global Teaching Labs. As a student teacher, she taught Kazakh high school chemistry students about polymers and the impact plastic materials can have on the Earth’s climate. She was also an MIT International Science and Technology Initiatives (MISTI) Identity X Ambassador during her time there, blogging about her experiences as a Black woman in the country. She saw the role as an opportunity to shed light on the challenges of navigating her identity abroad, with hopes of fostering community through her posts.

    The following summer, Godfreey-Igwe interned for the Saathi Biodegradable Sanitary Napkins Startup in Ahmedabad, India. During her time there, she researched and wrote articles focused on educating the public about the benefits eco-friendly sanitary pads posed to public health and the environment. She also interviewed a director for the city’s Center for Environmental Education, about the importance of uplifting and supporting marginalized communities hit hardest by climate change. The conversation was eye-opening for Godfreey-Igwe; she saw not only how complex the process of mitigating climate change was, but also how diverse the solutions needed to be.

    She has also pursued her interest in plastics and sustainability through summer research projects. In the summer of 2020, Godfreey-Igwe worked in a lab in Stanford University’s civil and environmental engineering department to design models maximizing the efficiency of bacterial processes that lead to the creation of bioplastics. The project’s goal was to find a sustainable form of plastic breakdown for future applications in the environment. She presented her research at the Harvard National Collegiate Research Conference and received a presentation award during the MIT Mechanical Engineering Research Exhibition. This past summer, she was awarded a grant through the NSF Center for Sustainable Polymers at the University of Minnesota to work on a research project seeking to understand microplastic generation.

    Ultimately, Godfreey-Igwe recognizes that to propose thoughtful solutions to climate issues, the people hit hardest must be a part of the conversation. For her, a key way to bring more people into conversations about sustainability and inclusion is through mentorship. This role is especially meaningful to Godfreey-Igwe because she knows firsthand how important it is for members of underrepresented groups to feel supported at a place like MIT. “The experience of coming to an institution like MIT, as someone who is low-income or of color, can be isolating. Especially if you feel like there are people who can’t relate to your background,” she says.

    Godfreey-Igwe is a member of Active Community Engagement FPOP (ACE), a social action group on campus that engages with local communities through public service work. Initially joining as a participant, Godfreey-Igwe became a counselor and then coordinator; she facilitates social action workshops and introduces students to service opportunities both at MIT and around Boston. She says her time in ACE has helped build her confidence in her abilities as a leader, mentor, and cultivator of inclusionary spaces. She is also a member of iHouse (International Development House), where she served for three years as the housing and service co-chair.

    Godfreey-Igwe also tutors one-on-one for Tutoring Plus in Cambridge, where since her first year she has provided mentorship and STEM tutoring to a low-income, high school student of color. Last spring, she was awarded the Tutoring Plus of Cambridge Unwavering Service Award for her service and commitment to the program.

    Looking ahead, Godfreey-Igwe hopes to use the skills learned from her mentorship and leadership roles to establish greater structures for collaboration on climate mitigation technologies, ideas, and practices. Focusing on mentoring young scientists of color, she wants to build up underprivileged groups and institutions for sustainable climate change research, ensuring everyone has a voice in the ongoing conversation.

    “In all this work, I’m hoping to make sure that globally marginalized communities are more visible in climate-related spaces, both in terms of who is doing the engineering and who the engineering works for,” she says.

  • Selective separation could help alleviate critical metals shortage

    New processing methods developed by MIT researchers could help ease looming shortages of the essential metals that power everything from phones to automotive batteries, by making it easier to separate these rare metals from mining ores and recycled materials.

    Selective adjustments within a chemical process called sulfidation allowed professor of metallurgy Antoine Allanore and his graduate student Caspar Stinn to successfully target and separate rare metals, such as the cobalt in a lithium-ion battery, from mixed-metal materials.

    As they report in the journal Nature, their processing techniques allow the metals to remain in solid form and be separated without dissolving the material. This avoids traditional but costly liquid separation methods that require significant energy. The researchers developed processing conditions for 56 elements and tested these conditions on 15 elements.

    Their sulfidation approach, they write in the paper, could reduce the capital costs of separating metals from mixed-metal oxides by 65 to 95 percent. Their selective processing could also reduce greenhouse gas emissions by 60 to 90 percent compared to traditional liquid-based separation.

    “We were excited to find replacements for processes that had really high levels of water usage and greenhouse gas emissions, such as lithium-ion battery recycling, rare-earth magnet recycling, and rare-earth separation,” says Stinn. “Those are processes that make materials for sustainability applications, but the processes themselves are very unsustainable.”

    The findings offer one way to alleviate a growing demand for minor metals like cobalt, lithium, and rare earth elements that are used in “clean” energy products like electric cars, solar cells, and electricity-generating windmills. According to a 2021 report by the International Energy Agency, the average amount of minerals needed for a new unit of power generation capacity has risen by 50 percent since 2010, as renewable energy technologies using these metals expand their reach.

    Opportunity for selectivity

    For more than a decade, the Allanore group has been studying the use of sulfide materials in developing new electrochemical routes for metal production. Sulfides are common materials, but the MIT scientists are experimenting with them under extreme conditions like very high temperatures — from 800 to 3,000 degrees Fahrenheit — that are used in manufacturing plants but not in a typical university lab.

    “We are looking at very well-established materials in conditions that are uncommon compared to what has been done before,” Allanore explains, “and that is why we are finding new applications or new realities.”

    In the process of synthesizing high-temperature sulfide materials to support electrochemical production, Stinn says, “we learned we could be very selective and very controlled about what products we made. And it was with that understanding that we realized, ‘OK, maybe there’s an opportunity for selectivity in separation here.’”

    The chemical reaction exploited by the researchers reacts a material containing a mix of metal oxides to form new metal-sulfur compounds or sulfides. By altering factors like temperature, gas pressure, and the addition of carbon in the reaction process, Stinn and Allanore found that they could selectively create a variety of sulfide solids that can be physically separated by a variety of methods, including crushing the material and sorting different-sized sulfides or using magnets to separate different sulfides from one another.

    Current methods of rare metal separation rely on large quantities of energy, water, acids, and organic solvents, which have costly environmental impacts, says Stinn. “We are trying to use materials that are abundant, economical, and readily available for sustainable materials separation, and we have expanded that domain to now include sulfur and sulfides.”

    Stinn and Allanore used selective sulfidation to separate out economically important metals like cobalt in recycled lithium-ion batteries. They also used their techniques to separate dysprosium — a rare-earth element used in applications ranging from data storage devices to optoelectronics — from rare-earth-boron magnets, or from the typical mixture of oxides available from mining minerals such as bastnaesite.

    Leveraging existing technology

    Metals like cobalt and rare earths are only found in small amounts in mined materials, so industries must process large volumes of material to retrieve or recycle enough of these metals to be economically viable, Allanore explains. “It’s quite clear that these processes are not efficient. Most of the emissions come from the lack of selectivity and the low concentration at which they operate.”

    By eliminating the need for liquid separation and the extra steps and materials it requires to dissolve and then reprecipitate individual elements, the MIT researchers’ process significantly reduces the costs incurred and emissions produced during separation.

    “One of the nice things about separating materials using sulfidation is that a lot of existing technology and process infrastructure can be leveraged,” Stinn says. “It’s new conditions and new chemistries in established reactor styles and equipment.”

    The next step is to show that the process can work for large amounts of raw material — separating out 16 elements from rare-earth mining streams, for example. “Now we have shown that we can handle three or four or five of them together, but we have not yet processed an actual stream from an existing mine at a scale to match what’s required for deployment,” Allanore says.

    Stinn and colleagues in the lab have built a reactor that can process about 10 kilograms of raw material per day, and the researchers are starting conversations with several corporations about the possibilities.

    “We are discussing what it would take to demonstrate the performance of this approach with existing mineral and recycling streams,” Allanore says.

    This research was supported by the U.S. Department of Energy and the U.S. National Science Foundation.

  • A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.

    Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but to provide information about which changes will provide the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.

    The new system, called a differentiable solar cell simulator, is described in a paper published today in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.

    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.

    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”

    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
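    The gradient-following search Mann describes can be illustrated with a toy example. The Python sketch below is not the team's simulator: it uses a made-up, differentiable "efficiency" function of a single layer thickness, and every number in it is invented for illustration. The point is only the mechanic: the derivative tells the optimizer which direction to move, so it follows one path instead of searching at random.

```python
# Toy illustration of gradient-guided design search (NOT the MIT/Google Brain
# simulator). The efficiency function and its peak at 120 nm are invented.

def efficiency(t_nm):
    """Hypothetical efficiency (%) as a smooth function of layer thickness."""
    return 20.0 - 0.001 * (t_nm - 120.0) ** 2

def d_efficiency(t_nm):
    """Analytic derivative d(efficiency)/d(thickness)."""
    return -0.002 * (t_nm - 120.0)

t = 60.0        # initial thickness guess, in nanometers
step = 100.0    # gradient-ascent step size
for _ in range(200):
    t += step * d_efficiency(t)   # move in the direction the derivative suggests

print(round(t, 1), round(efficiency(t), 2))  # converges to the 120 nm optimum
```

A conventional simulator would only report `efficiency(t)` for each trial thickness; the differentiable version also supplies `d_efficiency(t)`, which is what makes the single-path trajectory possible.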

    Since advanced solar cells often are composed of multiple layers interlaced with conductive materials to carry electric charge from one to the other, this computational tool reveals how changing the relative thicknesses of these different layers will affect the device’s output. “This is very important because the thickness is critical. There is a strong interplay between light propagation and the thickness of each layer and the absorption of each layer,” Mann explains.

    Other variables that can be evaluated include the amount of doping (the introduction of atoms of another element) that each layer receives, or the dielectric constant of insulating layers, or the bandgap, a measure of the energy levels of photons of light that can be captured by different materials used in the layers.

    This simulator is now available as an open-source tool that can be used immediately to help guide research in this field, Romano says. “It is ready, and can be taken up by industry experts.” To make use of it, researchers would couple this device’s computations with an optimization algorithm, or even a machine learning system, to rapidly assess a wide variety of possible changes and home in quickly on the most promising alternatives.

    At this point, the simulator is based on just a one-dimensional version of the solar cell, so the next step will be to expand its capabilities to include two- and three-dimensional configurations. But even this 1D version “can cover the majority of cells that are currently under production,” Romano says. Certain variations, such as so-called tandem cells using different materials, cannot yet be simulated directly by this tool, but “there are ways to approximate a tandem solar cell by simulating each of the individual cells,” Mann says.

    The simulator is “end-to-end,” Romano says, meaning it computes the sensitivity of the efficiency, also taking into account light absorption. He adds: “An appealing future direction is composing our simulator with advanced existing differentiable light-propagation simulators, to achieve enhanced accuracy.”

    Moving forward, Romano says, because this is an open-source code, “that means that once it’s up there, the community can contribute to it. And that’s why we are really excited.” Although this research group is “just a handful of people,” he says, now anyone working in the field can make their own enhancements and improvements to the code and introduce new capabilities.

    “Differentiable physics is going to provide new capabilities for the simulations of engineered systems,” says Venkat Viswanathan, an associate professor of mechanical engineering at Carnegie Mellon University, who was not associated with this work. “The  differentiable solar cell simulator is an incredible example of differentiable physics, that can now provide new capabilities to optimize solar cell device performance,” he says, calling the study “an exciting step forward.”

    In addition to Mann and Romano, the team included Eric Fadel and Steven Johnson at MIT, and Samuel Schoenholz and Ekin Cubuk at Google Brain. The work was supported in part by Eni S.p.A. and the MIT Energy Initiative, and the MIT Quest for Intelligence.

  • Q&A: More-sustainable concrete with machine learning

    As a building material, concrete withstands the test of time. Its use dates back to early civilizations, and today it is the most popular composite choice in the world. However, it’s not without its faults. Production of its key ingredient, cement, contributes 8-9 percent of global anthropogenic CO2 emissions and 2-3 percent of energy consumption, figures that are only projected to increase in the coming years. With United States infrastructure aging, the federal government recently passed a milestone bill to revitalize and upgrade it; combined with the push to reduce greenhouse gas emissions wherever possible, that puts concrete in the crosshairs for modernization, too.

    Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Materials Science and Engineering, and Jie Chen, MIT-IBM Watson AI Lab research scientist and manager, think artificial intelligence can help meet this need by designing and formulating new, more sustainable concrete mixtures, with lower costs and carbon dioxide emissions, while improving material performance and reusing manufacturing byproducts in the material itself. Olivetti’s research improves environmental and economic sustainability of materials, and Chen develops and optimizes machine learning and computational techniques, which he can apply to materials reformulation. Olivetti and Chen, along with their collaborators, have recently teamed up for an MIT-IBM Watson AI Lab project to make concrete more sustainable for the benefit of society, the climate, and the economy.

    Q: What applications does concrete have, and what properties make it a preferred building material?

    Olivetti: Concrete is the dominant building material globally with an annual consumption of 30 billion metric tons. That is over 20 times the next most produced material, steel, and the scale of its use leads to considerable environmental impact, approximately 5-8 percent of global greenhouse gas (GHG) emissions. It can be made locally, has a broad range of structural applications, and is cost-effective. Concrete is a mixture of fine and coarse aggregate, water, cement binder (the glue), and other additives.

    Q: Why isn’t it sustainable, and what research problems are you trying to tackle with this project?

    Olivetti: The community is working on several ways to reduce the impact of this material, including alternative fuel use for heating the cement mixture, increasing energy and materials efficiency, and carbon sequestration at production facilities. But one important opportunity is to develop an alternative to the cement binder.

    While cement is 10 percent of the concrete mass, it accounts for 80 percent of the GHG footprint. This impact derives not only from the fuel burned to heat and run the chemical reaction required in manufacturing, but also from the chemical reaction itself, which releases CO2 through the calcination of limestone. Therefore, partially replacing the input ingredients to cement (traditionally ordinary Portland cement, or OPC) with alternative materials from waste and byproducts can reduce the GHG footprint. But use of these alternatives is not inherently more sustainable, because wastes might have to travel long distances, which adds to fuel emissions and cost, or might require pretreatment processes. The optimal way to make use of these alternate materials will be situation-dependent. But because of the vast scale, we also need solutions that account for the huge volumes of concrete needed. This project is trying to develop novel concrete mixtures that will decrease the GHG impact of cement and concrete, moving away from trial-and-error processes toward ones that are more predictive.

    Chen: If we want to fight climate change and make our environment better, are there alternative ingredients or a reformulation we could use so that less greenhouse gas is emitted? We hope that through this project using machine learning we’ll be able to find a good answer.

    Q: Why is this problem important to address now, at this point in history?

    Olivetti: There is urgent need to address greenhouse gas emissions as aggressively as possible, and the road to doing so isn’t necessarily straightforward for all areas of industry. For transportation and electricity generation, there are paths that have been identified to decarbonize those sectors. We need to move much more aggressively to achieve those in the time needed; further, the technological approaches to achieve that are more clear. However, for tough-to-decarbonize sectors, such as industrial materials production, the pathways to decarbonization are not as mapped out.

    Q: How are you planning to address this problem to produce better concrete?

    Olivetti: The goal is to predict mixtures that meet performance criteria, such as strength and durability, while also balancing economic and environmental impact. A key to this is to use industrial wastes in blended cements and concretes. To do this, we need to understand the glass and mineral reactivity of constituent materials. This reactivity not only determines the limit of their possible use in cement systems but also controls concrete processing and the development of strength and pore structure, which ultimately control concrete durability and life-cycle CO2 emissions.

    Chen: We investigate using waste materials to replace part of the cement component. This is something that we’ve hypothesized would be more sustainable and economical — actually waste materials are common, and they cost less. Because of the reduction in the use of cement, the final concrete product would be responsible for much less carbon dioxide production. Figuring out the right concrete mixture proportion that makes durable concretes while achieving other goals is a very challenging problem. Machine learning is giving us an opportunity to explore advances in predictive modeling, uncertainty quantification, and optimization to solve the issue. What we are doing is exploring options using deep learning as well as multi-objective optimization techniques to find an answer. These efforts are now more feasible to carry out, and they will produce results with the reliability estimates that we need to understand what makes a good concrete.

    Q: What kinds of AI and computational techniques are you employing for this?

    Olivetti: We use AI techniques to collect data on individual concrete ingredients, mix proportions, and concrete performance from the literature through natural language processing. We also add data obtained from industry and/or high throughput atomistic modeling and experiments to optimize the design of concrete mixtures. Then we use this information to develop insight into the reactivity of possible waste and byproduct materials as alternatives to cement materials for low-CO2 concrete. By incorporating generic information on concrete ingredients, the resulting concrete performance predictors are expected to be more reliable and transformative than existing AI models.

    Chen: The final objective is to figure out what constituents, and how much of each, to put into the recipe for producing the concrete that optimizes the various factors: strength, cost, environmental impact, performance, etc. For each of the objectives, we need certain models: We need a model to predict the performance of the concrete (like, how long does it last and how much weight does it sustain?), a model to estimate the cost, and a model to estimate how much carbon dioxide is generated. We will need to build these models by using data from literature, from industry, and from lab experiments.

    We are exploring Gaussian process models to predict the concrete strength, going forward into days and weeks. This model can give us an uncertainty estimate of the prediction as well. Such a model needs certain parameters to be specified, which we will calculate with another model. At the same time, we also explore neural network models because we can inject domain knowledge from human experience into them. Some models are as simple as multi-layer perceptrons, while some are more complex, like graph neural networks. The goal here is that we want to have a model that is not only accurate but also robust — the input data is noisy, and the model must embrace the noise, so that its prediction is still accurate and reliable for the multi-objective optimization.
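    As a rough illustration of the kind of Gaussian process regression Chen describes, here is a minimal NumPy sketch that predicts strength at an unseen curing day and returns an uncertainty estimate alongside the prediction. The data points, kernel choice, and hyperparameters are all invented for illustration and are far simpler than the project's models.

```python
import numpy as np

def rbf_kernel(a, b, length=7.0, var=100.0):
    """Squared-exponential kernel between two 1-D arrays of curing days."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return var * np.exp(-0.5 * (d / length) ** 2)

# Invented measurements: compressive strength (MPa) at several curing days.
days = np.array([1.0, 3.0, 7.0, 14.0, 28.0])
strength = np.array([10.0, 20.0, 28.0, 33.0, 38.0])
noise = 1.0                                   # assumed measurement-noise variance

K = rbf_kernel(days, days) + noise * np.eye(len(days))
query = np.array([21.0])                      # predict strength at day 21
k_star = rbf_kernel(query, days)

alpha = np.linalg.solve(K, strength)
mean = k_star @ alpha                                       # posterior mean
var = rbf_kernel(query, query) - k_star @ np.linalg.solve(K, k_star.T)
std = np.sqrt(np.diag(var))                                 # posterior std dev

print(float(mean[0]), float(std[0]))
```

The posterior standard deviation is the "uncertainty estimate of the prediction" Chen mentions: far from any measurement it widens toward the prior, near dense data it shrinks.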

    Once we have built models that we are confident with, we will inject their predictions and uncertainty estimates into the optimization of multiple objectives, under constraints and under uncertainties.

    Q: How do you balance cost-benefit trade-offs?

    Chen: The multiple objectives we consider are not necessarily consistent, and sometimes they are at odds with each other. The goal is to identify scenarios where the values for our objectives cannot be pushed further simultaneously without compromising one or a few. For example, if you want to further reduce the cost, you probably have to sacrifice performance or accept a greater environmental impact. Eventually, we will give the results to policymakers, and they will look into the results and weigh the options. For example, they may be able to tolerate a slightly higher cost in exchange for a significant reduction in greenhouse gas emissions. Alternatively, if the cost varies little but the concrete performance changes drastically, say, doubles or triples, then this is definitely a favorable outcome.
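    The screening Chen describes amounts to finding the Pareto front: the candidates not dominated by any other on every objective at once. A minimal sketch, with invented candidate mixtures (cost, CO2, and negated strength, so that all three are minimized):

```python
# Pareto-front screening sketch. Each candidate is (cost, CO2, -strength);
# strength is negated so every objective is "smaller is better".
# All candidate numbers are invented for illustration.

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

mixtures = [
    (100, 50, -30),   # cheap, high-CO2, weak
    (120, 30, -35),   # mid-cost, cleaner, stronger
    (150, 30, -35),   # dominated: same CO2 and strength as above, but costlier
    (200, 10, -40),   # expensive, cleanest, strongest
]
front = pareto_front(mixtures)
print(front)
```

The dominated mixture drops out; the remaining three represent genuine trade-offs, which is exactly the menu a policymaker would weigh.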

    Q: What kinds of challenges do you face in this work?

    Chen: The data we get, either from industry or from the literature, are very noisy; concrete measurements can vary a lot depending on where and when they are taken. There is also substantial missing data when we integrate them from different sources, so we need to spend a lot of effort organizing the data and making it usable for building and training machine learning models. We also explore imputation techniques that substitute missing features, as well as models that tolerate missing features, in our predictive modeling and uncertainty estimation.
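    One of the simplest imputation techniques of the kind Chen mentions is column-mean substitution: each missing feature is filled with the mean of the observed values in its column. A minimal sketch, with invented sample data:

```python
# Mean-imputation sketch for an incomplete data table. Rows are concrete
# samples; None marks a missing measurement. All values are invented.

def impute_column_means(rows):
    """Replace each None with the mean of the observed values in its column."""
    cols = list(zip(*rows))
    means = [sum(v for v in col if v is not None) /
             sum(1 for v in col if v is not None) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

data = [
    [300.0, 0.45, 35.0],   # cement kg/m3, water/cement ratio, strength MPa
    [350.0, None, 40.0],
    [None,  0.50, 30.0],
]
filled = impute_column_means(data)
print(filled)
```

Real pipelines typically use richer strategies (model-based or multiple imputation) so that the uncertainty introduced by the filled-in values can be tracked rather than hidden.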

    Q: What do you hope to achieve through this work?

    Chen: In the end, we are suggesting either one or a few concrete recipes, or a continuum of recipes, to manufacturers and policymakers. We hope that this will provide invaluable information for both the construction industry and for the effort of protecting our beloved Earth.

    Olivetti: We’d like to develop a robust way to design cements that make use of waste materials to lower their CO2 footprint. Nobody is trying to make waste, so we can’t rely on one stream as a feedstock if we want this to be massively scalable. We have to be flexible and robust to shift with feedstock changes, and for that we need improved understanding. Our approach to develop local, dynamic, and flexible alternatives is to learn what makes these wastes reactive, so we know how to optimize their use and do so as broadly as possible. We do that through predictive model development through software we have developed in my group to automatically extract data from literature on over 5 million texts and patents on various topics. We link this to the creative capabilities of our IBM collaborators to design methods that predict the final impact of new cements. If we are successful, we can lower the emissions of this ubiquitous material and play our part in achieving carbon emissions mitigation goals.

    Other researchers involved with this project include Stefanie Jegelka, the X-Window Consortium Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science; Richard Goodwin, IBM principal researcher; Soumya Ghosh, MIT-IBM Watson AI Lab research staff member; and Kristen Severson, former research staff member. Collaborators included Nghia Hoang, former research staff member with MIT-IBM Watson AI Lab and IBM Research; and Jeremy Gregory, research scientist in the MIT Department of Civil and Environmental Engineering and executive director of the MIT Concrete Sustainability Hub.

    This research is supported by the MIT-IBM Watson AI Lab.

  • An energy-storage solution that flows like soft-serve ice cream

    Batteries made from an electrically conductive mixture the consistency of molasses could help solve a critical piece of the decarbonization puzzle. An interdisciplinary team from MIT has found that an electrochemical technology called a semisolid flow battery can be a cost-competitive form of energy storage and backup for variable renewable energy (VRE) sources such as wind and solar. The group’s research is described in a paper published in Joule.

    “The transition to clean energy requires energy storage systems of different durations for when the sun isn’t shining and the wind isn’t blowing,” says Emre Gençer, a research scientist with the MIT Energy Initiative (MITEI) and a member of the team. “Our work demonstrates that a semisolid flow battery could be a lifesaving as well as economical option when these VRE sources can’t generate power for a day or longer — in the case of natural disasters, for instance.”

    The rechargeable zinc-manganese dioxide (Zn-MnO2) battery the researchers created beat out other long-duration energy storage contenders. “We performed a comprehensive, bottom-up analysis to understand how the battery’s composition affects performance and cost, looking at all the trade-offs,” says Thaneer Malai Narayanan SM ’18, PhD ’21. “We showed that our system can be cheaper than others, and can be scaled up.”

    Narayanan, who conducted this work at MIT as part of his doctorate in mechanical engineering, is the lead author of the paper. Additional authors include Gençer, Yunguang Zhu, a postdoc in the MIT Electrochemical Energy Lab; Gareth McKinley, the School of Engineering Professor of Teaching Innovation and professor of mechanical engineering at MIT; and Yang Shao-Horn, the JR East Professor of Engineering, a professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE), who directs the MIT Electrochemical Energy Lab.

    Going with the flow

    In 2016, Narayanan began his graduate studies, joining the Electrochemical Energy Lab, a hotbed of research on solutions to mitigate climate change, centered on innovative battery chemistry and on decarbonizing fuels and chemicals. One exciting opportunity for the lab: developing low- and no-carbon backup energy systems suitable for grid-scale needs when VRE generation flags.

    While the lab cast a wide net, investigating energy conversion and storage using solid oxide fuel cells, lithium-ion batteries, and metal-air batteries, among others, Narayanan took a particular interest in flow batteries. In these systems, two different chemical (electrolyte) solutions with either negative or positive ions are pumped from separate tanks, meeting across a membrane (called the stack). Here, the ion streams react, converting electrical energy to chemical energy — in effect, charging the battery. When there is demand for this stored energy, the solution gets pumped back to the stack to convert chemical energy into electrical energy again.

    The duration of time that flow batteries can discharge, releasing the stored electricity, is determined by the volume of positively and negatively charged electrolyte solutions streaming through the stack. In theory, as long as these solutions keep flowing, reacting, and converting the chemical energy to electrical energy, the battery systems can provide electricity.
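The volume-to-duration relationship described above can be sketched in a few lines. The tank size, energy density, and load below are hypothetical placeholders, not figures from the MIT system:

```python
# Illustrative sketch: a flow battery's discharge duration scales with the
# volume of electrolyte stored in its tanks. All numbers are hypothetical.
def discharge_hours(tank_volume_l, energy_density_wh_per_l, power_draw_w):
    """Hours of discharge sustainable at a constant power draw."""
    stored_energy_wh = tank_volume_l * energy_density_wh_per_l
    return stored_energy_wh / power_draw_w

# A hypothetical 10,000-liter tank pair at 25 Wh/L backing a 5 kW load:
print(discharge_hours(10_000, 25, 5_000), "hours of backup")  # 50.0 hours of backup
```

The architectural point is that doubling the tank volume doubles the duration without enlarging the stack that sets the power rating.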

    “For backup lasting more than a day, the architecture of flow batteries suggests they can be a cheap option,” says Narayanan. “You recharge the solution in the tanks from sun and wind power sources.” This renders the entire system carbon free.

    But while the promise of flow battery technologies has beckoned for at least a decade, the uneven performance and expense of materials required for these battery systems has slowed their implementation. So, Narayanan set out on an ambitious journey: to design and build a flow battery that could back up VRE systems for a day or more, storing and discharging energy with the same or greater efficiency than backup rivals; and to determine, through rigorous cost analysis, whether such a system could prove economically viable as a long-duration energy option.

    Multidisciplinary collaborators

    To attack this multipronged challenge, Narayanan’s project brought together, in his words, “three giants, scientists all well-known in their fields”:  Shao-Horn, who specializes in chemical physics and electrochemical science, and design of materials; Gençer, who creates detailed economic models of emergent energy systems at MITEI; and McKinley, an expert in rheology, the physics of flow. These three also served as his thesis advisors.

    “I was excited to work in such an interdisciplinary team, which offered a unique opportunity to create a novel battery architecture by designing charge transfer and ion transport within flowable semi-solid electrodes, and to guide battery engineering using techno-economics of such flowable batteries,” says Shao-Horn.

    While other flow battery systems in contention, such as the vanadium redox flow battery, offer the storage capacity and energy density to back up megawatt and larger power systems, they depend on expensive chemical ingredients that make them bad bets for long-duration purposes. Narayanan was on the hunt for less-pricey chemical components that also feature rich energy potential.

    Through a series of bench experiments, the researchers came up with a novel electrode (electrical conductor) for the battery system: a mixture containing dispersed manganese dioxide (MnO2) particles, shot through with an electrically conductive additive, carbon black. This compound reacts with a conductive zinc solution or zinc plate at the stack, enabling efficient electrochemical energy conversion. The fluid properties of this battery are far removed from the watery solutions used by other flow batteries.

    “It’s a semisolid — a slurry,” says Narayanan. “Like thick, black paint, or perhaps a soft-serve ice cream,” suggests McKinley. The carbon black adds the pigment and the electric punch. To arrive at the optimal electrochemical mix, the researchers tweaked their formula many times.

    “These systems have to be able to flow under reasonable pressures, but also have a weak yield stress so that the active MnO2 particles don’t sink to the bottom of the flow tanks when the system isn’t being used, as well as not separate into a watery/oily clear fluid phase and a dense paste of carbon particles and MnO2,” says McKinley.

    This series of experiments informed the technoeconomic analysis. By “connecting the dots between composition, performance, and cost,” says Narayanan, he and Gençer were able to make system-level cost and efficiency calculations for the Zn-MnO2 battery.

    “Assessing the cost and performance of early technologies is very difficult, and this was an example of how to develop a standard method to help researchers at MIT and elsewhere,” says Gençer. “One message here is that when you include the cost analysis at the development stage of your experimental work, you get an important early understanding of your project’s cost implications.”

    In their final round of studies, Gençer and Narayanan compared the Zn-MnO2 battery to a set of equivalent electrochemical battery and hydrogen backup systems, looking at the capital costs of running them at durations of eight, 24, and 72 hours. Their findings surprised them: For battery discharges longer than a day, their semisolid flow battery beat out lithium-ion batteries and vanadium redox flow batteries. This was true even when factoring in the heavy expense of pumping the MnO2 slurry from tank to stack. “I was skeptical, and not expecting this battery would be competitive, but once I did the cost calculation, it was plausible,” says Gençer.
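One way to see why the ranking flips with duration is to split capital cost into a power component (stack, pumps, power electronics, in dollars per kW) and an energy component (tanks and slurry, in dollars per kWh). The coefficients below are made-up placeholders, not numbers from the Joule paper; they exist only to illustrate the crossover:

```python
# Hedged sketch of a duration-dependent capital-cost comparison. A storage
# system's capital cost splits into a power term ($/kW) plus an energy term
# ($/kWh) times duration. All cost coefficients here are hypothetical.
def capital_cost_per_kw(power_cost, energy_cost_per_kwh, duration_h):
    return power_cost + energy_cost_per_kwh * duration_h

systems = {
    # name: (power cost $/kW, energy cost $/kWh) -- illustrative placeholders
    "lithium-ion":       (100, 150),   # cheap power hardware, pricey cells
    "semisolid Zn-MnO2": (1100, 40),   # pricey pumps/stack, cheap slurry
}

for duration in (8, 24, 72):
    costs = {name: capital_cost_per_kw(p, e, duration)
             for name, (p, e) in systems.items()}
    cheapest = min(costs, key=costs.get)
    print(f"{duration:>2} h -> cheapest: {cheapest}  {costs}")
```

With these placeholder coefficients the cheap-energy chemistry overtakes lithium-ion somewhere between eight and 24 hours, mirroring the qualitative result the team reports.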

    But carbon-free battery backup is a very Goldilocks-like business: Different situations require different-duration solutions, whether an anticipated overnight loss of solar power, or a longer-term, climate-based disruption in the grid. “Lithium-ion is great for backup of eight hours and under, but the materials are too expensive for longer periods,” says Gençer. “Hydrogen is super expensive for very short durations, and good for very long durations, and we will need all of them.” This means it makes sense to continue working on the Zn-MnO2 system to see where it might fit in.

    “The next step is to take our battery system and build it up,” says Narayanan, who is working now as a battery engineer. “Our research also points the way to other chemistries that could be developed under the semi-solid flow battery platform, so we could be seeing this kind of technology used for energy storage in our lifetimes.”

    This research was supported by Eni S.p.A. through MITEI. Thaneer Malai Narayanan received an Eni-sponsored MIT Energy Fellowship during his work on the project.

  • SMART researchers develop method for early detection of bacterial infection in crops

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) Interdisciplinary Research Group (IRG) of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL), have developed a rapid Raman spectroscopy-based method for detecting and quantifying early bacterial infection in crops. The Raman spectral biomarkers and diagnostic algorithm enable the noninvasive and early diagnosis of bacterial infections in crop plants, which can be critical for the progress of plant disease management and agricultural productivity.

    Due to the increasing demand for global food supply and security, there is a growing need to improve agricultural production systems and increase crop productivity. Globally, bacterial pathogen infection in crop plants is one of the major contributors to agricultural yield losses. Climate change also adds to the problem by accelerating the spread of plant diseases. Hence, developing methods for rapid and early detection of pathogen-infected crops is important to improve plant disease management and reduce crop loss.

    The breakthrough by SMART and TLL researchers offers a faster and more accurate method to detect bacterial infection in crop plants at an earlier stage, as compared to existing techniques. The new results appear in a paper titled “Rapid detection and quantification of plant innate immunity response using Raman spectroscopy” published in the journal Frontiers in Plant Science.

    “The early detection of pathogen-infected crop plants is a significant step to improve plant disease management,” says Chua Nam Hai, DiSTAP co-lead principal investigator, professor, TLL deputy chair, and co-corresponding author. “It will allow the fast and selective removal of pathogen load and curb the further spread of disease to other neighboring crops.”

    Traditionally, plant disease diagnosis involves a simple visual inspection of plants for disease symptoms and severity. “Visual inspection methods are often ineffective, as disease symptoms usually manifest only at relatively later stages of infection, when the pathogen load is already high and reparative measures are limited. Hence, new methods are required for rapid and early detection of bacterial infection. The idea would be akin to having medical tests to identify human diseases at an early stage, instead of waiting for visual symptoms to show, so that early intervention or treatment can be applied,” says MIT Professor Rajeev Ram, who is a DiSTAP principal investigator and co-corresponding author on the paper.

    While existing techniques, such as current molecular detection methods, can detect bacterial infection in plants, they are often limited in their use. Molecular detection methods largely depend on the availability of pathogen-specific gene sequences or antibodies to identify bacterial infection in crops; the implementation is also time-consuming and nonadaptable for on-site field application due to the high cost and bulky equipment required, making it impractical for use in agricultural farms.

    “At DiSTAP, we have developed a quantitative Raman spectroscopy-based algorithm that can help farmers to identify bacterial infection rapidly. The developed diagnostic algorithm makes use of Raman spectral biomarkers and can be easily implemented in cloud-based computing and prediction platforms. It is more effective than existing techniques as it enables accurate identification and early detection of bacterial infection, both of which are crucial to saving crop plants that would otherwise be destroyed,” explains Gajendra Pratap Singh, scientific director and principal investigator at DiSTAP and co-lead author.

    A portable Raman system can be used on farms and provides farmers with an accurate and simple yes-or-no response when used to test for the presence of bacterial infections in crops. The development of this rapid and noninvasive method could improve plant disease management and have a transformative impact on agricultural farms by efficiently reducing agricultural yield loss and increasing productivity.
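The yes-or-no readout can be illustrated with a toy biomarker-ratio check. The peak positions, the assumption that infection suppresses one peak relative to a reference, and the threshold are all hypothetical stand-ins; the actual biomarkers and algorithm are described in the Frontiers in Plant Science paper:

```python
# Toy sketch of a threshold diagnostic on Raman spectral biomarkers.
# Peak positions (cm^-1), the suppression assumption, and the cutoff are
# hypothetical illustrations, not the published DiSTAP algorithm.
def diagnose(spectrum, biomarker_shift=1525, reference_shift=1005, threshold=0.8):
    """Return 'infected' if the biomarker-to-reference peak ratio drops below threshold.

    spectrum: dict mapping Raman shift (cm^-1) to measured peak intensity.
    """
    ratio = spectrum[biomarker_shift] / spectrum[reference_shift]
    return "infected" if ratio < threshold else "healthy"

healthy_leaf  = {1525: 950, 1005: 1000}   # ratio 0.95 -> above cutoff
infected_leaf = {1525: 600, 1005: 1000}   # ratio 0.60 -> below cutoff
print(diagnose(healthy_leaf), diagnose(infected_leaf))  # healthy infected
```

A field device would apply a calibrated version of such a rule to each scanned plant and report only the binary verdict to the farmer.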

    “Using the diagnostic algorithm method, we experimented on several edible plants such as choy sum,” says DiSTAP and TLL principal investigator and co-corresponding author Rajani Sarojam. “The results showed that the Raman spectroscopy-based method can swiftly detect and quantify innate immunity response in plants infected with bacterial pathogens. We believe that this technology will be beneficial for agricultural farms to increase their productivity by reducing their yield loss due to plant diseases.”

    The researchers are currently working on the development of high-throughput, custom-made portable or hand-held Raman spectrometers that will allow Raman spectral analysis to be quickly and easily performed on field-grown crops.

    SMART and TLL discovered the Raman spectral biomarkers and developed the diagnostic algorithm; TLL also confirmed and validated the detection method using mutant plants. The research is carried out by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT and the NRF in 2007. The first entity in CREATE developed by NRF, SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, DiSTAP, Future Urban Mobility, and Low Energy Electronic Systems. SMART research is funded by the NRF under the CREATE program.

    Led by Professor Michael Strano of MIT and Professor Chua Nam Hai of Temasek Life Sciences Laboratory, the DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biomaterial technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients. Scientists from MIT, TLL, Nanyang Technological University, and National University of Singapore are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high-yield density production, and drought and pathogen resistance; and applying these technologies to improve urban farming.

  • Timber or steel? Study helps builders reduce carbon footprint of truss structures

    Buildings are a big contributor to global warming, not just in their ongoing operations but in the materials used in their construction. Truss structures — those crisscross arrays of diagonal struts used throughout modern construction, in everything from antenna towers to support beams for large buildings — are typically made of steel or wood or a combination of both. But little quantitative research has been done on how to pick the right materials to minimize these structures’ contribution to global warming.

    The “embodied carbon” in a construction material includes the fuel used in the material’s production (for mining and smelting steel, for example, or for felling and processing trees) and in transporting the materials to a site. It also includes the equipment used for the construction itself.

    Now, researchers at MIT have done a detailed analysis and created a set of computational tools to enable architects and engineers to design truss structures in a way that can minimize their embodied carbon while maintaining all needed properties for a given building application. While in general wood produces a much lower carbon footprint, using steel in places where its properties can provide maximum benefit can yield an optimized result, they say.

    The analysis is described in a paper published today in the journal Engineering Structures, by graduate student Ernest Ching and MIT assistant professor of civil and environmental engineering Josephine Carstensen.

    “Construction is a huge greenhouse gas emitter that has kind of been flying under the radar for the past decades,” says Carstensen. But in recent years building designers “are starting to be more focused on how to not just reduce the operating energy associated with building use, but also the important carbon associated with the structure itself.” And that’s where this new analysis comes in.

    The two main options in reducing the carbon emissions associated with truss structures, she says, are substituting materials or changing the structure. However, there has been “surprisingly little work” on tools to help designers figure out emissions-minimizing strategies for a given situation, she says.

    The new system makes use of a technique called topology optimization, which allows for the input of basic parameters, such as the amount of load to be supported and the dimensions of the structure, and can be used to produce designs optimized for different characteristics, such as weight, cost, or, in this case, global warming impact.

    Wood performs very well under forces of compression, but not as well as steel when it comes to tension — that is, a tendency to pull the structure apart. Carstensen says that in general, wood is far better than steel in terms of embodied carbon, so “especially if you have a structure that doesn’t have any tension, then you should definitely only use timber” in order to minimize emissions. One tradeoff is that “the weight of the structure is going to be bigger than it would be with steel,” she says.
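The material-substitution idea above can be sketched as a simple per-member rule: timber where a member is in compression, steel where it is in tension, then a tally of embodied carbon. The carbon coefficients and member data below are hypothetical illustrations, not values from the Engineering Structures paper:

```python
# Minimal sketch of compression/tension-driven material assignment for a truss.
# Embodied-carbon coefficients and member forces/masses are made-up examples.
EMBODIED_CO2 = {"timber": 0.1, "steel": 1.5}  # kg CO2e per kg of material (illustrative)

def assign_materials(member_forces_kn):
    """Negative axial force = compression -> timber; positive = tension -> steel."""
    return ["timber" if f <= 0 else "steel" for f in member_forces_kn]

def total_carbon(materials, member_masses_kg):
    return sum(EMBODIED_CO2[m] * mass for m, mass in zip(materials, member_masses_kg))

forces = [-120, 80, -60, 45]   # kN in four truss members (hypothetical)
masses = [50, 20, 40, 15]      # kg of material per member (hypothetical)
choice = assign_materials(forces)
print(choice, f"{total_carbon(choice, masses):.1f} kg CO2e")
# ['timber', 'steel', 'timber', 'steel'] 61.5 kg CO2e
```

The team's topology-optimization tools go much further, shaping the truss geometry itself, but this captures the basic trade the quote describes.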

    The tools they developed, which were the basis for Ching’s master’s thesis, can be applied at different stages, either in the early planning phase of a structure, or later on in the final stages of a design.

    As an exercise, the team developed a proposal for reengineering several trusses using these optimization tools, and demonstrated that a significant savings in embodied greenhouse gas emissions could be achieved with no loss of performance. While they have shown that improvements of at least 10 percent can be achieved, she says those estimates are “not exactly apples to apples” and likely savings could actually be two to three times that.

    “It’s about choosing materials more smartly,” she says, for the specifics of a given application. Often in existing buildings “you will have timber where there’s compression, and where that makes sense, and then it will have really skinny steel members, in tension, where that makes sense. And that’s also what we see in our design solutions that are suggested, but perhaps we can see it even more clearly.” The tools are not ready for commercial use though, she says, because they haven’t yet added a user interface.

    Carstensen sees a trend toward increasing use of timber in large construction, which represents an important potential for reducing the world’s overall carbon emissions. “There’s a big interest in the construction industry in mass timber structures, and this speaks right into that area. So, the hope is that this would make inroads into the construction business and actually make a dent in that very large contribution to greenhouse gas emissions.”

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group bicycles that looked similar together and explore the design space. They then created machine learning models that could successfully predict what components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method which he calls “PaDGAN,” performance augmented diverse GAN.

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.
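The quality-plus-diversity idea can be illustrated with a determinantal-point-process-style score: a batch of designs scores highly when its members are both high-performing and mutually dissimilar, which is the intuition behind performance-augmented diverse generation. This is a toy illustration of the principle, not the PaDGAN training objective itself, and all design vectors are made up:

```python
import math

# Toy quality-plus-diversity score: a quality-weighted similarity kernel whose
# determinant grows when designs are both good and spread out in design space.
def _det(matrix):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in matrix]
    n, det = len(m), 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            return 0.0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det

def quality_diversity_score(designs, qualities, length_scale=1.0):
    """designs: list of design vectors; qualities: positive per-design scores."""
    def sim(u, v):  # RBF similarity between two design vectors
        d2 = sum((a - b) ** 2 for a, b in zip(u, v))
        return math.exp(-d2 / (2 * length_scale ** 2))
    n = len(designs)
    L = [[qualities[i] * sim(designs[i], designs[j]) * qualities[j]
          for j in range(n)] for i in range(n)]
    return _det(L)

diverse = [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]]   # well-spread designs
clumped = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]   # near-duplicates
q = [1.0, 1.0, 1.0]
print(quality_diversity_score(diverse, q) > quality_diversity_score(clumped, q))  # True
```

Folding a term like this into a generator's loss rewards batches that are simultaneously high-quality, diverse, and novel, which is the behavior Ahmed describes.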

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “PhotoChromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes are airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
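A heavily simplified version of that per-pixel scheduling can be sketched under the hypothetical assumption that each colored LED bleaches one dye channel at a constant rate. The real PhotoChromeleon optimization models the coupled absorption spectra of the dyes, so the rates and channel mapping below are placeholders:

```python
# Toy model of per-pixel exposure scheduling. Assumes (hypothetically) each LED
# bleaches exactly one dye at a constant rate; the actual system solves a more
# complex coupled-optics optimization.
BLEACH_RATE = {"red": 0.05, "green": 0.04, "blue": 0.06}   # saturation units/sec (made up)
ERASES = {"red": "cyan", "green": "magenta", "blue": "yellow"}  # light -> dye it bleaches

def exposure_times(current, target):
    """Seconds of R/G/B exposure for one pixel; saturations are in [0, 1].

    After a UV 'reset' every dye starts fully saturated, so we only bleach down.
    """
    times = {}
    for light, dye in ERASES.items():
        delta = current[dye] - target[dye]
        if delta < 0:
            raise ValueError(f"cannot add {dye}; re-saturate with UV first")
        times[light] = delta / BLEACH_RATE[light]
    return times

# One freshly UV-saturated pixel (all dyes at 1.0), aiming for a custom mix:
print(exposure_times({"cyan": 1.0, "magenta": 1.0, "yellow": 1.0},
                     {"cyan": 0.1, "magenta": 0.5, "yellow": 0.9}))
```

Running such a schedule for every pixel, then flooding the object with UV to reset it, is what lets the same surface be reprogrammed again and again.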

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the PhotoChromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, are developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, leading not only to millions of deaths each year, but also worsening the world’s greenhouse gas emission problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to carry out experimental validation on stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta-learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
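The core idea behind real-time adaptation can be seen in a toy example. The sketch below is purely illustrative and is not the authors' algorithm: a simple controller tracks a target while estimating an unknown constant disturbance (standing in for wind or ice) online from the tracking error, in the spirit of classical adaptive control. All variable names and gains are assumptions chosen for the example.

```python
# Illustrative adaptive-control sketch: cancel an unknown disturbance
# by estimating it online from the tracking error.
d_true = 1.5        # unknown disturbance acting on the system (e.g., wind)
d_hat = 0.0         # online estimate, updated from the tracking error
x, target = 0.0, 1.0
gain, adapt_rate, dt = 2.0, 5.0, 0.01

for _ in range(2000):
    error = x - target
    u = -gain * error - d_hat          # feedback plus learned cancellation
    x += (u + d_true) * dt             # simple first-order dynamics
    d_hat += adapt_rate * error * dt   # adaptation law driven by the error

print(round(d_hat, 2))  # the estimate converges toward the true disturbance
```

Because the estimate is driven by the error the controller actually experiences, the system adapts to the disturbance without ever being told its value, which is the "control-oriented" flavor of learning described above in miniature.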

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”
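The uncertainty-gated switching that Azizan describes can be sketched in a few lines. The example below is a simplified illustration, not the SCOD algorithm: a small ensemble of linear models stands in for a deep network, and disagreement among ensemble members serves as the uncertainty proxy. The function names and the threshold value are assumptions made for the example.

```python
# Illustrative sketch of uncertainty-gated control: act on the model when
# the ensemble agrees, fall back to a safe policy when it does not.
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(n_models=5):
    """Fit n linear models to noisy samples of y = 2x (toy dynamics)."""
    models = []
    for _ in range(n_models):
        x = rng.uniform(-1.0, 1.0, 50)        # training inputs near the origin
        y = 2.0 * x + rng.normal(0, 0.1, 50)  # noisy targets
        models.append(np.polyfit(x, y, 1)[0])
    return models

def predict_with_uncertainty(models, x):
    preds = np.array([m * x for m in models])
    return preds.mean(), preds.std()          # mean prediction, disagreement

def choose_action(models, x, threshold=0.5):
    pred, unc = predict_with_uncertainty(models, x)
    if unc > threshold:
        return 0.0, "safe policy"             # high uncertainty: fall back
    return -pred, "learned policy"            # low uncertainty: trust the model

models = make_ensemble()
print(choose_action(models, x=0.5)[1])    # in-distribution: members agree
print(choose_action(models, x=100.0)[1])  # far outside training data: disagree
```

Near the training data the models agree and the learned policy is used; far outside it they diverge, the uncertainty crosses the threshold, and the system falls back to the safe policy, which is the behavior the paragraph above describes.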

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD — Sketching Curvature for Out-of-Distribution Detection. The framework can be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD. More