More stories

  • Exploring the nanoworld of biogenic gems

    A new research collaboration with The Bahrain Institute for Pearls and Gemstones (DANAT) will seek to develop advanced characterization tools for analyzing the properties of pearls and to explore technologies for assigning unique identifiers to individual pearls.

    The three-year project will be led by Admir Mašić, associate professor of civil and environmental engineering, in collaboration with Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and professor of electrical engineering and computer science.

    “Pearls are extremely complex and fascinating hierarchically ordered biological materials that are formed by a wide range of different species,” says Mašić. “Working with DANAT provides us a unique opportunity to apply our lab’s multi-scale materials characterization tools to identify potentially species-specific pearl fingerprints, while simultaneously addressing scientific research questions regarding the underlying biomineralization processes that could inform advances in sustainable building materials.”

    DANAT is a gemological laboratory specializing in the testing and study of natural pearls, a reflection of Bahrain’s pearling history and its desire to protect and advance that heritage. DANAT’s gemologists support clients and students through pearl, gemstone, and diamond identification services, as well as educational courses.

    Like many other precious gemstones, pearls can now be created through human intervention and scientific experimentation, says Noora Jamsheer, chief executive officer at DANAT. Over a century ago, cultured pearls entered the market as a competitor to natural pearls, similar in appearance but different in value.

    “Gemological labs have been innovating scientific testing methods to differentiate between natural pearls and all other pearls that exist because of direct or indirect human intervention. Today the world knows natural pearls and cultured pearls. However, there are also pearls that fall in between these two categories,” says Jamsheer. “DANAT has the responsibility, as the leading gemological laboratory for pearl testing, to take the initiative necessary to ensure that testing methods keep pace with advances in the science of pearl cultivation.”

    Titled “Exploring the Nanoworld of Biogenic Gems,” the project will aim to improve the process of testing and identifying pearls by pinpointing morphological, micro-structural, optical, and chemical features sufficient to distinguish a pearl’s area of origin, method of growth, or both. MIT.nano, MIT’s open-access center for nanoscience and nanoengineering, will be the organizational home for the project, where Mašić and his team will use the facility’s state-of-the-art characterization tools.

    In addition to discovering new methodologies for establishing a pearl’s origin, the project aims to utilize machine learning to automate pearl classification. Furthermore, researchers will investigate techniques to create a unique identifier associated with an individual pearl.

    The initial sponsored research project is expected to last three years, with the potential for continued collaboration based on key findings, or for building on the project’s success to open new avenues of research into the structure, properties, and growth of pearls.

  • Low-cost device can measure air pollution anywhere

    Air pollution is a major public health problem: The World Health Organization has estimated that it leads to over 4 million premature deaths worldwide annually. Still, it is not always extensively measured. But now an MIT research team is rolling out an open-source version of a low-cost, mobile pollution detector that could enable people to track air quality more widely.

    The detector, called Flatburn, can be made by 3D printing or by ordering inexpensive parts. The researchers have now tested and calibrated it in relation to existing state-of-the-art machines, and are publicly releasing all the information about it — how to build it, use it, and interpret the data.

    “The goal is for community groups or individual citizens anywhere to be able to measure local air pollution, identify its sources, and, ideally, create feedback loops with officials and stakeholders to create cleaner conditions,” says Carlo Ratti, director of MIT’s Senseable City Lab. 

    “We’ve been doing several pilots around the world, and we have refined a set of prototypes, with hardware, software, and protocols, to make sure the data we collect are robust from an environmental science point of view,” says Simone Mora, a research scientist at Senseable City Lab and co-author of a newly published paper detailing the scanner’s testing process. The Flatburn device is part of a larger project, known as City Scanner, using mobile devices to better understand urban life.

    “Hopefully with the release of the open-source Flatburn we can get grassroots groups, as well as communities in less developed countries, to follow our approach and build and share knowledge,” says An Wang, a researcher at Senseable City Lab and another of the paper’s co-authors.

    The paper, “Leveraging Machine Learning Algorithms to Advance Low-Cost Air Sensor Calibration in Stationary and Mobile Settings,” appears in the journal Atmospheric Environment.

    In addition to Wang, Mora, and Ratti, the study’s authors are: Yuki Machida, a former research fellow at Senseable City Lab; Priyanka deSouza, an assistant professor of urban and regional planning at the University of Colorado at Denver; Tiffany Duhl, a researcher with the Massachusetts Department of Environmental Protection and a Tufts University research associate at the time of the project; Neelakshi Hudda, a research assistant professor at Tufts University; John L. Durant, a professor of civil and environmental engineering at Tufts University; and Fabio Duarte, principal research scientist at Senseable City Lab.

    The Flatburn concept at Senseable City Lab dates back to about 2017, when MIT researchers began prototyping a mobile pollution detector, originally to be deployed on garbage trucks in Cambridge, Massachusetts. The detectors are battery-powered and rechargeable, either from power sources or a solar panel, with data stored on a card in the device that can be accessed remotely.

    The current extension of that project involved testing the devices in New York City and the Boston area, by seeing how they performed in comparison to already-working pollution detection systems. In New York, the researchers used 5 detectors to collect 1.6 million data points over four weeks in 2021, working with state officials to compare the results. In Boston, the team used mobile sensors, evaluating the Flatburn devices against a state-of-the-art system deployed by Tufts University along with a state agency.

    In both cases, the detectors were set up to measure concentrations of fine particulate matter as well as nitrogen dioxide, over an area of about 10 meters. Fine particulate matter refers to tiny particles often associated with combustion, from power plants, internal combustion engines in autos, fires, and more.

    The research team found that the mobile detectors estimated somewhat lower concentrations of fine particulate matter than the devices already in use, but with a strong enough correlation so that, with adjustments for weather conditions and other factors, the Flatburn devices can produce reliable results.
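
    As a rough illustration of the kind of adjustment described above, the sketch below fits a correction model that maps raw low-cost PM2.5 readings plus weather covariates to reference-grade measurements from a co-location campaign. The file name, column names, and choice of a random-forest regressor are illustrative assumptions, not the calibration procedure published by the team.

    ```python
    # Illustrative calibration of a low-cost PM2.5 sensor against a reference
    # instrument, using co-located readings and weather covariates.
    # The data file and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("colocated_readings.csv")  # hypothetical co-location dataset

    features = ["sensor_pm25", "temperature_c", "relative_humidity"]
    X, y = df[features], df["reference_pm25"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Learn a correction from raw sensor output (plus weather) to the reference value.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Corrected readings should track the reference instrument more closely
    # than the raw sensor output does.
    print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
    ```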

    “After following their deployment for a few months, we can confidently say our low-cost monitors should behave the same way [as standard detectors],” Wang says. “We have a big vision, but we still have to make sure the data we collect is valid and can be used for regulatory and policy purposes.”

    Duarte adds: “If you follow these procedures with low-cost sensors you can still acquire good enough data to go back to [environmental] agencies with it, and say, ‘Let’s talk.’”

    The researchers did find that using the units in a mobile setting — on top of automobiles — means they will currently have an operating life of six months. They also identified a series of potential issues that people will have to deal with when using the Flatburn detectors generally. These include what the research team calls “drift,” the gradual changing of the detector’s readings over time, as well as “aging,” the more fundamental deterioration in a unit’s physical condition.

    Still, the researchers believe the units will function well, and they are providing complete instructions in their release of Flatburn as an open-source tool. That even includes guidance for working with officials, communities, and stakeholders to process the results and attempt to shape action.

    “It’s very important to engage with communities, to allow them to reflect on sources of pollution,” says Mora. 

    “The original idea of the project was to democratize environmental data, and that’s still the goal,” Duarte adds. “We want people to have the skills to analyze the data and engage with communities and officials.”

  • Minimizing electric vehicles’ impact on the grid

    National and global plans to combat climate change include increasing the electrification of vehicles and the percentage of electricity generated from renewable sources. But some projections show that these trends might require costly new power plants to meet peak loads in the evening when cars are plugged in after the workday. What’s more, overproduction of power from solar farms during the daytime can waste valuable electricity-generation capacity.

    In a new study, MIT researchers have found that it’s possible to mitigate or eliminate both these problems without the need for advanced technological systems of connected devices and real-time communications, which could add to costs and energy consumption. Instead, placing charging stations for electric vehicles (EVs) in strategic locations, rather than letting them spring up anywhere, and setting up systems to delay the start of car charging could potentially make all the difference.

    The study, published today in the journal Cell Reports Physical Science, is by Zachary Needell PhD ’22, postdoc Wei Wei, and Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society.

    In their analysis, the researchers used data collected in two sample cities: New York and Dallas. The data were gathered from, among other sources, anonymized records collected via onboard devices in vehicles, and surveys that carefully sampled populations to cover variable travel behaviors. They showed the times of day cars are used and for how long, and how much time the vehicles spend at different kinds of locations — residential, workplace, shopping, entertainment, and so on.

    The findings, Trancik says, “round out the picture on the question of where to strategically locate chargers to support EV adoption and also support the power grid.”

    Better availability of charging stations at workplaces, for example, could help to soak up peak power being produced at midday from solar power installations, which might otherwise go to waste because it is not economical to build enough battery or other storage capacity to save all of it for later in the day. Thus, workplace chargers can provide a double benefit, helping to reduce the evening peak load from EV charging and also making use of the solar electricity output.

    These effects on the electric power system are considerable, especially if the system must meet charging demands for a fully electrified personal vehicle fleet alongside the peaks in other demand for electricity, for example on the hottest days of the year. If unmitigated, the evening peaks in EV charging demand could require installing upwards of 20 percent more power-generation capacity, the researchers say.

    “Slow workplace charging can be more preferable than faster charging technologies for enabling a higher utilization of midday solar resources,” Wei says.

    Meanwhile, with delayed home charging, each EV charger could be accompanied by a simple app to estimate the time to begin its charging cycle so that it charges just before it is needed the next day. Unlike other proposals that require a centralized control of the charging cycle, such a system needs no interdevice communication of information and can be preprogrammed — and can accomplish a major shift in the demand on the grid caused by increasing EV penetration. The reason it works so well, Trancik says, is because of the natural variability in driving behaviors across individuals in a population.
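
    A minimal sketch of how such a preprogrammed delay could be computed locally, with no communication between chargers, is shown below: the charger estimates the charging time required, works backward from the next morning’s departure time, and adds a small random offset so that vehicles do not all start at once. The function, parameter values, and jitter scheme are illustrative assumptions rather than the scheme analyzed in the study.

    ```python
    # Illustrative "delayed home charging" start-time calculation.
    # All parameters are hypothetical; the study's actual scheme may differ.
    from datetime import datetime, timedelta
    import random

    def charging_start_time(plug_in: datetime,
                            departure: datetime,
                            energy_needed_kwh: float,
                            charger_power_kw: float,
                            jitter_minutes: int = 30) -> datetime:
        """Start as late as possible so the battery is full just before departure."""
        hours_needed = energy_needed_kwh / charger_power_kw
        latest_start = departure - timedelta(hours=hours_needed)
        # A small random offset spreads start times across vehicles and
        # avoids creating a new synchronized peak.
        latest_start -= timedelta(minutes=random.randint(0, jitter_minutes))
        return max(plug_in, latest_start)  # never start before the car is plugged in

    start = charging_start_time(
        plug_in=datetime(2023, 3, 15, 18, 30),
        departure=datetime(2023, 3, 16, 7, 45),
        energy_needed_kwh=30.0,
        charger_power_kw=7.2,
    )
    print("Begin charging at:", start)
    ```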

    By “home charging,” the researchers aren’t only referring to charging equipment in individual garages or parking areas. They say it’s essential to make charging stations available in on-street parking locations and in apartment building parking areas as well.

    Trancik says the findings highlight the value of combining the two measures — workplace charging and delayed home charging — to reduce peak electricity demand, store solar energy, and conveniently meet drivers’ charging needs on all days. As the team showed in earlier research, home charging can be a particularly effective component of a strategic package of charging locations; workplace charging, they have found, is not a good substitute for home charging for meeting drivers’ needs on all days.

    “Given that there’s a lot of public money going into expanding charging infrastructure,” Trancik says, “how do you incentivize the location such that this is going to be efficiently and effectively integrated into the power grid without requiring a lot of additional capacity expansion?” This research offers some guidance to policymakers on where to focus rules and incentives.

    “I think one of the fascinating things about these findings is that by being strategic you can avoid a lot of physical infrastructure that you would otherwise need,” she adds. “Your electric vehicles can displace some of the need for stationary energy storage, and you can also avoid the need to expand the capacity of power plants, by thinking about the location of chargers as a tool for managing demands — where they occur and when they occur.”

    Delayed home charging could make a surprising amount of difference, the team found. “It’s basically incentivizing people to begin charging later. This can be something that is preprogrammed into your chargers. You incentivize people to delay the onset of charging by a bit, so that not everyone is charging at the same time, and that smooths out the peak.”

    Such a program would require some advance commitment on the part of participants. “You would need to have enough people committing to this program in advance to avoid the investment in physical infrastructure,” Trancik says. “So, if you have enough people signing up, then you essentially don’t have to build those extra power plants.”

    It’s not a given that all of this would line up just right, and putting in place the right mix of incentives would be crucial. “If you want electric vehicles to act as an effective storage technology for solar energy, then the [EV] market needs to grow fast enough in order to be able to do that,” Trancik says.

    To best use public funds to help make that happen, she says, “you can incentivize charging installations, which would go through ideally a competitive process — in the private sector, you would have companies bidding for different projects, but you can incentivize installing charging at workplaces, for example, to tap into both of these benefits.” Chargers people can access when they are parked near their residences are also important, Trancik adds, but for other reasons. Home charging is one of the ways to meet charging needs while avoiding inconvenient disruptions to people’s travel activities.

    The study was supported by the European Regional Development Fund Operational Program for Competitiveness and Internationalization, the Lisbon Portugal Regional Operation Program, and the Portuguese Foundation for Science and Technology.

  • Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form ozone-depleting chlorine monoxide.

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 155 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.
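
    For context, the textbook route by which chlorine locked up in reservoir species is activated on particle surfaces and goes on to destroy ozone can be written as below; the smoke-particle pathway reported in this study is analogous but proceeds at the much warmer temperatures of the mid-latitude stratosphere.

    ```latex
    % Standard chlorine activation and catalytic ozone destruction
    % (textbook stratospheric chemistry, not the newly identified smoke reaction).
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    \begin{align*}
    \mathrm{HCl} + \mathrm{ClONO_2} &\longrightarrow \mathrm{Cl_2} + \mathrm{HNO_3} && \text{(on particle surfaces)}\\
    \mathrm{Cl_2} + h\nu &\longrightarrow 2\,\mathrm{Cl} && \text{(photolysis)}\\
    \mathrm{Cl} + \mathrm{O_3} &\longrightarrow \mathrm{ClO} + \mathrm{O_2} && \text{(ozone destruction)}
    \end{align*}
    \end{document}
    ```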

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction was possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon guesses there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.

  • Nanotube sensors are capable of detecting and distinguishing gibberellin plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their collaborators from Temasek Life Sciences Laboratory have developed the first-ever nanosensor that can detect and distinguish gibberellins (GAs), a class of hormones in plants that are important for growth. The novel nanosensors are nondestructive, unlike conventional collection methods, and have been successfully tested in living plants. Applied in the field for early-stage plant stress monitoring, the sensors could prove transformative for agriculture and plant biotechnology, giving farmers interested in high-tech precision agriculture and crop management a valuable tool to optimize yield.

    The researchers designed near-infrared fluorescent carbon nanotube sensors that are capable of detecting and distinguishing two plant hormones, GA3 and GA4. Belonging to a class of plant hormones known as gibberellins, GA3 and GA4 are diterpenoid phytohormones produced by plants that play an important role in modulating diverse processes involved in plant growth and development. GAs are thought to have been among the driving forces behind the “green revolution” of the 1960s, which is credited with averting famine and saving many lives worldwide. The continued study of gibberellins could lead to further breakthroughs in agricultural science and have implications for food security.

    Climate change, global warming, and rising sea levels cause farmland to become contaminated by saltwater, raising soil salinity. In turn, high soil salinity is known to negatively regulate GA biosynthesis and promote GA metabolism, resulting in reduced GA content in plants. The new nanosensors developed by the SMART researchers allow for the study of GA dynamics in living plants under salinity stress at a very early stage, potentially enabling farmers to make early interventions once the sensors are eventually applied in the field. This forms the basis of early-stage stress detection.

    Currently, methods to detect GA3 and GA4 typically require mass spectrometry-based analysis, a time-consuming and destructive process. In contrast, the new sensors developed by the researchers are highly selective for the respective GAs and offer real-time, in vivo monitoring of changes in GA levels across a broad range of plant species.

    Described in a paper titled “Near-Infrared Fluorescent Carbon Nanotube Sensors for the Plant Hormone Family Gibberellins” published in the journal Nano Letters, the research represents a breakthrough for early-stage plant stress detection and holds tremendous potential to advance plant biotechnology and agriculture. This paper builds on previous research by the team at SMART DiSTAP on single-walled carbon nanotube-based nanosensors using the corona phase molecular recognition (CoPhMoRe) platform.

    Based on the CoPhMoRe concept introduced by the lab of MIT Professor Michael Strano, the novel sensors are able to detect GA kinetics in the roots of a variety of model and non-model plant species, including Arabidopsis, lettuce, and basil, as well as GA accumulation during lateral root emergence, highlighting the importance of GA in root system architecture. This was made possible by the researchers’ related development of a new coupled Raman/near-infrared fluorimeter that enables self-referencing of nanosensor near-infrared fluorescence with its Raman G-band, a hardware innovation that removes the need for a separate reference nanosensor and greatly simplifies the instrumentation requirements by using a single optical channel to measure hormone concentration.
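
    As a rough sketch of the self-referencing idea (not the group’s actual processing pipeline), dividing the nanosensor’s near-infrared fluorescence by its own Raman G-band intensity from the same optical channel yields a signal that does not depend on how much sensor is present or how strong the excitation is; that ratio can then be converted to an apparent hormone concentration through a calibration curve. The function names and the linear calibration constants below are assumptions for illustration.

    ```python
    # Illustrative self-referenced readout: normalize NIR fluorescence by the
    # nanotube's own Raman G-band so the signal is independent of sensor amount
    # and excitation power. The calibration constants are hypothetical placeholders.
    import numpy as np

    def self_referenced_signal(nir_fluorescence: np.ndarray,
                               raman_g_band: np.ndarray) -> np.ndarray:
        """Ratio of fluorescence to Raman G-band measured in the same optical channel."""
        return nir_fluorescence / raman_g_band

    def estimate_concentration(signal_ratio: np.ndarray,
                               baseline_ratio: float,
                               sensitivity: float) -> np.ndarray:
        """Map the change in self-referenced signal to an apparent concentration
        using a simple linear calibration (hypothetical constants)."""
        return (signal_ratio - baseline_ratio) / sensitivity

    ratios = self_referenced_signal(np.array([1.20, 1.35, 1.50]),
                                    np.array([0.40, 0.41, 0.40]))
    print(estimate_concentration(ratios, baseline_ratio=3.0, sensitivity=0.05))
    ```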

    Using the reversible GA nanosensors, the researchers detected increased endogenous GA levels in mutant plants producing greater amounts of GA20ox1, a key enzyme in GA biosynthesis, as well as decreased GA levels in plants under salinity stress. When exposed to salinity stress, researchers also found that lettuce growth was severely stunted — an indication that only became apparent after 10 days. In contrast, the GA nanosensors reported decreased GA levels after just six hours, demonstrating their efficacy as a much earlier indicator of salinity stress.

    “Our CoPhMoRe technique allows us to create nanoparticles that act like natural antibodies in that they can recognize and lock onto specific molecules. But they tend to be far more stable than alternatives. We have used this method to successfully create nanosensors for plant signals such as hydrogen peroxide and heavy-metal pollutants like arsenic in plants and soil,” says Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT who is co-corresponding author and DiSTAP co-lead principal investigator. “The method works to create sensors for organic molecules like synthetic auxin — an important plant hormone — as we have shown. This latest breakthrough now extends this success to a plant hormone family called gibberellins — an exceedingly difficult one to recognize.”

    Strano adds: “The resulting technology offers a rapid, real-time, and in vivo method to monitor changes in GA levels in virtually any plant, and can replace current sensing methods which are laborious, destructive, species-specific, and much less efficient.”

    Mervin Chun-Yi Ang, associate scientific director at DiSTAP and co-first author of the paper, says, “More than simply a breakthrough in plant stress detection, we have also demonstrated a hardware innovation in the form of a new coupled Raman/NIR fluorimeter that enabled self-referencing of SWNT sensor fluorescence with its Raman G-band, representing a major advance in the translation of our nanosensing tool sets to the field. In the near future, our sensors can be combined with low-cost electronics, portable optodes, or microneedle interfaces for industrial use, transforming how the industry screens for and mitigates plant stress in food crops and potentially improving growth and yield.”

    The new sensors could yet have a variety of industrial applications and use cases. Daisuke Urano, a Temasek Life Sciences Laboratory principal investigator, National University of Singapore (NUS) adjunct assistant professor, and co-corresponding author of the paper, explains, “GAs are known to regulate a wide range of plant development processes, from shoot, root, and flower development, to seed germination and plant stress responses. With the commercialization of GAs, these plant hormones are also sold to growers and farmers as plant growth regulators to promote plant growth and seed germination. Our novel GA nanosensors could be applied in the field for early-stage plant stress monitoring, and also be used by growers and farmers to track the uptake or metabolism of GA in their crops.”

    The design and development of the nanosensors, creation and validation of the coupled Raman/near infrared fluorimeter and related image/data processing algorithms, as well as statistical analysis of readouts from plant sensors for this study were performed by SMART and MIT. The Temasek Life Sciences Laboratory was responsible for the design, execution, and analysis of plant-related studies, including validation of nanosensors in living plants.

    This research was carried out by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program. The DiSTAP program, led by Strano and Singapore co-lead principal investigator Professor Chua Nam Hai, addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biomaterial technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients. Scientists from MIT, Temasek Life Sciences Laboratory, Nanyang Technological University (NTU) and NUS are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high yield density production, and drought and pathogen resistance, and applying these technologies to improve urban farming.

    SMART was established by MIT and the National Research Foundation of Singapore in 2007. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, DiSTAP, Future Urban Mobility, and Low Energy Electronic Systems.

  • Aviva Intveld named 2023 Gates Cambridge Scholar

    MIT senior Aviva Intveld has won the prestigious Gates Cambridge Scholarship, which offers students an opportunity to pursue graduate study in the field of their choice at Cambridge University in the U.K. Intveld will join the other 23 U.S. citizens selected for the 2023 class of scholars.

    Intveld, from Los Angeles, is majoring in earth, atmospheric, and planetary sciences, and minoring in materials science and engineering with concentrations in geology, geochemistry, and archaeology. Her research interests span the intersections among those fields to better understand how the natural environments of the past have shaped human movement and decision-making.

    At Cambridge, Intveld will undertake a research MPhil in earth sciences at the Godwin Lab for Paleoclimate Research, where she will investigate the impact of past climate on the ancient Maya in northwest Yucatán via cave sediment records. She hopes to pursue an impact-oriented research career in paleoclimate and paleoenvironment reconstruction and ultimately apply the lessons learned from her research to inform modern climate policy. She is particularly passionate about sustainable mining of energy-critical elements and addressing climate change inequality in her home state of California.

    Intveld’s work at Cambridge will build upon her extensive research experience at MIT. She currently works in the McGee Lab reconstructing the Late Pleistocene-Early Holocene paleoclimate of northeastern Mexico to provide a climatic background to the first peopling of the Americas. Previously, she explored the influence of mountain plate tectonics on biodiversity in the Perron Lab. During a summer research position at the University of Haifa in Israel she analyzed the microfossil assemblage of an offshore sediment core for paleo-coastal reconstruction.

    Last summer, Intveld interned at the National Oceanic and Atmospheric Administration in Homer, Alaska, to identify geologic controls on regional groundwater chemistry. She has also interned with the World Wildlife Fund and with the Natural History Museum of Los Angeles. During the spring semester of her junior year, Intveld studied abroad through MISTI at Imperial College London’s Royal School of Mines and completed geology field work in Sardinia, Italy.

    Intveld has been a strong presence on MIT’s campus, serving as the undergraduate representative on the EAPS Diversity, Equity, and Inclusion Committee. She leads tours for the MIT List Visual Arts Center, is a member of and associate advisor for the Terrascope Learning Community, and is a participant in the Addir Interfaith Dialogue Fellowship.

    Intveld was advised in her application by Kim Benard, associate dean of the Distinguished Fellowships team in Career Advising and Professional Development, who says, “Aviva’s work is at a fascinating crossroads of archeology, geology, and sustainability. She has already done extraordinary work, and this opportunity will prepare her even more to be influential in the fight for climate mitigation.”

    Established by the Bill and Melinda Gates Foundation in 2000, the Gates Cambridge Scholarship provides full funding for talented students from outside the United Kingdom to pursue postgraduate study in any subject at Cambridge University. Since the program’s inception in 2001, there have been 33 Gates Cambridge Scholars from MIT.

  • Integrating humans with AI in structural design

    Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

    But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

    Now, researchers at MIT have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly compared to the fully manual approach.

    The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

    The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

    “It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

    “You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions the human touch is essential.

    As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that the design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

    The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
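
    The overall workflow might look like the skeleton below: run the optimizer for a batch of iterations, pause for the user to flag regions to strengthen or lighten, fold that feedback into local feature-size constraints, and resume. This is an illustrative sketch, not the authors’ implementation; the function names, data structures, and feedback format are placeholders.

    ```python
    # Skeleton of a human-in-the-loop ("human-informed") topology optimization
    # workflow. The optimizer and feedback steps are simplified placeholders.

    def optimize_step(design, constraints):
        """Placeholder for one batch of topology-optimization iterations."""
        design = dict(design)
        design["iterations"] = design.get("iterations", 0) + 10
        return design

    def get_user_feedback(design):
        """Placeholder: in practice the user selects regions to strengthen
        ("beef up") or lighten ("beef down")."""
        return [{"region": "midspan_strut", "action": "beef_up"}]

    def apply_feedback(constraints, feedback):
        """Translate user selections into local feature-size requirements."""
        constraints = dict(constraints)
        for item in feedback:
            constraints.setdefault("local_feature_size", {})[item["region"]] = item["action"]
        return constraints

    # Problem specification: beam length, supports at both ends, applied load.
    constraints = {"length_m": 4.0, "supports": ["left_end", "right_end"], "load_kN": 50.0}
    design = {}

    for review_round in range(3):  # a few optimize-review cycles
        design = optimize_step(design, constraints)
        feedback = get_user_feedback(design)
        constraints = apply_feedback(constraints, feedback)

    print(design, constraints)
    ```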

    While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

    The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

    Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

    Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

    “The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

    “By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

  • Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5) and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing greenhouse gas emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models can produce results in far less time and at far lower cost, but only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.
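
    One common way to make such assessments fast enough for interactive use is to precompute, with a detailed chemistry/climate model, the sensitivity of pollutant concentrations at each location to emissions changes, so that evaluating a policy scenario reduces to a matrix-vector product followed by a concentration-response calculation. The sketch below illustrates that general reduced-form idea with synthetic numbers; it is not necessarily the specific method described in the ACS Environmental Au paper.

    ```python
    # Illustrative reduced-form assessment: a precomputed sensitivity matrix maps
    # emissions changes under a policy to PM2.5 changes at many locations, and a
    # simple concentration-response function converts those to health impacts.
    # All numbers here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_locations, n_sectors = 1500, 5

    # Hypothetically precomputed with a detailed model: d[PM2.5]_i / d[emissions]_j
    sensitivity = rng.uniform(0.0, 0.02, size=(n_locations, n_sectors))

    # Policy scenario: fractional emissions changes by sector (negative = reduction).
    baseline_emissions = rng.uniform(50, 200, size=n_sectors)       # kt/yr
    policy_change = np.array([-0.30, -0.10, 0.0, -0.20, -0.05])     # fractional

    delta_pm25 = sensitivity @ (baseline_emissions * policy_change)  # ug/m^3 per location

    # Toy log-linear concentration-response: avoided deaths per location.
    population = rng.uniform(1e4, 5e6, size=n_locations)
    baseline_mortality_rate = 8e-3          # deaths per person per year
    beta = 0.006                            # per ug/m^3 (placeholder coefficient)
    avoided_deaths = population * baseline_mortality_rate * (1 - np.exp(beta * delta_pm25))

    print("Total avoided premature deaths (toy estimate):", avoided_deaths.sum())
    ```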

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.