More stories

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, the mechanism responsible for most of the heat loss in a confined plasma, is one of the field’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference between the extremely high temperature of the plasma core and the relatively cool temperature of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was then used to guide an optimized search of the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
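
    The general pattern of such surrogate-assisted optimization can be illustrated with a short sketch. The code below is only a toy, not the researchers' CGYRO workflow: the one-parameter "flux mismatch" function, the Gaussian-process surrogate, and all numerical values are assumptions made for illustration.

    ```python
    # Minimal sketch of surrogate-assisted optimization (illustrative only; not the
    # actual CGYRO/SPARC workflow). The "expensive" function stands in for a costly
    # turbulence simulation; each real evaluation would take many CPU-hours.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_flux_mismatch(gradient):
        # Hypothetical stand-in: mismatch between turbulent heat flux and the flux
        # required for power balance at one location, as a function of one parameter.
        return (gradient - 2.7) ** 2 + 0.1 * np.sin(5.0 * gradient)

    # Seed the surrogate with a handful of expensive evaluations
    X = np.linspace(1.0, 4.0, 4).reshape(-1, 1)
    y = np.array([expensive_flux_mismatch(x[0]) for x in X])
    candidates = np.linspace(1.0, 4.0, 200).reshape(-1, 1)

    for _ in range(10):
        # Fit a cheap surrogate (a Gaussian process) to the expensive results so far
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        surrogate.fit(X, y)

        # Search the parameter space with the surrogate, not the expensive code:
        # favor low predicted mismatch, minus a small exploration bonus
        mean, std = surrogate.predict(candidates, return_std=True)
        best = candidates[np.argmin(mean - std)]

        # Verify the suggestion with one more expensive run and add it to the data,
        # iterating until the desired accuracy is reached
        X = np.vstack([X, best.reshape(1, -1)])
        y = np.append(y, expensive_flux_mismatch(best[0]))

    print("estimated optimum gradient:", round(float(X[np.argmin(y), 0]), 3))
    ```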

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change that explores hundreds of potential future development pathways provides new insights on the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.
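
    The logic of a Monte Carlo analysis can be sketched in a few lines. The example below is purely illustrative: the input distributions, the toy emissions formula, and the thresholds are invented and are not the Joint Program's calibrated model.

    ```python
    # Toy Monte Carlo sketch: sample uncertain inputs, run a stand-in "model" many
    # times, and estimate the likelihood of outcomes. Distributions and formulas
    # are invented for illustration and are not the Joint Program's model.
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs = 500
    years = 2100 - 2020
    baseline_emissions = 35.0   # GtCO2 per year today (rounded, illustrative)

    # Uncertain inputs (illustrative ranges, not the study's calibrated ones)
    gdp_growth = rng.normal(0.025, 0.01, n_runs)              # annual rate
    renewable_cost_decline = rng.uniform(0.01, 0.05, n_runs)  # annual rate
    intensity_decline = rng.uniform(0.00, 0.04, n_runs)       # emissions-intensity decline

    # Stand-in "model": emissions scale with GDP and fall with decarbonization
    emissions_2100 = (baseline_emissions
                      * (1 + gdp_growth) ** years
                      * (1 - intensity_decline - 0.5 * renewable_cost_decline) ** years)

    # With hundreds of runs, likelihoods of different outcomes can be estimated
    print("median 2100 emissions (GtCO2/yr):", round(float(np.median(emissions_2100)), 1))
    print("share of runs below 10 GtCO2/yr:", round(float(np.mean(emissions_2100 < 10.0)), 2))
    ```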

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
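
    Scenario discovery can likewise be sketched with standard machine learning tools. In the toy example below, a shallow decision tree screens a synthetic database of runs for the conditions under which a low-emissions outcome occurs; the data, thresholds, and variable names are invented and stand in for a real scenario ensemble.

    ```python
    # Sketch of scenario discovery: screen a database of model runs with a simple,
    # interpretable classifier to find the conditions under which an outcome of
    # interest occurs. The synthetic data below stand in for a real scenario ensemble.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    n_runs = 500

    # Columns that would normally come from the Monte Carlo ensemble of model runs
    inputs = np.column_stack([
        rng.uniform(0.00, 0.05, n_runs),  # renewable cost decline rate
        rng.uniform(0.01, 0.04, n_runs),  # GDP growth rate
        rng.uniform(0.0, 1.0, n_runs),    # fossil fuel availability index
    ])

    # Synthetic outcome: 2050 emissions fall with cheap renewables, rise with fossil supply
    emissions_2050 = 40.0 - 300.0 * inputs[:, 0] + 5.0 * inputs[:, 2] + rng.normal(0, 2.0, n_runs)
    low_emissions = (emissions_2050 < 33.0).astype(int)   # outcome of interest

    # A shallow decision tree yields human-readable rules describing those conditions
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(inputs, low_emissions)
    print(export_text(tree, feature_names=["renewable_cost_decline",
                                           "gdp_growth",
                                           "fossil_availability"]))
    ```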

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ‘needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science.

  • New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
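
    A minimal sketch can convey the flavor of such least-cost optimization, though it is nothing like GenX itself. The toy linear program below chooses a generation mix that meets demand at lowest cost subject to an emissions cap and resource limits; all costs, limits, and emission factors are invented for illustration.

    ```python
    # Toy least-cost generation mix with an emissions cap, in the spirit of (but far
    # simpler than) capacity-expansion models like GenX. All numbers are invented.
    from scipy.optimize import linprog

    demand_mwh = 100.0        # energy to be served in the period
    emissions_cap_t = 10.0    # tonnes of CO2 allowed (the policy constraint)

    # Decision variables: energy from [natural gas, wind, imported hydro], in MWh
    cost_per_mwh = [60.0, 40.0, 30.0]       # objective: minimize total cost
    emissions_per_mwh = [0.4, 0.0, 0.0]     # only gas emits in this toy system

    # Inequality constraints, A_ub @ x <= b_ub
    A_ub = [
        emissions_per_mwh,    # total emissions must stay under the cap
        [0.0, 1.0, 0.0],      # wind limited by the available resource
        [0.0, 0.0, 1.0],      # hydro imports limited by transmission capacity
    ]
    b_ub = [emissions_cap_t, 60.0, 35.0]

    # Equality constraint: generation must exactly meet demand
    A_eq = [[1.0, 1.0, 1.0]]
    b_eq = [demand_mwh]

    result = linprog(cost_per_mwh, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                     bounds=[(0, None)] * 3)
    print("least-cost mix [gas, wind, hydro] in MWh:", result.x.round(1))
    ```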

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.
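
    Those operating rules translate naturally into constraints that a model must respect hour by hour. The sketch below checks two of them, a reservoir-level limit and a ramp-rate limit, for a single made-up reservoir; the numbers are illustrative and are not Hydro-Québec's actual operating limits.

    ```python
    # Illustrative hour-by-hour check of two reservoir-hydro operating constraints:
    # the reservoir must not overfill, and generation must not ramp up too quickly.
    # All numbers are invented and are not Hydro-Quebec's actual operating limits.

    RESERVOIR_CAPACITY = 1000.0   # storage limit, arbitrary energy units
    MAX_RAMP_UP = 50.0            # largest allowed hour-to-hour increase in generation
    INFLOW = 40.0                 # natural inflow per hour (held constant here)

    def schedule_is_feasible(generation, level=800.0):
        """Return True if an hourly generation schedule respects both constraints."""
        previous = 0.0
        for hour, gen in enumerate(generation):
            if gen - previous > MAX_RAMP_UP:
                print(f"hour {hour}: ramp limit exceeded (+{gen - previous:.0f})")
                return False
            level += INFLOW - gen            # simple water balance for the hour
            if level > RESERVOIR_CAPACITY:
                print(f"hour {hour}: reservoir would overfill (level {level:.0f})")
                return False
            previous = gen
        return True

    # Holding generation low for too long lets the reservoir fill past its limit...
    print(schedule_is_feasible([10.0] * 8))
    # ...while spreading generation out keeps both the level and the ramp rate in bounds
    print(schedule_is_feasible([10.0, 50.0, 60.0, 60.0, 60.0, 40.0, 40.0, 40.0]))
    ```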

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So rather than delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and, to a lesser extent, nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Ocean vital signs

    Without the ocean, the climate crisis would be even worse than it is. Each year, the ocean absorbs billions of tons of carbon from the atmosphere, preventing warming that greenhouse gas would otherwise cause. Scientists estimate about 25 to 30 percent of all carbon released into the atmosphere by both human and natural sources is absorbed by the ocean.

    “But there’s a lot of uncertainty in that number,” says Ryan Woosley, a marine chemist and a principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT. Different parts of the ocean take in different amounts of carbon depending on many factors, such as the season and the amount of mixing from storms. Current models of the carbon cycle don’t adequately capture this variation.

    To close the gap, Woosley and a team of other MIT scientists developed a research proposal for the MIT Climate Grand Challenges competition — an Institute-wide campaign to catalyze and fund innovative research addressing the climate crisis. The team’s proposal, “Ocean Vital Signs,” involves sending a fleet of sailing drones to cruise the oceans taking detailed measurements of how much carbon the ocean is really absorbing. Those data would be used to improve the precision of global carbon cycle models and improve researchers’ ability to verify emissions reductions claimed by countries.

    “If we start to enact mitigation strategies — either through removing CO2 from the atmosphere or reducing emissions — we need to know where CO2 is going in order to know how effective they are,” says Woosley. Without more precise models there’s no way to confirm whether observed carbon reductions were thanks to policy and people, or thanks to the ocean.

    “So that’s the trillion-dollar question,” says Woosley. “If countries are spending all this money to reduce emissions, is it enough to matter?”

    In February, the team’s Climate Grand Challenges proposal was named one of 27 finalists out of the almost 100 entries submitted. From among this list of finalists, MIT will announce in April the selection of five flagship projects to receive further funding and support.

    Woosley is leading the team along with Christopher Hill, a principal research engineer in EAPS. The team includes physical and chemical oceanographers, marine microbiologists, biogeochemists, and experts in computational modeling from across the department, in addition to collaborators from the Media Lab and the departments of Mathematics, Aeronautics and Astronautics, and Electrical Engineering and Computer Science.

    Today, data on the flux of carbon dioxide between the air and the oceans are collected in a piecemeal way. Research ships intermittently cruise out to gather data. Some commercial ships are also fitted with sensors. But these present a limited view of the entire ocean, and include biases. For instance, commercial ships usually avoid storms, which can increase the turnover of water exposed to the atmosphere and cause a substantial increase in the amount of carbon absorbed by the ocean.

    “It’s very difficult for us to get to it and measure that,” says Woosley. “But these drones can.”

    If funded, the team’s project would begin by deploying a few drones in a small area to test the technology. The wind-powered drones — made by a California-based company called Saildrone — would autonomously navigate through an area, collecting data on air-sea carbon dioxide flux continuously with solar-powered sensors. This would then scale up to more than 5,000 drone-days’ worth of observations, spread over five years, and in all five ocean basins.

    Those data would be used to feed neural networks to create more precise maps of how much carbon is absorbed by the oceans, shrinking the uncertainties involved in the models. These models would continue to be verified and improved by new data. “The better the models are, the more we can rely on them,” says Woosley. “But we will always need measurements to verify the models.”
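
    The mapping step can be illustrated with a small regression sketch: sparse observations of air-sea CO2 flux are used to train a model that predicts flux at unsampled locations. The data below are synthetic and the network is deliberately tiny; it stands in for, but is not, the team's proposed models.

    ```python
    # Illustrative sketch: fit a small neural network to sparse air-sea CO2 flux
    # observations so flux can be predicted at unsampled locations. The data are
    # synthetic; real drone observations and models would be far richer.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)
    n_obs = 400

    # Synthetic "observations": latitude, longitude, and sea-surface temperature
    lat = rng.uniform(-60.0, 60.0, n_obs)
    lon = rng.uniform(-180.0, 180.0, n_obs)
    sst = 28.0 - 0.3 * np.abs(lat) + rng.normal(0.0, 1.0, n_obs)

    # Toy relationship: colder water takes up more CO2 (negative flux = uptake)
    flux = -1.0 + 0.05 * sst + rng.normal(0.0, 0.1, n_obs)

    features = np.column_stack([lat, lon, sst])
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(features, flux)

    # Predict flux at a location the drones never visited, filling in the map
    print("predicted flux at 45N, 30W, 12 C:",
          round(float(model.predict([[45.0, -30.0, 12.0]])[0]), 3))
    ```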

    Improved carbon cycle models are relevant beyond climate warming as well. “CO2 is involved in so much of how the world works,” says Woosley. “We’re made of carbon, and all the other organisms and ecosystems are as well. What does the perturbation to the carbon cycle do to these ecosystems?”

    One of the best-understood impacts is ocean acidification. Carbon dioxide absorbed by the ocean reacts with seawater to form carbonic acid. A more acidic ocean can have dire impacts on marine organisms like coral and oysters, whose calcium carbonate shells and skeletons can dissolve in the lower pH. Since the Industrial Revolution, the ocean has become about 30 percent more acidic on average.
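
    Because pH is a logarithmic scale, the “30 percent more acidic” figure corresponds to only about a 0.1-unit drop in pH (roughly the commonly cited shift from a pre-industrial average near 8.2 to about 8.1 today). A two-line calculation makes the connection explicit.

    ```python
    # Relating "30 percent more acidic" to pH: acidity is the hydrogen-ion
    # concentration, and pH is its negative base-10 logarithm, so a 30 percent
    # increase in [H+] is only about a 0.11-unit drop in pH.
    import math

    increase_factor = 1.30                     # 30 percent more hydrogen ions
    delta_pH = -math.log10(increase_factor)    # corresponding change in pH
    print(round(delta_pH, 2))                  # prints -0.11
    ```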

    “So while it’s great for us that the oceans have been taking up the CO2, it’s not great for the oceans,” says Woosley. “Knowing how this uptake affects the health of the ocean is important as well.”

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”
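
    A brief sketch shows how such an assignment problem is set up and solved in practice. The mismatch metric below (misalignment of skeletonized branch directions) is a simplified stand-in for the researchers' own metric, and the fork and node geometries are randomly generated; only the use of a Hungarian-algorithm solver mirrors the step described above.

    ```python
    # Sketch of the matching step: build a node-to-fork mismatch matrix and let a
    # Hungarian-algorithm solver pick the assignment with the lowest total mismatch.
    # The geometry and the simplified mismatch metric are illustrative only.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(7)

    def random_unit_vectors(count):
        # Each item is skeletonized as three unit vectors (one per branch direction)
        v = rng.normal(size=(count, 3, 3))
        return v / np.linalg.norm(v, axis=2, keepdims=True)

    forks = random_unit_vectors(12)   # tree forks in the material library
    nodes = random_unit_vectors(4)    # Y-shaped nodes in the target design

    # Simplified mismatch: one minus the mean cosine similarity of branch directions
    # (a real metric would also search over branch orderings and rigid rotations)
    mismatch = 1.0 - np.einsum('nbk,fbk->nf', nodes, forks) / 3.0

    # The Hungarian algorithm assigns a distinct fork to each node, minimizing the
    # sum of the individual mismatch scores (the "global matching score")
    node_idx, fork_idx = linear_sum_assignment(mismatch)
    for n, f in zip(node_idx, fork_idx):
        print(f"design node {n} <- tree fork {f} (mismatch {mismatch[n, f]:.2f})")
    print("global matching score:", round(float(mismatch[node_idx, fork_idx].sum()), 2))
    ```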

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Preparing global online learners for the clean energy transition

    After a career devoted to making the electric power system more efficient and resilient, Marija Ilic came to MIT in 2018 eager not just to extend her research in new directions, but to prepare a new generation for the challenges of the clean-energy transition.

    To that end, Ilic, a senior research scientist in MIT’s Laboratory for Information and Decision Systems (LIDS) and a senior staff member at Lincoln Laboratory in the Energy Systems Group, designed an edX course that captures her methods and vision: Principles of Modeling, Simulation, and Control for Electric Energy Systems.

    EdX is a provider of massive open online courses produced in partnership with MIT, Harvard University, and other leading universities. Ilic’s class made its online debut in June 2021, running for 12 weeks, and it is one of an expanding set of online courses funded by the MIT Energy Initiative (MITEI) to provide global learners with a view of the shifting energy landscape.

    Ilic first taught a version of the class while a professor at Carnegie Mellon University, rolled out a second iteration at MIT just as the pandemic struck, and then revamped the class for its current online presentation. But no matter the course location, Ilic focuses on a central theme: “With the need for decarbonization, which will mean accommodating new energy sources such as solar and wind, we must rethink how we operate power systems,” she says. “This class is about how to pose and solve the kinds of problems we will face during this transformation.”

    Hot global topic

    The edX class has been designed to welcome a broad mix of students. In summer 2021, more than 2,000 signed up from 109 countries, ranging from high school students to retirees. In surveys, some said they were drawn to the class by the opportunity to advance their knowledge of modeling. Many others hoped to learn about the move to decarbonize energy systems.

    “The energy transition is a hot topic everywhere in the world, not just in the U.S.,” says teaching assistant Miroslav Kosanic. “In the class, there were veterans of the oil industry and others working in investment and finance jobs related to energy who wanted to understand the potential impacts of changes in energy systems, as well as students from different fields and professors seeking to update their curricula — all gathered into a community.”

    Kosanic, who is currently a PhD student at MIT in electrical engineering and computer science, had taken this class remotely in the spring semester of 2021, while he was still in college in Serbia. “I knew I was interested in power systems, but this course was eye-opening for me, showing how to apply control theory and to model different components of these systems,” he says. “I finished the course and thought, this is just the beginning, and I’d like to learn a lot more.” Kosanic performed so well online that Ilic recruited him to MIT, as a LIDS researcher and edX course teaching assistant, where he grades homework assignments and moderates a lively learner community forum.

    A platform for problem-solving

    The course starts with fundamental concepts in electric power systems operations and management, and it steadily adds layers of complexity, posing real-world problems along the way. Ilic explains how voltage travels from point to point across transmission lines and how grid managers modulate systems to ensure that enough, but not too much, electricity flows. “To deliver power from one location to the next one, operators must constantly make adjustments to ensure that the receiving end can handle the voltage transmitted, optimizing voltage to avoid overheating the wires,” she says.

    In her early lectures, Ilic notes the fundamental constraints of current grid operations, organized around a hierarchy of regional managers dealing with a handful of very large oil, gas, coal, and nuclear power plants, and occupied primarily with the steady delivery of megawatt-hours to far-flung customers. Historically, however, this top-down structure has not done a good job of preventing energy losses due to suboptimal transmission conditions or outages related to extreme weather events.

    These issues promise to grow for grid operators as distributed resources such as solar and wind enter the picture, Ilic tells students. In the United States, under new rules dictated by the Federal Energy Regulatory Commission, utilities must begin to integrate the distributed, intermittent electricity produced by wind farms, solar complexes, and even by homes and cars, which flows at voltages much lower than electricity produced by large power plants.

    Finding ways to optimize existing energy systems and to accommodate low- and zero-carbon energy sources requires powerful new modes of analysis and problem-solving. This is where Ilic’s toolbox comes in: a mathematical modeling strategy and companion software that simplifies the input and output of electrical systems, no matter how large or how small. “In the last part of the course, we take up modeling different solutions to electric service in a way that is technology-agnostic, where it only matters how much a black-box energy source produces, and the rates of production and consumption,” says Ilic.

    This black-box modeling approach, which Ilic pioneered in her research, enables students to see, for instance, “what is happening with their own household consumption, and how it affects the larger system,” says Rupamathi Jaddivada PhD ’20, a co-instructor of the edX class and a postdoc in electrical engineering and computer science. “Without getting lost in details of current or voltage, or how different components work, we think about electric energy systems as dynamical components interacting with each other, at different spatial scales.” This means that with just a basic knowledge of physical laws, high school and undergraduate students can take advantage of the course “and get excited about cleaner and more reliable energy,” adds Ilic.

    What Jaddivada and Ilic describe as “zoom in, zoom out” systems thinking leverages the ubiquity of digital communications and the so-called “internet of things.” Energy devices of all scales can link directly to other devices in a network instead of just to a central operations hub, allowing for real-time adjustments in voltage, for instance, vastly improving the potential for optimizing energy flows.

    “In the course, we discuss how information exchange will be key to integrating new end-to-end energy resources and, because of this interactivity, how we can model better ways of controlling entire energy networks,” says Ilic. “It’s a big lesson of the course to show the value of information and software in enabling us to decarbonize the system and build resilience, rather than just building hardware.”

    By the end of the course, students are invited to pursue independent research projects. Some might model the impact of a new energy source on a local grid or investigate different options for reducing energy loss in transmission lines.

    “It would be nice if they see that we don’t have to rely on hardware or large-scale solutions to bring about improved electric service and a clean and resilient grid, but instead on information technologies such as smart components exchanging data in real time, or microgrids in neighborhoods that sustain themselves even when they lose power,” says Ilic. “I hope students walk away convinced that it does make sense to rethink how we operate our basic power systems and that with systematic, physics-based modeling and IT methods we can enable better, more flexible operation in the future.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • 3 Questions: Anuradha Annaswamy on building smart infrastructures

    Much of Anuradha Annaswamy’s research hinges on uncertainty. How does cloudy weather affect a grid powered by solar energy? How do we ensure that electricity is delivered to the consumer if a grid is powered by wind and the wind does not blow? What’s the best course of action if a bird hits a plane engine on takeoff? How can you predict the behavior of a cyber attacker?

    A senior research scientist in MIT’s Department of Mechanical Engineering, Annaswamy spends most of her research time dealing with decision-making under uncertainty. Designing smart infrastructures that are resilient to uncertainty can lead to safer, more reliable systems, she says.

    Annaswamy serves as the director of MIT’s Active Adaptive Control Laboratory. A world-leading expert in adaptive control theory, she was named president of the Institute of Electrical and Electronics Engineers Control Systems Society for 2020. Her team uses adaptive control and optimization to account for various uncertainties and anomalies in autonomous systems. In particular, they are developing smart infrastructures in the energy and transportation sectors.

    Using a combination of control theory, cognitive science, economic modeling, and cyber-physical systems, Annaswamy and her team have designed intelligent systems that could someday transform the way we travel and consume energy. Their research includes a diverse range of topics such as safer autopilot systems on airplanes, the efficient dispatch of resources in electrical grids, better ride-sharing services, and price-responsive railway systems.

    In a recent interview, Annaswamy spoke about how these smart systems could help support a safer and more sustainable future.

    Q: How is your team using adaptive control to make air travel safer?

    A: We want to develop an advanced autopilot system that can safely recover the airplane in the event of a severe anomaly — such as the wing becoming damaged mid-flight, or a bird flying into the engine. In the airplane, you have a pilot and autopilot to make decisions. We’re asking: How do you combine those two decision-makers?

    The answer we landed on was developing a shared pilot-autopilot control architecture. We collaborated with David Woods, an expert in cognitive engineering at The Ohio State University, to develop an intelligent system that takes the pilot’s behavior into account. For example, all humans have something known as “capacity for maneuver” and “graceful command degradation” that inform how we react in the face of adversity. Using mathematical models of pilot behavior, we proposed a shared control architecture where the pilot and the autopilot work together to make an intelligent decision on how to react in the face of uncertainties. In this system, the pilot reports the anomaly to an adaptive autopilot system that ensures resilient flight control.
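
    For readers unfamiliar with adaptive control, the textbook example below gives its flavor: a controller whose gains adapt online so a system keeps tracking a reference model even after its dynamics change abruptly. It is a standard scalar model-reference adaptive control (MRAC) exercise, not the shared pilot-autopilot architecture Annaswamy describes; the plant, gains, and "anomaly" are invented for illustration.

    ```python
    # A textbook scalar model-reference adaptive control (MRAC) example, included only
    # to illustrate the flavor of adaptive control. It is not the shared pilot-autopilot
    # architecture described above; plant, gains, and the "anomaly" are all invented.
    import numpy as np

    dt, horizon = 0.01, 40.0
    gamma = 2.0                    # adaptation gain

    a, b = 1.0, 1.0                # plant x' = a*x + b*u, with a and b unknown to the controller
    a_m, b_m = -2.0, 2.0           # stable reference model the plant should behave like

    x = x_m = 0.0
    k_x = k_r = 0.0                # adaptive feedback and feedforward gains

    for step in range(int(horizon / dt)):
        t = step * dt
        r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave command to track
        if abs(t - 20.0) < dt / 2:
            a = 3.0                              # abrupt anomaly: the plant dynamics change

        u = k_x * x + k_r * r                    # adaptive control law
        e = x - x_m                              # tracking error w.r.t. the reference model

        # Lyapunov-based adaptation laws (only the sign of b is assumed known)
        k_x += dt * (-gamma * e * x * np.sign(b))
        k_r += dt * (-gamma * e * r * np.sign(b))

        # Euler integration of the plant and the reference model
        x += dt * (a * x + b * u)
        x_m += dt * (a_m * x_m + b_m * r)

    print("tracking error at end of run:", round(abs(x - x_m), 4))
    ```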

    Q: How does your research on adaptive control fit into the concept of smart cities?

    A: Smart cities are an interesting way we can use intelligent systems to promote sustainability. Our team is looking at ride-sharing services in particular. Services like Uber and Lyft have provided new transportation options, but their impact on the carbon footprint has to be considered. We’re looking at developing a system where the number of passenger-miles per unit of energy is maximized through something called “shared mobility on demand services.” Using the alternating minimization approach, we’ve developed an algorithm that can determine the optimal route for multiple passengers traveling to various destinations.

    As with the pilot-autopilot dynamic, human behavior is at play here. In behavioral economics there is an interesting concept of behavioral dynamics known as Prospect Theory. If we give passengers options with regard to which route their shared ride service will take, we are empowering them with free will to accept or reject a route. Prospect Theory shows that if you can use pricing as an incentive, people are much more loss-averse, so they would be willing to walk a bit extra or wait a few minutes longer to join a low-cost ride with an optimized route. If everyone utilized a system like this, the carbon footprint of ride-sharing services could decrease substantially.

    Q: What other ways are you using intelligent systems to promote sustainability?

    A: Renewable energy and sustainability are huge drivers for our research. To enable a world where all of our energy is coming from renewable sources like solar or wind, we need to develop a smart grid that can account for the fact that the sun isn’t always shining and wind isn’t always blowing. These uncertainties are the biggest hurdles to achieving an all-renewable grid. Of course, there are many technologies being developed for batteries that can help store renewable energy, but we are taking a different approach.

    We have created algorithms that can optimally schedule distributed energy resources within the grid — this includes making decisions on when to use onsite generators, how to operate storage devices, and when to call upon demand response technologies, all in response to the economics of using such resources and their physical constraints. If we can develop an interconnected smart grid where, for example, the air conditioning setting in a house is automatically set to 72 degrees instead of 69 degrees when demand is high, there could be substantial savings in energy usage without impacting human comfort. In one of our studies, we applied a distributed proximal atomic coordination algorithm to the grid in Tokyo to demonstrate how this intelligent system could account for the uncertainties present in a grid powered by renewable resources.
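    The flavor of that kind of coordination can be illustrated with a toy price-based dispatch loop (dual decomposition), in which each resource solves a small local problem in response to a shared price signal and a coordinator adjusts the price until supply meets demand. This is a generic sketch with made-up costs, not the proximal atomic coordination algorithm used in the Tokyo study.

        # Toy price-based coordination of three hypothetical distributed
        # energy resources (dual decomposition); all numbers are made up.
        import numpy as np

        a = np.array([0.8, 1.2, 2.0])      # quadratic cost coefficients: c_i(p) = 0.5*a_i*p^2
        p_max = np.array([6.0, 5.0, 4.0])  # capacity limits (e.g., MW)
        demand = 10.0

        lam, step = 0.0, 0.2               # shared "price" signal and dual step size
        for _ in range(200):
            # Local subproblem for each resource: min 0.5*a_i*p^2 - lam*p, with 0 <= p <= p_max
            p = np.clip(lam / a, 0.0, p_max)
            # Coordinator nudges the price until total generation meets demand
            lam += step * (demand - p.sum())

        print(np.round(p, 2), round(lam, 2))   # dispatch schedule and clearing price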

  • in

    Understanding air pollution from space

    Climate change and air pollution are interlocking crises that threaten human health. Reducing emissions of some air pollutants can help achieve climate goals, and some climate mitigation efforts can in turn improve air quality.

    One part of MIT Professor Arlene Fiore’s research program is to investigate the fundamental science of air pollutants — how long they persist and how they move through our environment to affect air quality.

    “We need to understand the conditions under which pollutants, such as ozone, form. How much ozone is formed locally and how much is transported long distances?” says Fiore, who notes that Asian air pollution can be transported across the Pacific Ocean to North America. “We need to think about processes spanning local to global dimensions.”

    Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences, analyzes data from on-the-ground readings and from satellites, along with models, to better understand the chemistry and behavior of air pollutants — which ultimately can inform mitigation strategies and policy setting.

    A global concern

    At the United Nations’ most recent climate change conference, COP26, air quality management was a topic discussed over two days of presentations.

    “Breathing is vital. It’s life. But for the vast majority of people on this planet right now, the air that they breathe is not giving life, but cutting it short,” said Sarah Vogel, senior vice president for health at the Environmental Defense Fund, at the COP26 session.

    “We need to confront this twin challenge now through both a climate and clean air lens, of targeting those pollutants that both warm the air and harm our health.”

    Earlier this year, the World Health Organization (WHO) updated the global air quality guidelines it had issued 15 years earlier for six key pollutants, including ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). The new guidelines are more stringent, based on what the WHO described as the “quality and quantity of evidence” of how these pollutants affect human health. The WHO estimates that roughly 7 million premature deaths each year are attributable to the joint effects of ambient and household air pollution.

    “We’ve had all these health-motivated reductions of aerosol and ozone precursor emissions. What are the implications for the climate system, both locally but also around the globe? How does air quality respond to climate change? We study these two-way interactions between air pollution and the climate system,” says Fiore.

    But fundamental science is still required to understand how gases, such as ozone and nitrogen dioxide, linger and move throughout the troposphere — the lowermost layer of our atmosphere, containing the air we breathe.

    “We care about ozone in the air we’re breathing where we live at the Earth’s surface,” says Fiore. “Ozone reacts with biological tissue, and can be damaging to plants and human lungs. Even if you’re a healthy adult, if you’re out running hard during an ozone smog event, you might feel an extra weight on your lungs.”

    Telltale signs from space

    Ozone is not emitted directly, but instead forms through chemical reactions catalyzed by radiation from the sun interacting with nitrogen oxides — pollutants released in large part from burning fossil fuels — and volatile organic compounds. However, current satellite instruments cannot sense ground-level ozone.

    “We can’t retrieve surface- or even near-surface ozone from space,” says Fiore of the satellite data, “although the anticipated launch of a new instrument looks promising for new advances in retrieving lower-tropospheric ozone.” Instead, scientists can look at signatures from other gas emissions to get a sense of ozone formation. “Nitrogen dioxide and formaldehyde are a heavy focus of our research because they serve as proxies for two of the key ingredients that go on to form ozone in the atmosphere.”
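    One common way such proxies are used (sketched below with illustrative threshold values, not the ones from this research) is to take the ratio of the formaldehyde column to the nitrogen dioxide column as an indicator of whether local ozone production is limited by NOx or by VOCs.

        # Illustrative use of satellite HCHO and NO2 columns as ozone-formation
        # proxies; the regime thresholds here are placeholders, not study values.
        def ozone_regime(hcho_column, no2_column, low=1.0, high=2.0):
            ratio = hcho_column / no2_column
            if ratio < low:
                return "VOC-limited (ozone most sensitive to VOC emissions)"
            if ratio > high:
                return "NOx-limited (ozone most sensitive to NOx emissions)"
            return "transitional regime"

        # Hypothetical column densities (molecules per square centimeter):
        print(ozone_regime(hcho_column=8.0e15, no2_column=1.0e16))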

    To understand ozone formation via these precursor pollutants, scientists have gathered data for more than two decades using satellite-borne spectrometers that measure solar backscatter radiation — sunlight at ultraviolet and visible wavelengths that has interacted with these pollutants in the Earth’s atmosphere.

    Satellites such as NASA’s Aura carry instruments like the Ozone Monitoring Instrument (OMI). OMI, along with European instruments such as the Global Ozone Monitoring Experiment (GOME), the Scanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY), and the newest-generation TROPOspheric Monitoring Instrument (TROPOMI), orbits the Earth, collecting data during daylight hours when sunlight is interacting with the atmosphere over a particular location.

    In a recent paper from Fiore’s group, former graduate student Xiaomeng Jin (now a postdoc at the University of California at Berkeley) demonstrated that she could bring together and “beat down the noise in the data,” as Fiore says, to identify trends in ozone formation chemistry over several U.S. metropolitan areas that “are consistent with our on-the-ground understanding from in situ ozone measurements.”

    “This finding implies that we can use these records to learn about changes in surface ozone chemistry in places where we lack on-the-ground monitoring,” says Fiore. Extracting these signals by stringing together satellite data — OMI, GOME, and SCIAMACHY — to produce a two-decade record required reconciling the instruments’ differing overpass days and times, as well as their fields of view on the ground, or spatial resolutions.
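    A much-simplified, generic version of one reconciliation step is to bin every instrument’s retrievals onto a shared latitude-longitude grid and average them, as in the hypothetical sketch below; the study’s actual harmonization also has to reconcile the differing overpass days, times, and fields of view described above.

        # Generic regridding sketch (not the study's method): bin retrievals from
        # each instrument onto a common 0.5-degree grid and average per cell.
        import numpy as np

        def grid_average(lats, lons, values, res=0.5):
            ny, nx = int(180 / res), int(360 / res)
            total = np.zeros((ny, nx))
            count = np.zeros((ny, nx))
            iy = ((np.asarray(lats) + 90.0) / res).astype(int).clip(0, ny - 1)
            ix = ((np.asarray(lons) + 180.0) / res).astype(int).clip(0, nx - 1)
            np.add.at(total, (iy, ix), values)
            np.add.at(count, (iy, ix), 1)
            with np.errstate(invalid="ignore", divide="ignore"):
                return np.where(count > 0, total / count, np.nan)

        # Hypothetical NO2 columns from two different instruments, now on one grid:
        grid_a = grid_average([40.7, 40.9], [-74.0, -73.9], [5.1e15, 4.8e15])
        grid_b = grid_average([40.75, 40.85], [-74.05, -73.95], [5.0e15, 4.9e15])
        print(np.isfinite(grid_a).sum(), np.isfinite(grid_b).sum())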

    Currently, the spectrometer instruments aboard polar-orbiting satellites retrieve data over a given location only about once per day. However, newer instruments, such as the Geostationary Environment Monitoring Spectrometer (GEMS) launched in February 2020 by the National Institute of Environmental Research in South Korea’s Ministry of Environment, will monitor a particular region continuously, providing much more data in real time.

    Over North America, the Tropospheric Emissions: Monitoring of Pollution (TEMPO) mission, a collaboration between NASA and the Smithsonian Astrophysical Observatory led by Kelly Chance of Harvard University, will provide not only a stationary view of the atmospheric chemistry over the continent, but also a finer-resolution view — with the instrument recording pollution data from only a few square miles per pixel (with an anticipated launch in 2022).

    “What we’re very excited about is the opportunity to have continuous coverage where we get hourly measurements that allow us to follow pollution from morning rush hour through the course of the day and see how plumes of pollution are evolving in real time,” says Fiore.

    Data for the people

    Providing Earth-observing data to people in addition to scientists — namely environmental managers, city planners, and other government officials — is the goal for the NASA Health and Air Quality Applied Sciences Team (HAQAST).

    Since 2016, Fiore has been part of HAQAST, including collaborative “tiger teams” — projects that unite scientists, nongovernmental entities, and government officials — to bring data to bear on real issues.

    For example, in 2017, Fiore led a tiger team that provided guidance to state air management agencies on how satellite data can be incorporated into state implementation plans (SIPs). “Submission of a SIP is required for any state with a region in non-attainment of U.S. National Ambient Air Quality Standards to demonstrate their approach to achieving compliance with the standard,” says Fiore. “What we found is that small tweaks in, for example, the metrics we use to convey the science findings, can go a long way to making the science more usable, especially when there are detailed policy frameworks in place that must be followed.”

    Now, in 2021, Fiore is part of two tiger teams announced by HAQAST in late September. One team is addressing environmental justice issues by providing data to assess communities disproportionately affected by environmental health risks. Such information can be used to estimate the benefits of governmental investments in environmental improvements for disproportionately burdened communities. The other team is looking at urban emissions of nitrogen oxides to try to better quantify and communicate uncertainties in the estimates of anthropogenic sources of pollution.

    “For our HAQAST work, we’re looking at not just the estimate of exposure to air pollutants, or in other words their concentrations,” says Fiore, “but also how confident we are in our exposure estimates, which in turn affects our understanding of the public health burden due to exposure. We have stakeholder partners at the New York Department of Health who will pair exposure datasets with health data to help prioritize decisions around public health.

    “I enjoy working with stakeholders who have questions that require science to answer and can make a difference in their decisions,” Fiore says.