More stories

  • Integrating humans with AI in structural design

    Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

    But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

    Now, researchers at MIT have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly compared to the fully manual approach.

    The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

    The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

    “It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

    “You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions the human touch is essential.

    As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that the design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

    The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
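
    Conceptually, the workflow alternates between an automated optimization loop and brief human reviews. The sketch below is a toy illustration of that cycle, not the group's actual code: the optimizer is a placeholder update rule, and the names (optimize_step, ask_user_for_edits) and the 20-by-20 design grid are assumptions made only for this example.

        # Toy human-in-the-loop design cycle: optimize, pause for user edits, resume.
        # The "optimizer" below is a stand-in, not real topology-optimization physics.
        import numpy as np

        def optimize_step(density, target_volume_fraction):
            # Nudge the design variables toward the allowed volume fraction.
            return density + 0.1 * (target_volume_fraction - density.mean())

        def ask_user_for_edits(density, iteration):
            # Placeholder for the human review: pick a region and scale it up or down.
            edits = {"region": (slice(0, 5), slice(0, 5)), "scale": 1.2}
            print(f"iteration {iteration}: user beefs up the selected region by x{edits['scale']}")
            return edits

        density = np.full((20, 20), 0.5)      # design variables on a 20-by-20 grid
        for it in range(1, 31):
            density = optimize_step(density, target_volume_fraction=0.4)
            if it % 10 == 0:                  # pause periodically for human input
                edits = ask_user_for_edits(density, it)
                region = edits["region"]
                density[region] = np.clip(density[region] * edits["scale"], 0.0, 1.0)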

    While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

    The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

    Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

    Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

    “The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

    “By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
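
    Written out, the model is essentially a product of those four factors. The arithmetic below reproduces the headline scenario of 1 billion vehicles driving one hour per day with an 840-watt computer; the grid carbon intensity of 0.5 kilograms of CO2 per kilowatt-hour is an illustrative assumption, not a value from the paper.

        # Back-of-the-envelope version of the model: emissions =
        # fleet size x computer power x hours driven x grid carbon intensity.
        FLEET_SIZE = 1_000_000_000        # autonomous vehicles
        COMPUTER_POWER_KW = 0.840         # 840 watts per onboard computer
        HOURS_PER_DAY = 1.0
        CARBON_INTENSITY = 0.5            # kg CO2 per kWh (illustrative assumption)

        energy_kwh_per_year = FLEET_SIZE * COMPUTER_POWER_KW * HOURS_PER_DAY * 365
        emissions_tonnes = energy_kwh_per_year * CARBON_INTENSITY / 1000.0

        print(f"{energy_kwh_per_year / 1e9:.0f} TWh of computing energy per year")
        print(f"{emissions_tonnes / 1e6:.0f} million tonnes of CO2 per year")

    Under that assumed grid intensity the total comes out on the order of 150 million tonnes of CO2 a year, roughly the same scale as the data-center emissions cited above.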

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
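
    That per-vehicle count follows directly from the number of networks, cameras, and frames. The check below assumes a 60 frames-per-second camera rate, which is not stated in the article but reproduces the quoted figure.

        # Reproducing the inference counts quoted above (60 fps is an assumed rate).
        NUM_DNNS = 10
        NUM_CAMERAS = 10
        FRAMES_PER_SECOND = 60
        DRIVING_SECONDS_PER_DAY = 3600        # one hour of driving per day

        per_vehicle = NUM_DNNS * NUM_CAMERAS * FRAMES_PER_SECOND * DRIVING_SECONDS_PER_DAY
        fleet = per_vehicle * 1_000_000_000

        print(f"{per_vehicle:,} inferences per vehicle per day")              # 21,600,000
        print(f"{fleet:.2e} inferences per day across 1 billion vehicles")    # 2.16e+16, i.e., 21.6 quadrillion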

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to use less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Manufacturing a cleaner future

    Manufacturing had a big summer. The CHIPS and Science Act, signed into law in August, represents a massive investment in U.S. domestic manufacturing. The act aims to drastically expand the U.S. semiconductor industry, strengthen supply chains, and invest in R&D for new technological breakthroughs. According to John Hart, professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT, the CHIPS Act is just the latest example of significantly increased interest in manufacturing in recent years.

    “You have multiple forces working together: reflections from the pandemic’s impact on supply chains, the geopolitical situation around the world, and the urgency and importance of sustainability,” says Hart. “This has now aligned incentives among government, industry, and the investment community to accelerate innovation in manufacturing and industrial technology.”

    Hand-in-hand with this increased focus on manufacturing is a need to prioritize sustainability.

    Roughly one-quarter of greenhouse gas emissions came from industry and manufacturing in 2020. Factories and plants can also deplete local water reserves and generate vast amounts of waste, some of which can be toxic.

    To address these issues and drive the transition to a low-carbon economy, new products and industrial processes must be developed alongside sustainable manufacturing technologies. Hart sees mechanical engineers as playing a crucial role in this transition.

    “Mechanical engineers can uniquely solve critical problems that require next-generation hardware technologies, and know how to bring their solutions to scale,” says Hart.

    Several fast-growing companies founded by faculty and alumni from MIT’s Department of Mechanical Engineering offer solutions for manufacturing’s environmental problem, paving the path for a more sustainable future.

    Gradiant: Cleantech water solutions

    Manufacturing requires water, and lots of it. A medium-sized semiconductor fabrication plant uses upward of 10 million gallons of water a day. In a world increasingly plagued by droughts, this dependence on water poses a major challenge.

    Gradiant offers a solution to this water problem. Co-founded by Anurag Bajpayee SM ’08, PhD ’12 and Prakash Govindan PhD ’12, the company is a pioneer in sustainable — or “cleantech” — water projects.

    As doctoral students in the Rohsenow Kendall Heat Transfer Laboratory, Bajpayee and Govindan shared a pragmatism and penchant for action. They both worked on desalination research — Bajpayee with Professor Gang Chen and Govindan with Professor John Lienhard.

    Inspired by a childhood spent during a severe drought in Chennai, India, Govindan developed for his PhD a humidification-dehumidification technology that mimicked natural rainfall cycles. It was with this piece of technology, which they named Carrier Gas Extraction (CGE), that the duo founded Gradiant in 2013.

    The key to CGE lies in a proprietary algorithm that accounts for variability in the quality and quantity of the wastewater feed. At the heart of the algorithm is a nondimensional number, which Govindan proposes one day be called the “Lienhard Number,” after his doctoral advisor.

    “When the water quality varies in the system, our technology automatically sends a signal to motors within the plant to adjust the flow rates to bring back the nondimensional number to a value of one. Once it’s brought back to a value of one, you’re running in optimal condition,” explains Govindan, who serves as chief operating officer of Gradiant.
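
    In control terms, what Govindan describes is a feedback loop that nudges flow rates until the monitored nondimensional number returns to one. The sketch below is only an analogy: the relationship inside nondimensional_number() and the proportional correction are placeholders, not Gradiant's proprietary algorithm.

        # Illustrative feedback loop: adjust a flow rate until a monitored
        # nondimensional number returns to 1.0. The formula is a stand-in only.
        def nondimensional_number(feed_quality, flow_rate):
            return feed_quality / flow_rate           # placeholder relationship

        def adjust_flow(feed_quality, flow_rate, gain=0.5, tol=1e-3, max_iter=100):
            for _ in range(max_iter):
                n = nondimensional_number(feed_quality, flow_rate)
                if abs(n - 1.0) < tol:
                    break                             # back at the optimal condition
                flow_rate += gain * (n - 1.0) * flow_rate   # proportional correction
            return flow_rate

        # The feed quality drifts; the controller re-tunes the flow rate to match it.
        print(adjust_flow(feed_quality=1.3, flow_rate=1.0))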

    This system can treat and clean the wastewater produced by a manufacturing plant for reuse, ultimately conserving millions of gallons of water each year.

    As the company has grown, the Gradiant team has added new technologies to their arsenal, including Selective Contaminant Extraction, a cost-efficient method that removes only specific contaminants, and a brine-concentration method called Counter-Flow Reverse Osmosis. They now offer a full technology stack of water and wastewater treatment solutions to clients in industries including pharmaceuticals, energy, mining, food and beverage, and the ever-growing semiconductor industry.

    “We are an end-to-end water solutions provider. We have a portfolio of proprietary technologies and will pick and choose from our ‘quiver’ depending on a customer’s needs,” says Bajpayee, who serves as CEO of Gradiant. “Customers look at us as their water partner. We can take care of their water problem end-to-end so they can focus on their core business.”

    Gradiant has seen explosive growth over the past decade. With 450 water and wastewater treatment plants built to date, they treat the equivalent of 5 million households’ worth of water each day. Recent acquisitions saw their total employees rise to above 500.

    The diversity of Gradiant’s solutions is reflected in their clients, who include Pfizer, AB InBev, and Coca-Cola. They also count semiconductor giants like Micron Technology, GlobalFoundries, Intel, and TSMC among their customers.

    “Over the last few years, we have really developed our capabilities and reputation serving semiconductor wastewater and semiconductor ultrapure water,” says Bajpayee.

    Semiconductor manufacturers require ultrapure water for fabrication. Unlike drinking water, which has total dissolved solids in the parts-per-million range, the water used to manufacture microchips must have total dissolved solids in the parts-per-billion or even parts-per-quadrillion range.

    Currently, the average recycling rate at semiconductor fabrication plants — or fabs — in Singapore is only 43 percent. Using Gradiant’s technologies, these fabs can recycle 98-99 percent of the 10 million gallons of water they require daily. This reused water is pure enough to be put back into the manufacturing process.

    “What we’ve done is eliminated the discharge of this contaminated water and nearly eliminated the dependence of the semiconductor fab on the public water supply,” adds Bajpayee.

    With new regulations being introduced, pressure is increasing for fabs to improve their water use, making sustainability even more important to brand owners and their stakeholders.

    As the domestic semiconductor industry expands in light of the CHIPS and Science Act, Gradiant sees an opportunity to bring their semiconductor water treatment technologies to more factories in the United States.

    Via Separations: Efficient chemical filtration

    Like Bajpayee and Govindan, Shreya Dave ’09, SM ’12, PhD ’16 focused on desalination for her doctoral thesis. Under the guidance of her advisor Jeffrey Grossman, professor of materials science and engineering, Dave built a membrane that could enable more efficient and cheaper desalination.

    A thorough cost and market analysis brought Dave to the conclusion that the desalination membrane she developed would not make it to commercialization.

    “The current technologies are just really good at what they do. They’re low-cost, mass produced, and they worked. There was no room in the market for our technology,” says Dave.

    Shortly after defending her thesis, she read a commentary article in the journal Nature that changed everything. The article outlined a problem. Chemical separations that are central to many manufacturing processes require a huge amount of energy. Industry needed more efficient and cheaper membranes. Dave thought she might have a solution.

    After determining there was an economic opportunity, Dave, Grossman, and Brent Keller PhD ’16 founded Via Separations in 2017. Shortly thereafter, they were chosen as one of the first companies to receive funding from MIT’s venture firm, The Engine.

    Currently, industrial filtration is done by heating chemicals at very high temperatures to separate compounds. Dave likens it to making pasta by boiling all of the water off until it evaporates and all you are left with is the pasta noodles. In manufacturing, this method of chemical separation is extremely energy-intensive and inefficient.

    Via Separations has created the chemical equivalent of a “pasta strainer.” Rather than using heat to separate, their membranes “strain” chemical compounds. This method of chemical filtration uses 90 percent less energy than standard methods.

    While most membranes are made of polymers, Via Separations’ membranes are made with graphene oxide, which can withstand high temperatures and harsh conditions. The membrane is calibrated to the customer’s needs by altering the pore size and tuning the surface chemistry.

    Currently, Dave and her team are focusing on the pulp and paper industry as their beachhead market. They have developed a system that makes the recovery of a substance known as “black liquor” more energy efficient.

    “When a tree becomes paper, only one-third of the biomass is used for the paper. Currently the most valuable use for the remaining two-thirds not needed for paper is to take it from a pretty dilute stream to a pretty concentrated stream using evaporators by boiling off the water,” says Dave.

    This black liquor is then burned. Most of the resulting energy is used to power the filtration process.

    “This closed-loop system accounts for an enormous amount of energy consumption in the U.S. We can make that process 84 percent more efficient by putting the ‘pasta strainer’ in front of the boiler,” adds Dave.

    VulcanForms: Additive manufacturing at industrial scale

    The first semester John Hart taught at MIT was a fruitful one. He taught a course on 3D printing, also known as additive manufacturing (AM). While it wasn’t his main research focus at the time, he found the topic fascinating. So did many of the students in the class, including Martin Feldmann MEng ’14.

    After graduating with his MEng in advanced manufacturing, Feldmann joined Hart’s research group full time. There, they bonded over their shared interest in AM. They saw an opportunity to innovate with an established metal AM technology, known as laser powder bed fusion, and came up with a concept to realize metal AM at an industrial scale.

    The pair co-founded VulcanForms in 2015.

    “We have developed a machine architecture for metal AM that can build parts with exceptional quality and productivity,” says Hart. “And, we have integrated our machines in a fully digital production system, combining AM, postprocessing, and precision machining.”

    Unlike other companies that sell 3D printers for others to produce parts, VulcanForms makes and sells parts for their customers using their fleet of industrial machines. VulcanForms has grown to nearly 400 employees. Last year, the team opened their first production factory, known as “VulcanOne,” in Devens, Massachusetts.

    The quality and precision with which VulcanForms produces parts is critical for products like medical implants, heat exchangers, and aircraft engines. Their machines can print layers of metal thinner than a human hair.

    “We’re producing components that are difficult, or in some cases impossible to manufacture otherwise,” adds Hart, who sits on the company’s board of directors.

    The technologies developed at VulcanForms may help lead to a more sustainable way to manufacture parts and products, both directly through the additive process and indirectly through more efficient, agile supply chains.

    One way that VulcanForms, and AM in general, promotes sustainability is through material savings.

    Many of the materials VulcanForms uses, such as titanium alloys, require a great deal of energy to produce. When titanium parts are 3D-printed, substantially less of the material is used than in a traditional machining process. This material efficiency is where Hart sees AM making a large impact in terms of energy savings.

    Hart also points out that AM can accelerate innovation in clean energy technologies, ranging from more efficient jet engines to future fusion reactors.

    “Companies seeking to de-risk and scale clean energy technologies require know-how and access to advanced manufacturing capability, and industrial additive manufacturing is transformative in this regard,” Hart adds.

    LiquiGlide: Reducing waste by removing friction

    There is an unlikely culprit when it comes to waste in manufacturing and consumer products: friction. Kripa Varanasi, professor of mechanical engineering, and the team at LiquiGlide are on a mission to create a frictionless future, and substantially reduce waste in the process.

    Founded in 2012 by Varanasi and alum David Smith SM ’11, LiquiGlide designs custom coatings that enable liquids to “glide” on surfaces. Every last drop of a product can be used, whether it’s being squeezed out of a tube of toothpaste or drained from a 500-liter tank at a manufacturing plant. Making containers frictionless substantially minimizes wasted product, and eliminates the need to clean a container before recycling or reusing.

    Since launching, the company has found great success in consumer products. Customer Colgate utilized LiquiGlide’s technologies in the design of the Colgate Elixir toothpaste bottle, which has been honored with several industry awards for design. In a collaboration with world-renowned designer Yves Béhar, LiquiGlide is applying their technology to beauty and personal care product packaging. Meanwhile, the U.S. Food and Drug Administration has granted them a Device Master Filing, opening up opportunities for the technology to be used in medical devices, drug delivery, and biopharmaceuticals.

    In 2016, the company developed a system to make manufacturing containers frictionless. Called CleanTanX, the technology is used to treat the surfaces of tanks, funnels, and hoppers, preventing materials from sticking to the side. The system can reduce material waste by up to 99 percent.

    “This could really change the game. It saves wasted product, reduces wastewater generated from cleaning tanks, and can help make the manufacturing process zero-waste,” says Varanasi, who serves as chair at LiquiGlide.

    LiquiGlide works by creating a coating made of a textured solid and a liquid lubricant on the container surface. When applied to a container, the lubricant remains infused within the texture. Capillary forces hold the lubricant in place while allowing it to spread across the surface, creating a continuously lubricated surface that any viscous material can slide right down. The company uses a thermodynamic algorithm to determine the combinations of safe solids and liquids depending on the product, whether it’s toothpaste or paint.

    The company has built a robotic spraying system that can treat large vats and tanks at manufacturing plants on site. In addition to saving companies millions of dollars in wasted product, LiquiGlide drastically reduces the amount of water needed to regularly clean these containers, which normally have product stuck to the sides.

    “Normally when you empty everything out of a tank, you still have residue that needs to be cleaned with a tremendous amount of water. In agrochemicals, for example, there are strict regulations about how to deal with the resulting wastewater, which is toxic. All of that can be eliminated with LiquiGlide,” says Varanasi.

    While the closure of many manufacturing facilities early in the pandemic slowed down the rollout of CleanTanX pilots at plants, things have picked up in recent months. As manufacturing ramps up both globally and domestically, Varanasi sees a growing need for LiquiGlide’s technologies, especially for liquids like semiconductor slurry.

    Companies like Gradiant, Via Separations, VulcanForms, and LiquiGlide demonstrate that an expansion in manufacturing industries does not need to come at a steep environmental cost. It is possible for manufacturing to be scaled up in a sustainable way.

    “Manufacturing has always been the backbone of what we do as mechanical engineers. At MIT in particular, there is always a drive to make manufacturing sustainable,” says Evelyn Wang, Ford Professor of Engineering and former head of the Department of Mechanical Engineering. “It’s amazing to see how startups that have an origin in our department are looking at every aspect of the manufacturing process and figuring out how to improve it for the health of our planet.”

    As legislation like the CHIPS and Science Act fuels growth in manufacturing, there will be an increased need for startups and companies that develop solutions to mitigate the environmental impact, bringing us closer to a more sustainable future.

  • New nanosatellite tests autonomy in space

    In May 2022, a SpaceX Falcon 9 rocket launched the Transporter-5 mission into orbit. The mission contained a collection of micro and nanosatellites from both industry and government, including one from MIT Lincoln Laboratory called the Agile MicroSat (AMS).

    AMS’s primary mission is to test automated maneuvering capabilities in the tumultuous very low-Earth orbit (VLEO) environment, starting at 525 kilometers above the surface and descending from there. VLEO is a challenging location for satellites because the higher air density, coupled with variable space weather, causes increased and unpredictable drag that requires frequent maneuvers to maintain position. Using a commercial off-the-shelf electric-ion propulsion system and custom algorithms, AMS is testing how well it can execute automated navigation and control over an initial mission period of six months.

    “AMS integrates electric propulsion and autonomous navigation and guidance control algorithms that push a lot of the operation of the thruster onto the spacecraft — somewhat like a self-driving car,” says Andrew Stimac, who is the principal investigator for the AMS program and the leader of the laboratory’s Integrated Systems and Concepts Group.

    Stimac sees AMS as a kind of pathfinder mission for the field of small satellite autonomy. Autonomy is essential to support the growing number of small satellite launches for industry and science because it can reduce the cost and labor needed to maintain them, enable missions that call for quick and impromptu responses, and help to avoid collisions in an already-crowded sky.

    AMS is the first-ever test of a nanosatellite with this type of automated maneuvering capability.

    AMS uses an electric propulsion thruster that was selected to meet the size and power constraints of a nanosatellite while providing enough thrust and endurance to enable multiyear missions that operate in VLEO. The flight software, called the Bus Hosted Onboard Software Suite, was designed to autonomously operate the thruster to change the spacecraft’s orbit. Operators on the ground can give AMS a high-level command, such as to descend to and maintain a 300-kilometer orbit, and the software will schedule thruster burns to achieve that command autonomously, using measurements from the onboard GPS receiver as feedback. This experimental software is separate from the bus flight software, which allows AMS to safely test its novel algorithms without endangering the spacecraft.
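
    The division of labor described here, a high-level command from the ground with burns scheduled on board against GPS feedback, amounts to a simple closed loop. The numbers and decay model in the sketch below are invented for illustration and do not describe the Bus Hosted Onboard Software Suite.

        # Loose analogy to the onboard logic: hold a commanded altitude by
        # scheduling burns when GPS feedback shows the orbit has decayed too far.
        TARGET_ALTITUDE_KM = 300.0
        TOLERANCE_KM = 2.0
        DRAG_DECAY_KM_PER_DAY = 0.8       # assumed mean decay from drag in VLEO
        BURN_RAISE_KM = 3.0               # assumed altitude regained per burn

        altitude = 305.0                  # "GPS" altitude estimate, in km
        for day in range(1, 31):
            altitude -= DRAG_DECAY_KM_PER_DAY
            if altitude < TARGET_ALTITUDE_KM - TOLERANCE_KM:
                altitude += BURN_RAISE_KM
                print(f"day {day}: burn scheduled, altitude back to {altitude:.1f} km")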

    “One of the enablers for AMS is the way in which we’ve created this software sandbox onboard the spacecraft,” says Robert Legge, who is another member of the AMS team. “We have our own hosted software that’s running on the primary flight computer, but it’s separate from the critical health and safety avionics software. Basically, you can view this as being a little development environment on the spacecraft where we can test out different algorithms.”

    AMS has two secondary missions called Camera and Beacon. Camera’s mission is to take photos and short video clips of the Earth’s surface while AMS is in different low-Earth orbit positions.

    “One of the things we’re hoping to demonstrate is the ability to respond to current events,” says Rebecca Keenan, who helped to prepare the Camera payload. “We could hear about something that happened, like a fire or flood, and then respond pretty quickly to maneuver the satellite to image it.”

    Keenan and the rest of the AMS team are collaborating with the laboratory’s DisasterSat program, which aims to improve satellite image processing pipelines to help relief agencies respond to disasters more quickly. Small satellites that could schedule operations on-demand, rather than planning them months in advance before launch, could be a great asset to disaster response efforts.

    The other payload, Beacon, is testing new adaptive optics capabilities for tracking fast-moving targets by sending laser light from the moving satellite to a ground station at the laboratory’s Haystack Observatory in Westford, Massachusetts. Enabling precise laser pointing from an agile satellite could aid many different types of space missions, such as communications and tracking space debris. It could also be used for emerging programs such as Breakthrough Starshot, which is developing a satellite that can accelerate to high speeds using a laser-propelled lightsail.

    “As far as we know, this is the first on-orbit artificial guide star that has launched for a dedicated adaptive optics purpose,” says Lulu Liu, who worked on the Beacon payload. “Theoretically, the laser it carries can be maneuvered into position on other spacecraft to support a large number of science missions in different regions of the sky.”

    The team developed Beacon with a strict budget and timeline and hope that its success will shorten the design and test loop of next-generation laser transmitter systems. “The idea is that we could have a number of these flying in the sky at once, and a ground system can point to one of them and get near-real-time feedback on its performance,” says Liu.

    AMS weighs under 12 kilograms with 6U dimensions (23 x 11 x 36 centimeters). The bus was designed by Blue Canyon Technologies and the thruster was designed by Enpulsion GmbH.

    Legge says that the AMS program was approached as an opportunity for Lincoln Laboratory to showcase its ability to conduct work in the space domain quickly and flexibly. Some major roadblocks to rapid development of new space technology have been long timelines, high costs, and the extremely low risk tolerance associated with traditional space programs. “We wanted to show that we can really do rapid prototyping and testing of space hardware and software on orbit at an affordable cost,” Legge says.

    “AMS shows the value and fast time-to-orbit afforded by teaming with rapid space commercial partners for spacecraft core bus technologies and launch and ground segment operations, while allowing the laboratory to focus on innovative mission concepts, advanced components and payloads, and algorithms and processing software,” says Dan Cousins, who is the program manager for AMS. “The AMS team appreciates the support from the laboratory’s Technology Office for allowing us to showcase an effective operating model for rapid space programs.”

    AMS took its first image on June 1, completed its thruster commissioning in July, and has begun to descend toward its target VLEO position.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by MIT Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering Michael F. Howland.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model which predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow-physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used the physics-based, data-assisted modeling of the flow within the wind farm and the resulting power production of each turbine, given different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, the team has found that by turning one turbine just slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units will more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
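
    The intuition can be reproduced with a toy two-turbine model: yawing the upstream machine costs it some power but steers its wake off the downstream rotor. The cosine power loss and Gaussian wake term below are simplified placeholders chosen only to make the trade-off visible; they are not the physics-based, data-assisted model used in the study.

        # Toy two-turbine example of greedy versus cooperative yaw control.
        import math

        def farm_power(upstream_yaw_deg):
            # Total power (arbitrary units) of an upstream + downstream turbine pair.
            yaw = math.radians(upstream_yaw_deg)
            upstream = math.cos(yaw) ** 3                                     # yawing costs the upstream unit
            wake_deficit = 0.4 * math.exp(-(upstream_yaw_deg / 15.0) ** 2)    # but deflects its wake away
            downstream = 1.0 - wake_deficit
            return upstream + downstream

        best = max(range(0, 41), key=farm_power)       # search yaw offsets of 0-40 degrees
        print(f"greedy (0 degrees): {farm_power(0):.3f}")
        print(f"cooperative ({best} degrees): {farm_power(best):.3f}")

    In this toy setup the farm-level optimum lands near a 20-degree offset, echoing the kind of misalignment described above, though the real gain depends entirely on the site and wind conditions.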

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and working to improve the operational instructions they derive from the model, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram formerly at ReNew Power Private Limited in Haryana, India and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters in the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: Information and communications technology already accounts for more than 2 percent of global energy demand, which is on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.” (Image: Haley McDevitt)

    The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard drives (HDD), can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect.

    Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
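
    The savings Naffziger and others describe scale directly with bit width and with sparsity. The snippet below is a generic NumPy/SciPy illustration of both effects, not any company's production pipeline; the array sizes and the 90 percent sparsity level are arbitrary.

        # Memory footprint of the same tensor at two precisions, plus the further
        # saving from storing only the nonzero entries of a sparse matrix.
        import numpy as np
        from scipy.sparse import csr_matrix

        activations32 = np.random.rand(1024, 1024).astype(np.float32)
        activations16 = activations32.astype(np.float16)           # half the bytes
        print(activations32.nbytes // 1024, "KiB at 32-bit")        # 4096 KiB
        print(activations16.nbytes // 1024, "KiB at 16-bit")        # 2048 KiB

        weights = np.random.rand(1024, 1024).astype(np.float32)
        weights[weights < 0.9] = 0.0                                # roughly 90 percent zeros
        sparse_weights = csr_matrix(weights)
        print(sparse_weights.data.nbytes // 1024, "KiB of nonzero values stored sparsely")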

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.” (Image: Haley McDevitt)

    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself, and the energy that the devices use. Then we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and especially blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.
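    In schematic form, and neglecting radiation and other loss channels for simplicity (the notation here is illustrative, not taken from the researchers’ papers), steady state requires that the heating power deposited inside a given flux surface be carried out through that surface by the turbulent heat flux:

$$\int_V \left( S_{\mathrm{aux}} + S_{\alpha} \right)\, dV \;=\; \oint_{\partial V} \mathbf{q}_{\mathrm{turb}} \cdot d\mathbf{A},$$

    where \(S_{\mathrm{aux}}\) is the externally applied heating, \(S_{\alpha}\) is the self-heating from fusion-produced alpha particles, and \(\mathbf{q}_{\mathrm{turb}}\) is the turbulent heat flux. Matching the two sides at every radius is what fixes the temperature and density profiles.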

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which they then used to guide an efficient search of the parameter space. The results of that search were compared to the exact calculations at each optimum point, and the system was iterated until it reached the desired level of accuracy. The researchers estimate that the technique reduced the number of CGYRO runs by a factor of four.
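    As a rough illustration of this surrogate-in-the-loop pattern (a generic sketch, not the researchers’ actual workflow: the quadratic surrogate, the toy objective standing in for a CGYRO run, and all names are assumptions made for this example), the loop alternates between fitting a cheap model to the expensive evaluations done so far, searching that model widely, and spending one more expensive evaluation to verify its suggestion:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_mismatch(x):
    """Stand-in for the power-balance mismatch at profile parameters x.
    In the real workflow, each call would be a full nonlinear CGYRO run."""
    return (x[0] - 1.3) ** 2 + 2.0 * (x[1] + 0.4) ** 2 + 0.1 * np.sin(5 * x[0]) ** 2

def quadratic_features(X):
    """Cheap surrogate basis: a full quadratic in the two parameters."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x1, x2 * x2, x1 * x2])

# 1) Seed the surrogate with a handful of expensive evaluations.
X = rng.uniform(-2.0, 2.0, size=(6, 2))
y = np.array([expensive_mismatch(x) for x in X])

for _ in range(15):
    # 2) Fit the cheap surrogate to everything evaluated so far.
    coeffs, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

    # 3) Search the surrogate (cheap) over many candidate parameter sets.
    candidates = rng.uniform(-2.0, 2.0, size=(2000, 2))
    best = candidates[np.argmin(quadratic_features(candidates) @ coeffs)]

    # 4) Verify the surrogate's suggestion with one more expensive evaluation.
    value = expensive_mismatch(best)
    X, y = np.vstack([X, best]), np.append(y, value)

    # 5) Stop once the verified mismatch is small enough for the purpose at hand.
    if value < 1e-2:
        break

print(f"best verified mismatch {y.min():.4f} after {len(y)} expensive evaluations")
```

    The payoff grows with the cost of each evaluation; when every call is a multi-hour gyrokinetic simulation rather than a one-line function, each run saved matters.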

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • in

    Improving predictions of sea level rise for the next century

    When we think of climate change, one of the most dramatic images that comes to mind is the loss of glacial ice. As the Earth warms, these enormous rivers of ice become a casualty of the rising temperatures. But, as ice sheets retreat, they also become an important contributor to one of the more dangerous outcomes of climate change: sea-level rise. At MIT, an interdisciplinary team of scientists is determined to improve sea level rise predictions for the next century, in part by taking a closer look at the physics of ice sheets.

    Last month, two research proposals on the topic, led by Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), were announced as finalists in the MIT Climate Grand Challenges initiative. Launched in July 2020, Climate Grand Challenges fielded almost 100 project proposals from collaborators across the Institute who heeded the bold charge: to develop research and innovations that will deliver game-changing advances in the world’s efforts to address the climate challenge.

    As finalists, Minchew and his collaborators from the departments of Urban Studies and Planning, Economics, Civil and Environmental Engineering, the Haystack Observatory, and external partners, received $100,000 to develop their research plans. A subset of the 27 proposals tapped as finalists will be announced next month, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    One goal of both Minchew proposals is to more fully understand the most fundamental processes that govern rapid changes in glacial ice, and to use that understanding to build next-generation models that are more predictive of ice sheet behavior as they respond to, and influence, climate change.

    “We need to develop more accurate and computationally efficient models that provide testable projections of sea-level rise over the coming decades. To do so quickly, we want to make better and more frequent observations and learn the physics of ice sheets from these data,” says Minchew. “For example, how much stress do you have to apply to ice before it breaks?”

    Currently, Minchew’s Glacier Dynamics and Remote Sensing group uses satellites to observe the ice sheets on Greenland and Antarctica primarily with interferometric synthetic aperture radar (InSAR). But the data are often collected over long intervals of time, which only gives them “before and after” snapshots of big events. By taking more frequent measurements on shorter time scales, such as hours or days, they can get a more detailed picture of what is happening in the ice.

    “Many of the key unknowns in our projections of what ice sheets are going to look like in the future, and how they’re going to evolve, involve the dynamics of glaciers, or our understanding of how the flow speed and the resistances to flow are related,” says Minchew.

    At the heart of the two proposals is the creation of SACOS, the Stratospheric Airborne Climate Observatory System. The group envisions developing solar-powered drones that can fly in the stratosphere for months at a time, taking more frequent measurements using a new lightweight, low-power radar and other high-resolution instrumentation. They also propose air-dropping sensors directly onto the ice, equipped with seismometers and GPS trackers to measure high-frequency vibrations in the ice and pinpoint the motions of its flow.

    How glaciers contribute to sea level rise

    Current climate models predict an increase in sea levels over the next century, but by just how much is still unclear. Estimates are anywhere from 20 centimeters to two meters, which is a large difference when it comes to enacting policy or mitigation. Minchew points out that response measures will be different, depending on which end of the scale it falls toward. If it’s closer to 20 centimeters, coastal barriers can be built to protect low-level areas. But with higher surges, such measures become too expensive and inefficient to be viable, as entire portions of cities and millions of people would have to be relocated.

    “If we’re looking at a future where we could get more than a meter of sea level rise by the end of the century, then we need to know about that sooner rather than later so that we can start to plan and to do our best to prepare for that scenario,” he says.

    There are two ways glaciers and ice sheets contribute to rising sea levels: direct melting of the ice and accelerated transport of ice to the oceans. In Antarctica, warming waters melt the margins of the ice sheets, which tends to reduce the resistive stresses and allow ice to flow more quickly to the ocean. This thinning can also cause the ice shelves to be more prone to fracture, facilitating the calving of icebergs — events which sometimes cause even further acceleration of ice flow.

    Using data collected by SACOS, Minchew and his group can better understand what material properties in the ice allow for fracturing and calving of icebergs, and build a more complete picture of how ice sheets respond to climate forces. 

    “What I want is to reduce and quantify the uncertainties in projections of sea level rise out to the year 2100,” he says.

    From that more complete picture, the team — which also includes economists, engineers, and urban planning specialists — can work on developing predictive models and methods to help communities and governments estimate the costs associated with sea level rise, develop sound infrastructure strategies, and spur engineering innovation.

    Understanding glacier dynamics

    More frequent radar measurements and the collection of higher-resolution seismic and GPS data will allow Minchew and the team to develop a better understanding of the broad category of glacier dynamics — including calving, an important process in setting the rate of sea level rise which is currently not well understood.  

    “Some of what we’re doing is quite similar to what seismologists do,” he says. “They measure seismic waves following an earthquake, or a volcanic eruption, or things of this nature and use those observations to better understand the mechanisms that govern these phenomena.”

    Air-droppable sensors will help them collect information about ice sheet movement, but this method comes with drawbacks — like installation and maintenance, which is difficult to do out on a massive ice sheet that is moving and melting. Also, the instruments can each only take measurements at a single location. Minchew equates it to a bobber in water: All it can tell you is how the bobber moves as the waves disturb it.

    But by also taking continuous radar measurements from the air, Minchew’s team can collect observations both in space and in time. Instead of just watching the bobber in the water, they can effectively make a movie of the waves propagating out, as well as visualize processes like iceberg calving happening in multiple dimensions.

    Once the bobbers are in place and the movies recorded, the next step is developing machine learning algorithms to help analyze all the new data being collected. While this data-driven kind of discovery has been a hot topic in other fields, this is the first time it has been applied to glacier research.

    “We’ve developed this new methodology to ingest this huge amount of data,” he says, “and from that create an entirely new way of analyzing the system to answer these fundamental and critically important questions.”