More stories

  • Moving past the Iron Age

    MIT graduate student Sydney Rose Johnson has never seen the steel mills in central India. She’s never toured the American Midwest’s hulking steel plants or the mini mills dotting the Mississippi River. But in the past year, she’s become more familiar with steel production than she ever imagined.

    A fourth-year dual-degree MBA and PhD candidate in chemical engineering, a graduate research assistant with the MIT Energy Initiative (MITEI), and a 2022-23 Shell Energy Fellow, Johnson looks at ways to reduce carbon dioxide (CO2) emissions generated by industrial processes in hard-to-abate industries. Those include steel.

    Almost every aspect of infrastructure and transportation — buildings, bridges, cars, trains, mass transit — contains steel. The manufacture of steel hasn’t changed much since the Iron Age, with some steel plants in the United States and India operating almost continually for more than a century, their massive blast furnaces re-lined periodically with carbon and graphite to keep them going.

    According to the World Economic Forum, steel demand is projected to increase 30 percent by 2050, spurred in part by population growth and economic development in China, India, Africa, and Southeast Asia.

    The steel industry is among the three biggest producers of CO2 worldwide. Every ton of steel produced in 2020 emitted, on average, 1.89 tons of CO2 into the atmosphere — around 8 percent of global CO2 emissions, according to the World Steel Association.

    A combination of technical strategies and financial investments, Johnson notes, will be needed to wrestle that 8 percent figure down to something more planet-friendly.

    Johnson’s thesis focuses on modeling and analyzing ways to decarbonize steel. Using data mined from academic and industry sources, she builds models to calculate emissions, costs, and energy consumption for plant-level production.

    “I optimize steel production pathways using emission goals, industry commitments, and cost,” she says. Based on the projected growth of India’s steel industry, she applies this approach to case studies that predict outcomes for some of the country’s thousand-plus factories, which together have a production capacity of 154 million metric tons of steel. For the United States, she looks at the effect of Inflation Reduction Act (IRA) credits. The 2022 IRA provides incentives that could accelerate the steel industry’s efforts to minimize its carbon emissions.

    Johnson compares emissions and costs across different production pathways, asking questions such as: “If we start today, what would a cost-optimal production scenario look like years from now? How would it change if we added in credits? What would have to happen to cut 2005 levels of emissions in half by 2030?”
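    The article doesn’t publish Johnson’s actual model, but the kind of cost-optimal pathway analysis she describes can be sketched as a small linear program. In the hypothetical sketch below, the pathway names, costs, and emission intensities are illustrative assumptions, not her data:

    ```python
    # Toy pathway optimization: choose how much steel to make via each route
    # so total cost is minimized while meeting demand and an emissions cap.
    # All numbers are illustrative placeholders, not Johnson's inputs.
    from scipy.optimize import linprog

    pathways = ["BF-BOF", "NG-DRI-EAF", "H2-DRI-EAF"]
    cost = [400.0, 480.0, 620.0]          # assumed $/tonne of steel
    co2 = [2.1, 1.4, 0.3]                 # assumed tCO2/tonne of steel
    demand = 154e6                        # tonnes (India's stated capacity)
    emissions_cap = 0.5 * demand * 2.1    # e.g., half an all-blast-furnace baseline

    # minimize cost @ x  subject to  co2 @ x <= cap,  sum(x) == demand,  x >= 0
    res = linprog(c=cost,
                  A_ub=[co2], b_ub=[emissions_cap],
                  A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
                  bounds=[(0, None)] * 3)

    for name, tonnes in zip(pathways, res.x):
        print(f"{name}: {tonnes / 1e6:.1f} Mt")
    ```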

    “My goal is to gain an understanding of how current and emerging decarbonization strategies will be integrated into the industry,” Johnson says.

    Grappling with industrial problems

    Growing up in Marietta, Georgia, outside Atlanta, Johnson never came near a plant of any kind. Her closest connections to industry were through her father, a chemical engineer working in logistics and procuring steel for an aerospace company, and a semester in high school spent working alongside chemical engineers tweaking the pH of an anti-foaming agent.

    At Kennesaw Mountain High School, a STEM magnet program in Cobb County, students devote an entire semester of their senior year to an internship and research project.

    Johnson chose to work at Kemira Chemicals, which develops chemical solutions for water-intensive industries with a focus on pulp and paper, water treatment, and energy systems.

    “My goal was to understand why a polymer product was falling out of suspension — essentially, why it was less stable,” she recalls. She learned how to formulate a lab-scale version of the product and conduct tests to measure its viscosity and acidity. Comparing the lab-scale and regular product results revealed that acidity was an important factor. “Through conversations with my mentor, I learned this was connected with the holding conditions, which led to the product being oxidized,” she says. With the anti-foaming agent’s problem identified, steps could be taken to fix it.

    “I learned how to apply problem-solving. I got to learn more about working in an industrial environment by connecting with the team in quality control as well as with R&D and chemical engineers at the plant site,” Johnson says. “This experience confirmed I wanted to pursue engineering in college.”

    As an undergraduate at Stanford University, she learned about the different fields — biotechnology, environmental science, electrochemistry, and energy, among others — open to chemical engineers. “It seemed like a very diverse field and application range,” she says. “I was just so intrigued by the different things I saw people doing and all these different sets of issues.”

    Turning up the heat

    At MIT, she turned her attention to how certain industries can offset their detrimental effects on climate.

    “I’m interested in the impact of technology on global communities, the environment, and policy. Energy applications affect every field. My goal as a chemical engineer is to have a broad perspective on problem-solving and to find solutions that benefit as many people, especially those under-resourced, as possible,” says Johnson, who has served on the MIT Chemical Engineering Graduate Student Advisory Board and in the MIT Energy and Climate Club, and is involved with diversity and inclusion initiatives.

    The steel industry, Johnson acknowledges, is not what she first imagined when she saw herself working toward mitigating climate change.

    “But now, understanding the role the material has in infrastructure development, combined with its heavy use of coal, has illuminated how the sector, along with other hard-to-abate industries, is important in the climate change conversation,” Johnson says.

    Despite the advanced age of many steel mills, some are quite energy-efficient, she notes. Yet these operations, which run at temperatures upwards of 3,000 degrees Fahrenheit, remain emission-intensive.

    Steel is made from iron ore, a mixture of iron, oxygen, and other minerals found on virtually every continent, with Brazil and Australia alone exporting millions of metric tons per year. In a process that commonly dates back to the 19th century, iron is extracted from the ore through smelting — heating the ore in blast furnaces until the metal becomes spongy and its chemical components begin to break down.

    A reducing agent is needed to release the oxygen trapped in the ore, transforming it from its raw form to pure iron. That’s where most emissions come from, Johnson notes.
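    The underlying chemistry here is textbook blast-furnace reduction (shown schematically below, not drawn from Johnson’s work): coke supplies the carbon monoxide that strips oxygen from the ore, and that oxygen leaves the process bound to carbon as CO2.

    ```latex
    % Schematic blast-furnace reduction chemistry (standard reactions):
    \[
      \mathrm{2\,C + O_2 \rightarrow 2\,CO}, \qquad
      \mathrm{Fe_2O_3 + 3\,CO \rightarrow 2\,Fe + 3\,CO_2}
    \]
    ```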

    “We want to reduce emissions, and we want to make a cleaner and safer environment for everyone,” she says. “It’s not just the CO2 emissions. It’s also sometimes NOx and SOx [nitrogen oxides and sulfur oxides] and air pollution particulate matter at some of these production facilities that can affect people as well.”

    In 2020, the International Energy Agency released a roadmap exploring potential technologies and strategies that would make the iron and steel sector more compatible with the agency’s vision of increased sustainability. Emission reductions can be accomplished with more modern technology, the agency suggests, or by substituting the fuels producing the immense heat needed to process ore. Traditionally, the fuels used for iron reduction have been coal and natural gas. Alternative fuels include clean hydrogen, electricity, and biomass.

    Using the MITEI Sustainable Energy System Analysis Modeling Environment (SESAME), Johnson analyzes various decarbonization strategies. She considers options such as switching fuel for furnaces to hydrogen with a little bit of natural gas or adding carbon-capture devices. The models demonstrate how effective these tactics are likely to be. The answers aren’t always encouraging.

    “Upstream emissions can determine how effective the strategies are,” Johnson says. Charcoal derived from forestry biomass seemed to be a promising alternative fuel, but her models showed that processing the charcoal for use in the blast furnace limited its effectiveness in negating emissions.

    Despite the challenges, “there are definitely ways of moving forward,” Johnson says. “It’s been an intriguing journey in terms of understanding where the industry is at. There’s still a long way to go, but it’s doable.”

    Johnson is heartened by the steel industry’s efforts to recycle scrap into new steel products and incorporate more emission-friendly technologies and practices, some of which result in significantly lower CO2 emissions than conventional production.

    A major issue is that low-carbon steel can be more than 50 percent more costly than conventionally produced steel. “There are costs associated with making the transition, but in the context of the environmental implications, I think it’s well worth it to adopt these technologies,” she says.

    After graduation, Johnson plans to continue to work in the energy field. “I definitely want to use a combination of engineering knowledge and business knowledge to work toward mitigating climate change, potentially in the startup space with clean technology or even in a policy context,” she says. “I’m interested in connecting the private and public sectors to implement measures for improving our environment and benefiting as many people as possible.”

  • Generative AI for smart grid modeling

    MIT’s Laboratory for Information and Decision Systems (LIDS) has been awarded $1,365,000 in funding from the Appalachian Regional Commission (ARC) to support its involvement with an innovative project, “Forming the Smart Grid Deployment Consortium (SGDC) and Expanding the HILLTOP+ Platform.”

    The grant was made available through ARC’s Appalachian Regional Initiative for Stronger Economies, which fosters regional economic transformation through multi-state collaboration.

    Led by Kalyan Veeramachaneni, research scientist and principal investigator at LIDS’ Data to AI Group, the project will focus on creating AI-driven generative models for customer load data. Veeramachaneni and colleagues will work alongside a team of universities and organizations led by Tennessee Tech University, including collaborators across Ohio, Pennsylvania, West Virginia, and Tennessee, to develop and deploy smart grid modeling services through the SGDC project.

    These generative models have far-reaching applications, including grid modeling and training algorithms for energy tech startups. When the models are trained on existing data, they create additional, realistic data that can augment limited datasets or stand in for sensitive ones. Stakeholders can then use these models to understand and plan for specific what-if scenarios far beyond what could be achieved with existing data alone. For example, generated data can predict the potential load on the grid if an additional 1,000 households were to adopt solar technologies, how that load might change throughout the day, and similar contingencies vital to future planning.
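    As a concrete illustration, the hypothetical sketch below fits a generative model to a table of customer load data using the open-source Synthetic Data Vault (SDV) library from Veeramachaneni’s group. The article doesn’t specify which models the SGDC project will use, and the file and column layout here are placeholders:

    ```python
    # Fit a generative model to tabular customer load data, then sample
    # synthetic households -- e.g., to model 1,000 new solar adopters.
    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer

    # Hypothetical file: one row per household, hourly kW readings as columns.
    loads = pd.read_csv("customer_loads.csv")

    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(loads)

    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(loads)

    # Realistic-but-synthetic rows that can augment or stand in for real data.
    synthetic_loads = synthesizer.sample(num_rows=1000)
    ```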

    The generative AI models developed by Veeramachaneni and his team will provide inputs to modeling services based on the HILLTOP+ microgrid simulation platform, originally prototyped by MIT Lincoln Laboratory. HILLTOP+ will be used to model and test new smart grid technologies in a virtual “safe space,” providing rural electric utilities with increased confidence in deploying smart grid technologies, including utility-scale battery storage. Energy tech startups will also benefit from HILLTOP+ grid modeling services, enabling them to develop and virtually test their smart grid hardware and software products for scalability and interoperability.

    The project aims to assist rural electric utilities and energy tech startups in mitigating the risks associated with deploying these new technologies. “This project is a powerful example of how generative AI can transform a sector — in this case, the energy sector,” says Veeramachaneni. “In order to be useful, generative AI technologies and their development have to be closely integrated with domain expertise. I am thrilled to be collaborating with experts in grid modeling, and working alongside them to integrate the latest and greatest from my research group and push the boundaries of these technologies.”

    “This project is testament to the power of collaboration and innovation, and we look forward to working with our collaborators to drive positive change in the energy sector,” says Satish Mahajan, principal investigator for the project at Tennessee Tech and a professor of electrical and computer engineering. Tennessee Tech’s Center for Rural Innovation director, Michael Aikens, adds, “Together, we are taking significant steps towards a more sustainable and resilient future for the Appalachian region.”

  • MIT researchers remotely map crops, field by field

    Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

    Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

    The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — and determined which of the four types was grown, at every 10 meters, and without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

    The team is applying their mapping technique to other countries such as India, where small farms sustain most of the population but the type of crops grown from farm to farm has historically been poorly recorded.

    “It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering, and the Institute for Data, Systems, and Society (IDSS). “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

    Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

    Ground truth

    Smallholder farms are often run by a single family or farmer who subsists on the crops and livestock they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low- to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

    Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. The models can then label and map wider swaths of farmland that assessors don’t cover but that satellites do.

    “What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

    The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

    In recent years, such services have extended their coverage to low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

    Cropped image

    In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

    Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, using various training methods and tools, including iNaturalist — a web-based crowdsourced biodiversity database — and GPT-4V, a multimodal large language model that enables a user to input an image and ask the model to identify what the image is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting — rice, maize, sugarcane, or cassava.
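    The paper’s exact architecture and training recipe aren’t given here; the sketch below shows one plausible shape for this labeling stage — fine-tuning a pretrained CNN on the agronomist-labeled images — with the model choice and hyperparameters as assumptions:

    ```python
    # Fine-tune a pretrained CNN to label roadside images with one of four
    # crops. Architecture and hyperparameters are assumed for illustration.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 4)  # rice, maize, sugarcane, cassava

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One gradient step on a batch of agronomist-labeled images."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```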

    The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

    “Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

    The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data taken of the rest of the country, where crop labels were not generated or available. From the associations that the model learned, it then assigned crop labels across Thailand, generating a country-wide map of crop types, at a resolution of 10 square meters.
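    Again as an illustration rather than the team’s actual model, this second stage might look like the following, with a generic classifier learning crop type from per-location satellite features (file names and model choice are assumptions):

    ```python
    # Learn crop type from per-pixel satellite time series, using the labels
    # produced by the street-view model as training targets.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # X: one row per labeled location; columns are band measurements (e.g.,
    # greenness, reflectivity) sampled across the growing season.
    # y: crop labels (0=rice, 1=maize, 2=sugarcane, 3=cassava).
    X = np.load("satellite_timeseries.npy")  # hypothetical files
    y = np.load("crop_labels.npy")

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))

    # The trained model can then label every 10-meter pixel nationwide.
    ```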

    This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers originally set aside and had labeled by the agronomist. These human-labeled images were used to validate the map’s labels, and when the team checked whether the map’s labels matched the expert, “gold standard” labels, they agreed 93 percent of the time.

    “In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

    The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

    “There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

    The team is working to generate crop maps in India, which could be used to inform policies having to do with assessing and bolstering yields, as global temperatures and populations rise.

    “What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

  • Researchers release open-source space debris model

    MIT’s Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) announced the public beta release of the MIT Orbital Capacity Assessment Tool (MOCAT) during the 2023 Organization for Economic Cooperation and Development (OECD) Space Forum Workshop on Dec. 14. MOCAT enables users to model the long-term future space environment to understand growth in space debris and assess the effectiveness of debris-prevention mechanisms.

    With the escalating congestion in low Earth orbit, driven by a surge in satellite deployments, the risk of collisions and space debris proliferation is a pressing concern. Conducting thorough space environment studies is critical for developing effective strategies for fostering responsible and sustainable use of space resources. 

    MOCAT stands out among orbital modeling tools for its capability to model individual objects, diverse parameters, orbital characteristics, fragmentation scenarios, and collision probabilities. With the ability to differentiate between object categories, generalize parameters, and offer multi-fidelity computations, MOCAT emerges as a versatile and powerful tool for comprehensive space environment analysis and management.

    MOCAT is intended to provide an open-source tool to empower stakeholders including satellite operators, regulators, and members of the public to make data-driven decisions. The ARCLab team has been developing these models for the last several years, recognizing that the lack of open-source implementation of evolutionary modeling tools limits stakeholders’ ability to develop consensus on actions to help improve space sustainability. This beta release is intended to allow users to experiment with the tool and provide feedback to help guide further development.

    Richard Linares, the principal investigator for MOCAT and an MIT associate professor of aeronautics and astronautics, expresses excitement about the tool’s potential impact: “MOCAT represents a significant leap forward in orbital capacity assessment. By making it open-source and publicly available, we hope to engage the global community in advancing our understanding of satellite orbits and contributing to the sustainable use of space.”

    MOCAT consists of two main components. MOCAT-MC evaluates space environment evolution with individual trajectory simulation and Monte Carlo parameter analysis, providing both a high-level view of the overall environment and a high-fidelity analysis of the evolution of individual space objects. The MOCAT Source Sink Evolutionary Model (MOCAT-SSEM), meanwhile, uses a lower-fidelity modeling approach that can run on personal computers within seconds to minutes. MOCAT-MC and MOCAT-SSEM can be accessed separately via GitHub.
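    To make the source-sink idea concrete, here is a deliberately toy model — not MOCAT’s actual equations — in which satellite and debris populations evolve through launches, drag removal, and collisions. Every rate constant below is an assumption:

    ```python
    # Toy two-population source-sink model: satellites S and debris N.
    from scipy.integrate import solve_ivp

    launch_rate = 1000.0       # satellites launched per year (assumed)
    sat_lifetime = 5.0         # years before a satellite deorbits (assumed)
    debris_lifetime = 25.0     # years for drag to clear a fragment (assumed)
    collision_coeff = 1e-9     # collisions per object-pair per year (assumed)
    fragments_per_collision = 100.0

    def rhs(t, y):
        S, N = y
        collisions = collision_coeff * S * N
        dS = launch_rate - S / sat_lifetime - collisions
        dN = fragments_per_collision * collisions - N / debris_lifetime
        return [dS, dN]

    sol = solve_ivp(rhs, (0.0, 100.0), [5000.0, 20000.0])
    S_end, N_end = sol.y[:, -1]
    print(f"After 100 years: {S_end:.0f} satellites, {N_end:.0f} debris objects")
    ```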

    MOCAT’s initial development has been supported by the Defense Advanced Research Projects Agency (DARPA) and NASA’s Office of Technology and Strategy.

    “We are thrilled to support this groundbreaking orbital debris modeling work and the new knowledge it created,” says Charity Weeden, associate administrator for the Office of Technology, Policy, and Strategy at NASA headquarters in Washington. “This open-source modeling tool is a public good that will advance space sustainability, improve evidence-based policy analysis, and help all users of space make better decisions.”

  • Co-creating climate futures with real-time data and spatial storytelling

    Virtual story worlds and game engines aren’t just for video games anymore. They are now tools for scientists and storytellers to digitally twin existing physical spaces and then turn them into vessels to dream up speculative climate stories and build collective designs of the future. That’s the theory and practice behind the MIT WORLDING initiative.

    Twice this year, WORLDING matched world-class climate story teams working in XR (extended reality) with relevant labs and researchers across MIT. One global group returned for a virtual gathering online in partnership with Unity for Humanity, while another met for one weekend in person, hosted at the MIT Media Lab.

    “We are witnessing the birth of an emergent field that fuses climate science, urban planning, real-time 3D engines, nonfiction storytelling, and speculative fiction, and it is all fueled by the urgency of the climate crises,” says Katerina Cizek, lead designer of the WORLDING initiative at the Co-Creation Studio of MIT Open Documentary Lab. “Interdisciplinary teams are forming and blossoming around the planet to collectively imagine and tell stories of healthy, livable worlds in virtual 3D spaces and then finding direct ways to translate that back to earth, literally.”

    At this year’s virtual version of WORLDING, five multidisciplinary teams were selected from an open call. In a week-long series of research and development gatherings, the teams met with MIT scientists, staff, fellows, students, and graduates, as well as other leading figures in the field. Guests included curators from film festivals such as Sundance and Venice, climate policy specialists, award-winning media creators, software engineers, and renowned Earth and atmosphere scientists. The teams heard from MIT scholars in diverse domains, including geomorphology, urban planning as acts of democracy, and climate research at the MIT Media Lab.

    Mapping climate data

    “We are measuring the Earth’s environment in increasingly data-driven ways. Hundreds of terabytes of data are taken every day about our planet in order to study the Earth as a holistic system, so we can address key questions about global climate change,” explains Rachel Connolly, an MIT Media Lab research scientist focused in the “Future Worlds” research theme, in a talk to the group. “Why is this important for your work and storytelling in general? Having the capacity to understand and leverage this data is critical for those who wish to design for and successfully operate in the dynamic Earth environment.”

    Making sense of billions of data points was a key theme during this year’s sessions. In another talk, Taylor Perron, an MIT professor of Earth, atmospheric and planetary sciences, shared how his team uses computational modeling combined with many other scientific processes to better understand how geology, climate, and life intertwine to shape the surfaces of Earth and other planets. His work resonated with one WORLDING team in particular, one aiming to digitally reconstruct the pre-Hispanic Lake Texcoco — where current day Mexico City is now situated — as a way to contrast and examine the region’s current water crisis.

    Democratizing the future

    While WORLDING approaches rely on rigorous science and the interrogation of large datasets, they are also founded on democratizing community-led approaches.

    MIT Department of Urban Studies and Planning graduate Lafayette Cruise MCP ’19 met with the teams to discuss how he moved his own practice as a trained urban planner to include a futurist component involving participatory methods. “I felt we were asking the same limited questions in regards to the future we were wanting to produce. We’re very limited, very constrained, as to whose values and comforts are being centered. There are so many possibilities for how the future could be.”

    Scaling to reach billions

    This work scales from the very local to massive global populations. Climate policymakers are concerned with reaching billions of people in the line of fire. “We have a goal to reach 1 billion people with climate resilience solutions,” says Nidhi Upadhyaya, deputy director at Atlantic Council’s Adrienne Arsht-Rockefeller Foundation Resilience Center. To get that reach, Upadhyaya is turning to games. “There are 3.3 billion-plus people playing video games across the world. Half of these players are women. This industry is worth $300 billion. Africa is currently among the fastest-growing gaming markets in the world, and 55 percent of the global players are in the Asia Pacific region.” She reminded the group that this conversation is about policy and how formats of mass communication can be used for policymaking, bringing about change, changing behavior, and creating empathy within audiences.

    Socially engaged game development is also connected to education at Unity Technologies, a game engine company. “We brought together our education and social impact work because we really see it as a critical flywheel for our business,” said Jessica Lindl, vice president and global head of social impact/education at Unity Technologies, in the opening talk of WORLDING. “We upscale about 900,000 students, in university and high school programs around the world, and about 800,000 adults who are actively learning and reskilling and upskilling in Unity. Ultimately resulting in our mission of the ‘world is a better place with more creators in it,’ millions of creators who reach billions of consumers — telling the world stories, and fostering a more inclusive, sustainable, and equitable world.”

    Access to these technologies is key, especially the hardware. “Accessibility has been missing in XR,” explains Reginé Gilbert, who studies and teaches accessibility and disability in user experience design at New York University. “XR is being used in artificial intelligence, assistive technology, business, retail, communications, education, empathy, entertainment, recreation, events, gaming, health, rehabilitation, meetings, navigation, therapy, training, video programming, virtual assistance, wayfinding, and so many other uses. This is a fun fact for folks: 97.8 percent of the world hasn’t tried VR [virtual reality] yet, actually.”

    Meanwhile, new hardware is on its way. The WORLDING group got early insights into the highly anticipated Apple Vision Pro headset, which promises to integrate many forms of XR and personal computing in one device. “They’re really pushing this kind of pass-through or mixed reality,” said Dan Miller, a Unity engineer on the PolySpatial team collaborating with Apple. He described the experience of the device: “You are viewing the real world. You’re pulling up windows, you’re interacting with content. It’s a kind of spatial computing device where you have multiple apps open, whether it’s your email client next to your messaging client with a 3D game in the middle. You’re interacting with all these things in the same space and at different times.”

    “WORLDING combines our passion for social-impact storytelling and incredible innovative storytelling,” said Paisley Smith of the Unity for Humanity Program at Unity Technologies. She added, “This is an opportunity for creators to incubate their game-changing projects and connect with experts across climate, story, and technology.”

    Meeting at MIT

    In a new in-person iteration of WORLDING this year, organizers collaborated closely with Connolly at the MIT Media Lab to co-design an in-person weekend conference Oct. 25 – Nov. 7 with 45 scholars and professionals who visualize climate data at NASA, the National Oceanic and Atmospheric Administration, planetariums, and museums across the United States.

    A participant said of the event, “An incredible workshop that had a profound effect on my understanding of climate data storytelling and how to combine different components together for a more [holistic] solution.”

    “With this gathering under our new Future Worlds banner,” says Dava Newman, director of the MIT Media Lab and the Apollo Program Professor of Astronautics, “the Media Lab seeks to affect human behavior and help societies everywhere to improve life here on Earth and in worlds beyond, so that all — the sentient, natural, and cosmic — worlds may flourish.”

    “WORLDING’s virtual-only component has been our biggest strength because it has enabled a true, international cohort to gather, build, and create together. But this year, an in-person version showed broader opportunities that spatial interactivity generates — informal Q&As, physical worksheets, and larger-scale ideation, all leading to deeper trust-building,” says WORLDING producer Srushti Kamat SM ’23.

    The future and potential of WORLDING lies in the ongoing dialogue between the virtual and physical, both in the work itself and in the format of the workshops.

  • Smart irrigation technology covers “more crop per drop”

    In agriculture today, robots and drones can monitor fields, temperature and moisture sensors can be automated to meet crop needs, and a host of other systems and devices make farms more efficient, resource-conscious, and profitable. The use of precision agriculture, as these technologies are collectively known, offers significant advantages. However, because the technology can be costly, it remains out of reach for the majority of the world’s farmers.

    “Many of the poor around the world are small, subsistence farmers,” says Susan Amrose, research scientist with the Global Engineering and Research (GEAR) Lab at MIT. “With intensification of food production needs, worsening soil, water scarcity, and smaller plots, these farmers can’t continue with their current practices.”

    By some estimates, the global demand for fresh water will outstrip supply by as much as 40 percent by the end of the decade. Nearly 80 percent of the world’s 570 million farms are classed as smallholder farms, with many located in under-resourced and water-stressed regions. With rapid population growth and climate change driving up demand for food, and with more strain on natural resources, increasing the adoption of sustainable agricultural practices among smallholder farmers is vital. 

    Amrose, who helps lead desalination, drip irrigation, water, and sanitation projects for GEAR Lab, says these small farmers need to move to more mechanized practices. “We’re trying to make it much, much more affordable for farmers to utilize solar-powered irrigation, and to have access to tools that, right now, they’re priced out of,” she says. “More crop per drop, more crop per area, that’s our goal.”

    Video: No Drop to Spare: MIT creates affordable, user-driven smart irrigation technology | MIT Mechanical Engineering

    Drip irrigation systems release water and nutrients in controlled volumes directly to the root zone of the crop through a network of pipes and emitters. These systems can reduce water consumption by 20 to 60 percent when compared to conventional flood irrigation methods.

    “Agriculture uses 70 percent of the fresh water that’s in use across the globe. Large-scale adoption and correct management of drip irrigation could help to reduce consumption of fresh water, which is especially critical for regions experiencing water shortages or groundwater depletion,” says Carolyn Sheline SM ’19, a PhD student and member of the GEAR Lab’s Drip Irrigation team. “A lot of irrigation technology is developed for larger farms that can put more money into it — but inexpensive doesn’t need to mean ‘not technologically advanced.’”

    GEAR Lab has created several drip irrigation technology solutions to date, including a low-pressure drip emitter that has been shown to reduce pumping energy by more than 50 percent when compared to existing emitters; a systems-level optimization model that analyzes factors like local weather conditions and crop layouts, to cut overall system operation costs by up to 30 percent; and a low-cost precision irrigation controller that optimizes system energy and water use, enabling farmers to operate the system on an ideal schedule given their specific resources, needs, and preferences. The controller has recently been shown to reduce water consumption by over 40 percent when compared to traditional practices.

    To build these new, affordable technologies, the team tapped into a critical knowledge source — the farmers themselves.

    “We didn’t just create technology in isolation — we also advanced our understanding of how people would interact with and value this technology, and we did that before the technology had come to fruition,” says Amos Winter SM ’05, PhD ’11, associate professor of mechanical engineering and MIT GEAR Lab principal investigator. “Getting affirmations that farmers would value what the technology would do before we finished it was incredibly important.”

    The team held “Farmer Field Days” and conducted interviews with more than 200 farmers, suppliers, and industry professionals in Kenya, Morocco, and Jordan, the regions selected to host field pilot test sites. These specific sites were selected for a variety of reasons, including solar availability and water scarcity, and because all were great candidate markets for eventual adoption of the technology.

    “People usually understand their own problems really well, and they’re very good at coming up with solutions to them,” says Fiona Grant ’17, SM ’19, also a PhD candidate with the GEAR Lab Drip Irrigation team. “As designers, our role really is to provide a different set of expertise and another avenue for them to get the tools or the resources that they need.”

    The controller, for example, takes in weather information such as relative humidity, temperature, wind speed, and precipitation. Then, using artificial intelligence, it calculates and predicts the area’s solar exposure for the day and the farmer’s exact irrigation needs, and sends that information to the farmer’s smartphone. How much, or how little, automation an individual site uses remains up to the farmer. In its first season of operation at a Moroccan test site, GEAR Lab technology reduced water consumption by 44 percent and energy consumption by 38 percent compared to a neighboring farm using traditional drip irrigation practices.
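    The controller’s actual algorithm isn’t published in this article; the toy sketch below conveys the flavor of such a calculation, estimating daily crop water demand from weather inputs with the textbook Hargreaves equation and subtracting rainfall. All values are illustrative:

    ```python
    # Toy irrigation scheduler: estimate today's crop water demand from
    # weather inputs, then irrigate only the deficit left after rainfall.
    import math

    def hargreaves_et0(t_mean, t_min, t_max, ra_mm_day):
        """Reference evapotranspiration (mm/day); ra_mm_day is extraterrestrial
        radiation expressed as an equivalent depth of water."""
        return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

    def irrigation_need_mm(t_mean, t_min, t_max, ra_mm_day, rain_mm, kc=1.0):
        """Net water to deliver today: crop demand (kc * ET0) minus rain."""
        demand = kc * hargreaves_et0(t_mean, t_min, t_max, ra_mm_day)
        return max(0.0, demand - rain_mm)

    # Example: a hot, dry day in a semi-arid pilot region (illustrative values).
    need = irrigation_need_mm(t_mean=28, t_min=18, t_max=38, ra_mm_day=16,
                              rain_mm=0.0, kc=0.9)
    print(f"Irrigate {need:.1f} mm today")
    ```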

    “The way you’re going to operate a system is going to have a big impact on the way you design it,” says Grant. “We gained a sense of what farmers would be willing to change, or not, regarding interactions with the system. We found that what we might change, and what would be acceptable to change, were not necessarily the same thing.”

    GEAR Lab alumna Georgia Van de Zande ’15, SM ’18, PhD ’23, concurs. “It’s about more than just delivering a lower-cost system, it’s also about creating something they’re going to want to use and want to trust.”

    In Jordan, researchers at a full-scale test farm are operating a solar-powered drip system with a prototype of the controller and are receiving smartphone commands on when to open and close the manual valves. In Morocco, the controller is operating at a research farm with a fully automated hydraulic system; researchers are monitoring the irrigation and conducting additional agronomic tasks. In Kenya, where precision agriculture and smart irrigation haven’t yet seen very much adoption, a simpler version of the controller serves to provide educational and training information in addition to offering scheduling and control capabilities.

    Knowledge is power for the farmers, and for designers and engineers, too. If an engineer can know a user’s requirements, Winter says, they’re much more likely to create a successful solution.

    “The most powerful tool a designer can have is perspective. I have one perspective — the math and science and tech innovation side — but I don’t know a thing about what it’s like to live every day as a farmer in Jordan or Morocco,” says Winter. “I don’t know what clogs the filters, or who shuts off the water. If you can see the world through the eyes of stakeholders, you’re going to spot requirements and constraints that you wouldn’t have picked up on otherwise.”

    Winter says the technology his team is building is exciting for a lot of reasons.

    “To be in a situation where the world is saying, ‘we need to deal with water stress, we need to deal with climate adaptation, and we need to particularly do this in resource-constrained countries,’ and to be in a position where we can do something about it and produce something of tremendous value and efficacy is incredible,” says Winter. “Solving the right problem at the right time, on a massive scale, is thrilling.”

  • Cutting urban carbon emissions by retrofitting buildings

    To support the worldwide struggle to reduce carbon emissions, many cities have made public pledges to cut their carbon emissions in half by 2030, and some have promised to be carbon neutral by 2050. Buildings can be responsible for more than half of a municipality’s carbon emissions. Today, new buildings are typically designed in ways that minimize energy use and carbon emissions, so attention is turning to cleaning up existing buildings.

    A decade ago, leaders in some cities took the first step in that process: They quantified their problem. Based on data from their utilities on natural gas and electricity consumption and standard pollutant-emission rates, they calculated how much carbon came from their buildings. They then adopted policies to encourage retrofits, such as adding insulation, switching to double-glazed windows, or installing rooftop solar panels. But will those steps be enough to meet their pledges?

    “In nearly all cases, cities have no clear plan for how they’re going to reach their goal,” says Christoph Reinhart, a professor in the Department of Architecture and director of the Building Technology Program. “That’s where our work comes in. We aim to help them perform analyses so they can say, ‘If we, as a community, do A, B, and C to buildings of a certain type within our jurisdiction, then we are going to get there.’”

    To support those analyses, Reinhart and a team in the MIT Sustainable Design Lab (SDL) — PhD candidate Zachary M. Berzolla SM ’21; former doctoral student Yu Qian Ang PhD ’22, now a research collaborator at the SDL; and former postdoc Samuel Letellier-Duchesne, now a senior building performance analyst at the international building engineering and consulting firm Introba — launched a publicly accessible website providing a series of simulation tools and a process for using them to determine the impacts of planned steps on a specific building stock. Says Reinhart: “The takeaway can be a clear technology pathway — a combination of building upgrades, renewable energy deployments, and other measures that will enable a community to reach its carbon-reduction goals for their built environment.”

    Analyses performed in collaboration with policymakers from selected cities around the world yielded insights demonstrating that reaching current goals will require more effort than city representatives and — in a few cases — even the research team had anticipated.

    Exploring carbon-reduction pathways

    The researchers’ approach builds on a physics-based “building energy model,” or BEM, akin to those that architects use to design high-performance green buildings. In 2013, Reinhart and his team developed a method of extending that concept to analyze a cluster of buildings. Based on publicly available geographic information system (GIS) data, including each building’s type, footprint, and year of construction, the method defines the neighborhood — including trees, parks, and so on — and then, using meteorological data, models how the buildings interact, the airflows among them, and their energy use. The result is an “urban building energy model,” or UBEM, for a neighborhood or a whole city.

    The website developed by the MIT team enables neighborhoods and cities to develop their own UBEM and to use it to calculate their current building energy use and resulting carbon emissions, and then how those outcomes would change assuming different retrofit programs or other measures being implemented or considered. “The website — UBEM.io — provides step-by-step instructions and all the simulation tools that a team will need to perform an analysis,” says Reinhart.

    The website starts by describing three roles required to perform an analysis: a local sustainability champion who is familiar with the municipality’s carbon-reduction efforts; a GIS manager who has access to the municipality’s urban datasets and maintains a digital model of the built environment; and an energy modeler — typically a hired consultant — who has a background in green building consulting and individual building energy modeling.

    The team begins by defining “shallow” and “deep” building retrofit scenarios. To explain, Reinhart offers some examples: “‘Shallow’ refers to things that just happen, like when you replace your old, failing appliances with new, energy-efficient ones, or you install LED light bulbs and weatherstripping everywhere,” he says. “‘Deep’ adds to that list things you might do only every 20 years, such as ripping out walls and putting in insulation or replacing your gas furnace with an electric heat pump.”

    Once those scenarios are defined, the GIS manager uploads to UBEM.io a dataset of information about the city’s buildings, including their locations and attributes such as geometry, height, age, and use (e.g., commercial, retail, residential). The energy modeler then builds a UBEM to calculate the energy use and carbon emissions of the existing building stock. Once that baseline is established, the energy modeler can calculate how specific retrofit measures will change the outcomes.
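    UBEM.io’s internals are physics-based simulations rather than simple arithmetic, but the bookkeeping of a baseline-versus-scenario comparison can be sketched schematically. Every intensity and emission factor below is a placeholder:

    ```python
    # Schematic baseline-vs-retrofit bookkeeping for a tiny building stock.
    buildings = [
        # (floor_area_m2, gas_kWh_per_m2, elec_kWh_per_m2) -- assumed values
        (1200.0, 140.0, 45.0),   # residential
        (5400.0, 90.0, 120.0),   # commercial
    ]
    GAS_KG_CO2_PER_KWH = 0.18    # typical natural-gas emission factor
    GRID_KG_CO2_PER_KWH = 0.40   # depends heavily on the local grid

    def stock_emissions(stock, gas_cut=0.0, elec_cut=0.0,
                        grid_factor=GRID_KG_CO2_PER_KWH):
        """Total kg CO2 for a scenario that trims gas and electricity use
        by the given fractions, under a given grid emission factor."""
        return sum(area * (gas * (1 - gas_cut) * GAS_KG_CO2_PER_KWH
                           + elec * (1 - elec_cut) * grid_factor)
                   for area, gas, elec in stock)

    baseline = stock_emissions(buildings)
    deep = stock_emissions(buildings, gas_cut=0.6, elec_cut=0.3)
    deep_clean_grid = stock_emissions(buildings, gas_cut=0.6, elec_cut=0.3,
                                      grid_factor=0.0)
    print(baseline, deep, deep_clean_grid)
    ```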

    Workshop to test-drive the method

    Two years ago, the MIT team set up a three-day workshop to test the website with sample users. Participants included policymakers from eight cities and municipalities around the world: namely, Braga (Portugal), Cairo (Egypt), Dublin (Ireland), Florianopolis (Brazil), Kiel (Germany), Middlebury (Vermont, United States), Montreal (Canada), and Singapore. Taken together, the cities represent a wide range of climates, socioeconomic demographics, cultures, governing structures, and sizes.

    Working with the MIT team, the participants presented their goals, defined shallow- and deep-retrofit scenarios for their city, and selected a limited but representative area for analysis — an approach that would speed up analyses of different options while also generating results valid for the city as a whole.

    They then performed analyses to quantify the impacts of their retrofit scenarios. Finally, they learned how best to present their findings — a critical part of the exercise. “When you do this analysis and bring it back to the people, you can say, ‘This is our homework over the next 30 years. If we do this, we’re going to get there,’” says Reinhart. “That makes you part of the community, so it’s a joint goal.”

    Sample results

    After the close of the workshop, Reinhart and his team confirmed their findings for each city and then added one more factor to the analyses: the state of the city’s electric grid. Several cities in the study had pledged to make their grid carbon-neutral by 2050. Including the grid in the analysis was therefore critical: If a building becomes all-electric and purchases its electricity from a carbon-free grid, then that building will be carbon neutral — even with no on-site energy-saving retrofits.

    The final analysis for each city therefore calculated the total kilograms of carbon dioxide equivalent emitted per square meter of floor space assuming the following scenarios: the baseline; shallow retrofit only; shallow retrofit plus a clean electricity grid; deep retrofit only; deep retrofit plus rooftop photovoltaic solar panels; and deep retrofit plus a clean electricity grid. (Note that “clean electricity grid” is based on the area’s most ambitious decarbonization target for their power grid.)

    The following paragraphs provide highlights of the analyses for three of the eight cities. Included are the city’s setting, emission-reduction goals, current and proposed measures, and calculations of how implementation of those measures would affect their energy use and carbon emissions.

    Singapore

    Singapore is generally hot and humid, and its building energy use is largely in the form of electricity for cooling. The city is dominated by high-rise buildings, so there’s not much space for rooftop solar installations to generate the needed electricity. Therefore, plans for decarbonizing the current building stock must involve retrofits. The shallow-retrofit scenario focuses on installing energy-efficient lighting and appliances. To those steps, the deep-retrofit scenario adds adopting a district cooling system. Singapore’s stated goals are to cut baseline carbon emissions by about a third by 2030 and to cut them in half by 2050.

    The analysis shows that, with just the shallow retrofits, Singapore won’t achieve its 2030 goal. But with the deep retrofits, it should come close. Notably, decarbonizing the electric grid would enable Singapore to meet and substantially exceed its 2050 target assuming either retrofit scenario.

    Dublin

    Dublin has a mild climate with relatively comfortable summers but cold, humid winters. As a result, the city’s energy use is dominated by fossil fuels, in particular, natural gas for space heating and domestic hot water. The city presented just one target — a 40 percent reduction by 2030.

    Dublin has many neighborhoods made up of Georgian row houses, and, at the time of the workshop, the city already had a program in place encouraging groups of owners to insulate their walls. The shallow-retrofit scenario therefore focuses on weatherization upgrades (adding weatherstripping to windows and doors, insulating crawlspaces, and so on). To that list, the deep-retrofit scenario adds insulating walls and installing upgraded windows. The participants didn’t include electric heat pumps, as the city was then assessing the feasibility of expanding the existing district heating system.

    Results of the analyses show that implementing the shallow-retrofit scenario won’t enable Dublin to meet its 2030 target. But the deep-retrofit scenario will. However, like Singapore, Dublin could make major gains by decarbonizing its electric grid. The analysis shows that a decarbonized grid — with or without the addition of rooftop solar panels where possible — could more than halve the carbon emissions that remain in the deep-retrofit scenario. Indeed, a decarbonized grid plus electrification of the heating system by incorporating heat pumps could enable Dublin to meet a future net-zero target.

    Middlebury

    Middlebury, Vermont, has warm, wet summers and frigid winters. Like Dublin, its energy demand is dominated by natural gas for heating. But unlike Dublin, it already has a largely decarbonized electric grid with a high penetration of renewables.

    For the analysis, the Middlebury team chose to focus on an aging residential neighborhood similar to many that surround the city core. The shallow-retrofit scenario calls for installing heat pumps for space heating, and the deep-retrofit scenario adds improvements in building envelopes (the façade, roof, and windows). The town’s targets are a 40 percent reduction from the baseline by 2030 and net-zero carbon by 2050.

    Results of the analyses showed that implementing the shallow-retrofit scenario won’t achieve the 2030 target. The deep-retrofit scenario would get the city to the 2030 target but not to the 2050 target. Indeed, even with the deep retrofits, fossil fuel use remains high. The explanation? While both retrofit scenarios call for installing heat pumps for space heating, the city would continue to use natural gas to heat its hot water.

    Lessons learned

    For several policymakers, seeing the results of their analyses was a wake-up call. They learned that the strategies they had planned might not be sufficient to meet their stated goals — an outcome that could prove publicly embarrassing for them in the future.

    Like the policymakers, the researchers learned from the experience. Reinhart notes three main takeaways.

    First, he and his team were surprised to find how much of a building’s energy use and carbon emissions can be traced to domestic hot water. With Middlebury, for example, even switching from natural gas to heat pumps for space heating didn’t yield the expected effect: On the bar graphs generated by their analyses, the gray bars indicating carbon from fossil fuel use remained. As Reinhart recalls, “I kept saying, ‘What’s all this gray?’” While the policymakers talked about using heat pumps, they were still going to use natural gas to heat their hot water. “It’s just stunning that hot water is such a big-ticket item. It’s huge,” says Reinhart.

    Second, the results demonstrate the importance of including the state of the local electric grid in this type of analysis. “Looking at the results, it’s clear that if we want to have a successful energy transition, the building sector and the electric grid sector both have to do their homework,” notes Reinhart. Moreover, in many cases, reaching carbon neutrality by 2050 would require not only a carbon-free grid but also all-electric buildings.

    Third, Reinhart was struck by how different the bar graphs presenting results for the eight cities look. “This really celebrates the uniqueness of different parts of the world,” he says. “The physics used in the analysis is the same everywhere, but differences in the climate, the building stock, construction practices, electric grids, and other factors make the consequences of making the same change vary widely.”

    In addition, says Reinhart, “there are sometimes deeply ingrained conflicts of interest and cultural norms, which is why you cannot just say everybody should do this and do this.” For instance, in one case, the city owned both the utility and the natural gas it burned. As a result, the policymakers didn’t consider putting in heat pumps because “the natural gas was a significant source of municipal income, and they didn’t want to give that up,” explains Reinhart.

    Finally, the analyses quantified two other important measures: energy use and “peak load,” which is the maximum electricity demanded from the grid over a specific time period. Reinhart says that energy use “is probably mostly a plausibility check. Does this make sense?” And peak load is important because the utilities need to keep a stable grid.

    Middlebury’s analysis provides an interesting look at how certain measures could influence peak electricity demand. There, the introduction of electric heat pumps for space heating more than doubles the peak demand from buildings, suggesting that substantial additional capacity would have to be added to the grid in that region. But when heat pumps are combined with other retrofitting measures, the peak demand drops to levels lower than the starting baseline.

    The aftermath: An update

    Reinhart stresses that the specific results from the workshop provide just a snapshot in time; that is, where the cities were at the time of the workshop. “This is not the fate of the city,” he says. “If we were to do the same exercise today, we’d no doubt see a change in thinking, and the outcomes would be different.”

    For example, heat pumps are now familiar technology and have demonstrated their ability to handle even bitterly cold climates. And in some regions, they’ve become economically attractive, as the war in Ukraine has made natural gas both scarce and expensive. Also, there’s now awareness of the need to deal with hot water production.

    Reinhart notes that performing the analyses at the workshop did have the intended impact: It brought about change. Two years after the project had ended, most of the cities reported that they had implemented new policy measures or had expanded their analysis across their entire building stock. “That’s exactly what we want,” comments Reinhart. “This is not an academic exercise. It’s meant to change what people focus on and what they do.”

    Designing policies with socioeconomics in mind

    Reinhart notes a key limitation of the UBEM.io approach: It looks only at technical feasibility. But will the building owners be willing and able to make the energy-saving retrofits? Data show that — even with today’s incentive programs and subsidies — current adoption rates are only about 1 percent. “That’s way too low to enable a city to achieve its emission-reduction goals in 30 years,” says Reinhart. “We need to take into account the socioeconomic realities of the residents to design policies that are both effective and equitable.”

    To that end, the MIT team extended their UBEM.io approach to create a socio-techno-economic analysis framework that can predict the rate of retrofit adoption throughout a city. Based on census data, the framework creates a UBEM that includes demographics for the specific types of buildings in a city. Accounting for the cost of making a specific retrofit plus financial benefits from policy incentives and future energy savings, the model determines the economic viability of the retrofit package for representative households.
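    As a minimal sketch of the viability test described above (the framework’s actual structure and parameters aren’t given here), a household might be modeled as retrofitting when incentives plus discounted future energy savings exceed the upfront cost:

    ```python
    # Toy retrofit-viability check; all inputs are placeholders.
    def retrofit_is_viable(cost, incentive, annual_savings,
                           years=20, discount_rate=0.05):
        """True if incentives plus the net present value of energy savings
        cover the upfront retrofit cost."""
        npv_savings = sum(annual_savings / (1 + discount_rate) ** t
                          for t in range(1, years + 1))
        return incentive + npv_savings >= cost

    print(retrofit_is_viable(cost=25_000, incentive=8_000, annual_savings=1_200))
    ```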

    Sample analyses for two Boston neighborhoods suggest that high-income households are largely ineligible for need-based incentives or the incentives are insufficient to prompt action. Lower-income households are eligible and could benefit financially over time, but they don’t act, perhaps due to limited access to information, a lack of time or capital, or a variety of other reasons.

    Reinhart notes that their work thus far “is mainly looking at technical feasibility. Next steps are to better understand occupants’ willingness to pay, and then to determine what set of federal and local incentive programs will trigger households across the demographic spectrum to retrofit their apartments and houses, helping the worldwide effort to reduce carbon emissions.”

    This work was supported by Shell through the MIT Energy Initiative. Zachary Berzolla was supported by the U.S. National Science Foundation Graduate Research Fellowship. Samuel Letellier-Duchesne was supported by the postdoctoral fellowship of the Natural Sciences and Engineering Research Council of Canada.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • A new mathematical “blueprint” is accelerating fusion device development

    Developing commercial fusion energy requires scientists to understand sustained processes that have never before existed on Earth. But with so many unknowns, how do we make sure we’re designing a device that can successfully harness fusion power?

    We can fill gaps in our understanding using computational tools like algorithms and data simulations to knit together experimental data and theory, which allows us to optimize fusion device designs before they’re built, saving much time and resources.

    Currently, classical supercomputers are used to run simulations of plasma physics and fusion energy scenarios, but to address the many design and operating challenges that still remain, more powerful computers are a necessity, and of great interest to plasma researchers and physicists.

    The exponential speedups promised by quantum computers have offered plasma and fusion scientists the tantalizing possibility of vastly accelerated fusion device development. Quantum computers could reconcile a fusion device’s many design parameters — for example, vessel shape, magnet spacing, and component placement — at a greater level of detail, while also completing the tasks faster. However, upgrading to a quantum computer is no simple task.

    In a paper, “Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media,” recently published in Physical Review A, Abhay K. Ram, a research scientist at the MIT Plasma Science and Fusion Center (PSFC), and his co-authors Efstratios Koukoutsis, Kyriakos Hizanidis, and George Vahala present a framework that would facilitate the use of quantum computers to study electromagnetic waves in plasma and its manipulation in magnetic confinement fusion devices.

    Quantum computers excel at simulating quantum physics phenomena, but many topics in plasma physics are predicated on the classical physics model. A plasma (which is the “dielectric media” referenced in the paper’s title) consists of many particles — electrons and ions — whose collective behaviors are effectively described using classical statistical physics. In contrast, quantum effects that influence atomic and subatomic scales are averaged out in classical plasma physics.

    Furthermore, the descriptive framework of quantum mechanics isn’t naturally suited to plasma. In a fusion device, plasmas are heated and manipulated using electromagnetic waves, which are among the most important and ubiquitous occurrences in the universe. The behaviors of electromagnetic waves, including how waves are formed and interact with their surroundings, are described by Maxwell’s equations — a foundational component of classical plasma physics, and of general physics as well. The standard form of Maxwell’s equations is not expressed in “quantum terms,” however, so implementing the equations on a quantum computer is like fitting a square peg in a round hole: it doesn’t work.

    Consequently, for plasma physicists to take advantage of quantum computing’s power for solving problems, classical physics must be translated into the language of quantum mechanics. The researchers tackled this translational challenge, and in their paper, they reveal that a Dyson map can bridge the translational divide between classical physics and quantum mechanics. Maps are mathematical functions that demonstrate how to take an input from one kind of space and transform it to an output that is meaningful in a different kind of space. In the case of Maxwell’s equations, a Dyson map allows classical electromagnetic waves to be studied in the space utilized by quantum computers. In essence, it reconfigures the square peg so it will fit into the round hole without compromising any physics.
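    In the simplest setting — a lossless, non-dispersive scalar dielectric, rather than the general tensor media the paper treats — the idea can be written schematically: weighting the fields turns Maxwell’s curl equations into a Schrödinger-like equation with a Hermitian generator, so the time evolution is unitary, exactly the form a quantum circuit can implement.

    ```latex
    % Schematic Dyson-map construction for a lossless scalar dielectric
    % (illustrative; the paper treats general tensor media):
    \[
      \psi =
      \begin{pmatrix} \sqrt{\varepsilon}\,\mathbf{E} \\ \sqrt{\mu}\,\mathbf{H} \end{pmatrix},
      \qquad
      i\,\partial_t \psi = \hat{M}\psi,
      \qquad
      \hat{M} =
      \begin{pmatrix}
        0 & \tfrac{i}{\sqrt{\varepsilon}}\,\nabla\times\tfrac{1}{\sqrt{\mu}} \\[4pt]
        -\tfrac{i}{\sqrt{\mu}}\,\nabla\times\tfrac{1}{\sqrt{\varepsilon}} & 0
      \end{pmatrix}
    \]
    % For lossless media \hat{M} is Hermitian, so \psi(t) = e^{-i\hat{M}t}\psi(0)
    % is a unitary evolution -- the kind of operator a quantum circuit implements.
    ```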

    The work also gives a blueprint of a quantum circuit encoded with equations expressed in quantum bits (“qubits”) rather than classical bits so the equations may be used on quantum computers. Most importantly, these blueprints can be coded and tested on classical computers.

    “For years we have been studying wave phenomena in plasma physics and fusion energy science using classical techniques. Quantum computing and quantum information science is challenging us to step out of our comfort zone, thereby ensuring that I have not ‘become comfortably numb,’” says Ram, quoting a Pink Floyd song.

    The paper’s Dyson map and circuits have put quantum computing power within reach, fast-tracking an improved understanding of plasmas and electromagnetic waves, and putting us that much closer to the ideal fusion device design.