More stories

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to respond appropriately. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city. Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them. “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area) and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, and it is expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into the photo, while the other judges the sample by comparing it to actual data. If it thinks the image is fake, the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in, and supplementing it with statistics from historical data, was enough to generate the results they needed. “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and it can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible. “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela hopes to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project. “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
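
    To make the adversarial setup described above concrete, here is a minimal sketch of one generator/discriminator training step in PyTorch. It is an illustration with assumed shapes and optimizers, not the authors’ actual model, and it omits the simplified physics and statistical corrections the paper adds:

        # Minimal adversarial super-resolution sketch (illustrative only).
        # Assumes 16x16 coarse and 64x64 fine rainfall fields; all settings are hypothetical.
        import torch
        import torch.nn as nn

        generator = nn.Sequential(          # coarse field -> candidate fine field
            nn.Upsample(scale_factor=4, mode="bilinear"),
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        discriminator = nn.Sequential(      # fine field -> real/fake logit
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 32 * 32, 1),
        )

        opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
        opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
        bce = nn.BCEWithLogitsLoss()

        def train_step(coarse, fine):
            ones = torch.ones(fine.size(0), 1)
            zeros = torch.zeros(fine.size(0), 1)
            # 1) Teach the discriminator to separate real fine-scale fields from generated ones.
            fake = generator(coarse).detach()
            loss_d = bce(discriminator(fine), ones) + bce(discriminator(fake), zeros)
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # 2) Teach the generator to fool the discriminator.
            loss_g = bce(discriminator(generator(coarse)), ones)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        train_step(torch.randn(8, 1, 16, 16), torch.randn(8, 1, 64, 64))

    In the paper’s approach, the generator is additionally informed by simplified physics (water vapor and topography) and corrected with historical statistics; this sketch shows only the adversarial core.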

  • School of Engineering welcomes new faculty

    The School of Engineering welcomes 15 new faculty members across six of its academic departments. The members of this new cohort, who have either recently started their roles at MIT or will start within the next year, conduct research across a diverse range of disciplines.

    Many of these new faculty specialize in research that intersects with multiple fields. In addition to positions in the School of Engineering, a number of them hold positions at other units across MIT. Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) report to both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing. This year, new faculty also have joint appointments between the School of Engineering and the School of Humanities, Arts, and Social Sciences and the School of Science.

    “I am delighted to welcome this cohort of talented new faculty to the School of Engineering,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am particularly struck by the interdisciplinary approach many of these new faculty take in their research. They are working in areas that are poised to have tremendous impact. I look forward to seeing them grow as researchers and educators.”

    The new engineering faculty include:

    Stephen Bates joined the Department of Electrical Engineering and Computer Science as an assistant professor in September 2023. He is also a member of the Laboratory for Information and Decision Systems (LIDS). Bates uses data and AI for reliable decision-making in the presence of uncertainty. In particular, he develops tools for statistical inference with AI models, data impacted by strategic behavior, and settings with distribution shift. Bates also works on applications in life sciences and sustainability. He previously worked as a postdoc in the Statistics and EECS departments at the University of California at Berkeley (UC Berkeley). Bates received a BS in statistics and mathematics from Harvard University and a PhD from Stanford University.

    Abigail Bodner joined the Department of EECS and Department of Earth, Atmospheric and Planetary Sciences as an assistant professor in January. She is also a member of LIDS. Bodner’s research interests span climate, physical oceanography, geophysical fluid dynamics, and turbulence. Previously, she worked as a Simons Junior Fellow at the Courant Institute of Mathematical Sciences at New York University. Bodner received her BS in geophysics and mathematics and MS in geophysics from Tel Aviv University, and her SM in applied mathematics and PhD from Brown University.

    Andreea Bobu ’17 will join the Department of Aeronautics and Astronautics as an assistant professor in July. Her research sits at the intersection of robotics, mathematical human modeling, and deep learning. Previously, she was a research scientist at the Boston Dynamics AI Institute, focusing on how robots and humans can efficiently arrive at shared representations of their tasks for more seamless and reliable interactions. Bobu earned a BS in computer science and engineering from MIT and a PhD in electrical engineering and computer science from UC Berkeley.

    Suraj Cheema will join the Department of Materials Science and Engineering, with a joint appointment in the Department of EECS, as an assistant professor in July. His research explores atomic-scale engineering of electronic materials to tackle challenges related to energy consumption, storage, and generation, aiming for more sustainable microelectronics. This spans computing and energy technologies via integrated ferroelectric devices. He previously worked as a postdoc at UC Berkeley. Cheema earned a BS in applied physics and applied mathematics from Columbia University and a PhD in materials science and engineering from UC Berkeley.

    Samantha Coday joins the Department of EECS as an assistant professor in July. She will also be a member of the MIT Research Laboratory of Electronics. Her research interests include ultra-dense power converters enabling renewable energy integration, hybrid electric aircraft, and future space exploration. To enable high-performance converters for these critical applications, her research focuses on the optimization, design, and control of hybrid switched-capacitor converters. Coday earned a BS in electrical engineering and mathematics from Southern Methodist University and an MS and a PhD in electrical engineering and computer science from UC Berkeley.

    Mitchell Gordon will join the Department of EECS as an assistant professor in July. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. He currently works as a postdoc at the University of Washington. Gordon received a BS from the University of Rochester, and an MS and a PhD from Stanford University, all in computer science.

    Kaiming He joined the Department of EECS as an associate professor in February. He will also be a member of CSAIL. His research interests cover a wide range of topics in computer vision and deep learning. He is currently focused on building computer models that can learn representations and develop intelligence from and for the complex world. Long term, he hopes to augment human intelligence with improved artificial intelligence. Before joining MIT, He was a research scientist at Facebook AI. He earned a BS from Tsinghua University and a PhD from the Chinese University of Hong Kong.

    Anna Huang SM ’08 will join the departments of EECS and Music and Theater Arts as an assistant professor in September. She will help develop graduate programming focused on music technology. Previously, she spent eight years with Magenta at Google Brain and DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction to support human-AI partnerships in music-making. She is the creator of Music Transformer and Coconet (which powered the Bach Google Doodle), and was a judge and organizer for the AI Song Contest. Huang holds a Canada CIFAR AI Chair at Mila, a BM in music composition and a BS in computer science from the University of Southern California, an MS from the MIT Media Lab, and a PhD from Harvard University.

    Yael Kalai PhD ’06 will join the Department of EECS as a professor in September. She is also a member of CSAIL. Her research interests include cryptography, the theory of computation, and security and privacy. Kalai currently focuses on both the theoretical and real-world applications of cryptography, including work on succinct and easily verifiable non-interactive proofs. She received her bachelor’s degree from the Hebrew University of Jerusalem, a master’s degree from the Weizmann Institute of Science, and a PhD from MIT.

    Sendhil Mullainathan will join the departments of EECS and Economics as a professor in July. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Previously, Mullainathan spent five years at MIT before joining the faculty at Harvard in 2004, and then the University of Chicago in 2018. He received his BA in computer science, mathematics, and economics from Cornell University and his PhD from Harvard University.

    Alex Rives will join the Department of EECS as an assistant professor in September, with a core membership in the Broad Institute of MIT and Harvard. In his research, Rives is focused on AI for scientific understanding, discovery, and design for biology. Rives worked with Meta as a New York University graduate student, where he founded and led the Evolutionary Scale Modeling team that developed large language models for proteins. Rives received his BS in philosophy and biology from Yale University and is completing his PhD in computer science at NYU.

    Sungho Shin will join the Department of Chemical Engineering as an assistant professor in July. His research interests include control theory, optimization algorithms, high-performance computing, and their applications to decision-making in complex systems, such as energy infrastructures. Shin is a postdoc at the Mathematics and Computer Science Division at Argonne National Laboratory. He received a BS in mathematics and chemical engineering from Seoul National University and a PhD in chemical engineering from the University of Wisconsin-Madison.

    Jessica Stark joined the Department of Biological Engineering as an assistant professor in January. In her research, Stark is developing technologies to realize the largely untapped potential of cell-surface sugars, called glycans, for immunological discovery and immunotherapy. Previously, Stark was an American Cancer Society postdoc at Stanford University. She earned a BS in chemical and biomolecular engineering from Cornell University and a PhD in chemical and biological engineering from Northwestern University.

    Thomas John “T.J.” Wallin joined the Department of Materials Science and Engineering as an assistant professor in January. As a researcher, Wallin’s interests lie in advanced manufacturing of functional soft matter, with an emphasis on soft wearable technologies and their applications in human-computer interfaces. Previously, he was a research scientist at Meta’s Reality Labs Research, working on its haptic interaction team. Wallin earned a BS in physics and chemistry from the College of William and Mary, and an MS and PhD in materials science and engineering from Cornell University.

    Gioele Zardini joined the Department of Civil and Environmental Engineering as an assistant professor in September. He will also join LIDS and the Institute for Data, Systems, and Society. Driven by societal challenges, Zardini’s research interests include the co-design of sociotechnical systems, compositionality in engineering, applied category theory, decision and control, optimization, and game theory, with society-critical applications to intelligent transportation systems, autonomy, and complex networks and infrastructures. He received his BS, MS, and PhD in mechanical engineering, with a focus on robotics, systems, and control, from ETH Zurich, and spent time at MIT, Stanford University, and Motional.

  • HPI-MIT design research collaboration creates powerful teams

    The recent ransomware attack on ChangeHealthcare, which severed the network connecting health care providers, pharmacies, and hospitals with health insurance companies, demonstrates just how disruptive supply chain attacks can be. In this case, it hindered the ability of those providing medical services to submit insurance claims and receive payments.

    This sort of attack and other forms of data theft are becoming increasingly common and often target large, multinational corporations through the small and mid-sized vendors in their corporate supply chains, enabling breaks in these enormous systems of interwoven companies.

    Cybersecurity researchers at MIT and the Hasso Plattner Institute (HPI) in Potsdam, Germany, are focused on the different organizational security cultures that exist within large corporations and their vendors, because it’s that difference that creates vulnerabilities, often due to the lack of emphasis on cybersecurity by the senior leadership in these small to medium-sized enterprises (SMEs).

    Keri Pearlson, executive director of Cybersecurity at MIT Sloan (CAMS); Jillian Kwong, a research scientist at CAMS; and Christian Doerr, a professor of cybersecurity and enterprise security at HPI, are co-principal investigators (PIs) on the research project, “Culture and the Supply Chain: Transmitting Shared Values, Attitudes and Beliefs across Cybersecurity Supply Chains.”

    Their project was selected in the 2023 inaugural round of grants from the HPI-MIT Designing for Sustainability program, a multiyear partnership funded by HPI and administered by the MIT Morningside Academy for Design (MAD). The program awards about 10 grants annually of up to $200,000 each to multidisciplinary teams with divergent backgrounds in computer science, artificial intelligence, machine learning, engineering, design, architecture, the natural sciences, humanities, and business and management. The 2024 Call for Applications is open through June 3.

    Designing for Sustainability grants support scientific research that promotes the United Nations’ Sustainable Development Goals (SDGs) on topics involving sustainable design, innovation, and digital technologies, with teams made up of PIs from both institutions. The PIs on these projects, who have common interests but different strengths, create more powerful teams by working together.

    Transmitting shared values, attitudes, and beliefs to improve cybersecurity across supply chains

    The MIT and HPI cybersecurity researchers say that most ransomware attacks aren’t reported. Smaller companies hit with ransomware attacks just shut down, because they can’t afford the payment to retrieve their data. This makes it difficult to know just how many attacks and data breaches occur. “As more data and processes move online and into the cloud, it becomes even more important to focus on securing supply chains,” Kwong says. “Investing in cybersecurity allows information to be exchanged freely while keeping data safe. Without it, any progress towards sustainability is stalled.”

    One of the first large data breaches in the United States to be widely publicized provides a clear example of how an SME’s cybersecurity can leave a multinational corporation vulnerable to attack. In 2013, hackers entered the Target Corporation’s own network by obtaining the credentials of a small vendor in its supply chain: a Pennsylvania HVAC company. Through that breach, thieves were able to install malware that stole the financial and personal information of 110 million Target customers, which they sold to card shops on the black market.

    To prevent such attacks, SME vendors in a large corporation’s supply chain are required to agree to follow certain security measures, but the SMEs usually don’t have the expertise or training to make good on these cybersecurity promises, leaving their own systems, and therefore any connected to them, vulnerable to attack.

    “Right now, organizations are connected economically, but not aligned in terms of organizational culture, values, beliefs, and practices around cybersecurity,” explains Kwong. “Basically, the big companies are realizing the smaller ones are not able to implement all the cybersecurity requirements. We have seen some larger companies address this by reducing requirements or making the process shorter. However, this doesn’t mean companies are more secure; it just lowers the bar for the smaller suppliers to clear.”

    Pearlson emphasizes the importance of board members and senior management taking responsibility for cybersecurity in order to change the culture at SMEs, rather than pushing that down to a single department, IT office, or in some cases, one IT employee.

    The research team is using case studies based on interviews, field studies, focus groups, and direct observation of people in their natural work environments to learn how companies engage with vendors, and the specific ways cybersecurity is implemented, or not, in everyday operations. The goal is to create a shared culture around cybersecurity that can be adopted correctly by all vendors in a supply chain.

    This approach is in line with the goals of the Charter of Trust Initiative, a partnership of large, multinational corporations formed to establish a better means of implementing cybersecurity in the supply chain network. The HPI-MIT team worked with companies from the Charter of Trust and others last year to understand the impacts of cybersecurity regulation on SME participation in supply chains, and to develop a conceptual framework to implement changes for stabilizing supply chains.

    Cybersecurity is a prerequisite for achieving any of the United Nations’ SDGs, explains Kwong. Without secure supply chains, access to key resources and institutions can be abruptly cut off. This could include food, clean water and sanitation, renewable energy, financial systems, health care, education, and resilient infrastructure. Securing supply chains helps enable progress on all SDGs, and the HPI-MIT project specifically supports SMEs, which are a pillar of the U.S. and European economies.

    Personalizing product designs while minimizing material waste

    In a vastly different Designing for Sustainability joint research project that combines AI with engineering, “Personalizing Product Designs While Minimizing Material Waste” will use AI design software to lay out multiple parts of a pattern on a sheet of plywood, acrylic, or other material, so that they can be laser cut to create new products in real time without wasting material.

    Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, and Patrick Baudisch, a professor of computer science and chair of the Human Computer Interaction Lab at HPI, are co-PIs on the project. The two have worked together for years; Baudisch was Mueller’s PhD research advisor at HPI.

    Baudisch’s lab developed an online design teaching system called Kyub that lets students design 3D objects in pieces that are laser cut from sheets of wood and assembled to become chairs, speaker boxes, radio-controlled aircraft, or even functional musical instruments. For instance, each leg of a chair would consist of four identical vertical pieces attached at the edges to create a hollow-centered column, four of which provide stability to the chair, even though the material is very lightweight.

    “By designing and constructing such furniture, students learn not only design, but also structural engineering,” Baudisch says. “Similarly, by designing and constructing musical instruments, they learn about structural engineering, as well as resonance, types of musical tuning, etc.”

    Mueller was at HPI when Baudisch developed the Kyub software, allowing her to observe “how they were developing and making all the design decisions,” she says. “They built a really neat piece for people to quickly design these types of 3D objects.” However, using Kyub for material-efficient design is not fast; in order to fabricate a model, the software has to break the 3D models down into 2D parts and lay these out on sheets of material. This takes time, and makes it difficult to see the impact of design decisions on material use in real time.

    Mueller’s lab at MIT developed software based on a layout algorithm that uses AI to lay out pieces on sheets of material in real time. This allows the AI to explore multiple potential layouts while the user is still editing, and thus provide ongoing feedback. “As the user develops their design, Fabricaide decides good placements of parts onto the user’s available materials, provides warnings if the user does not have enough material for a design, and makes suggestions for how the user can resolve insufficient material cases,” according to the project website.

    The joint MIT-HPI project integrates Mueller’s AI software with Baudisch’s Kyub software and adds machine learning to train the AI to offer better design suggestions that save material while adhering to the user’s design intent.

    “The project is all about minimizing the waste on these material sheets,” Mueller says. She already envisions the next step in this AI design process: determining how to integrate the laws of physics into the AI’s knowledge base to ensure the structural integrity and stability of objects it designs.

    AI-powered startup design for the Anthropocene: Providing guidance for novel enterprises

    Through her work with the teams of MITdesignX and its international programs, Svafa Grönfeldt, faculty director of MITdesignX and professor of the practice in MIT MAD, has helped scores of people in startup companies use the tools and methods of design to ensure that the solution a startup proposes actually fits the problem it seeks to solve. This is often called the problem-solution fit.

    Grönfeldt and MIT postdoc Norhan Bayomi are now extending this work to incorporate AI into the process, in collaboration with MIT Professor John Fernández and graduate student Tyler Kim. The HPI team includes Professor Gerard de Melo; HPI School of Entrepreneurship Director Frank Pawlitschek; and doctoral student Michael Mansfeld.

    “The startup ecosystem is characterized by uncertainty and volatility compounded by growing uncertainties in climate and planetary systems,” Grönfeldt says. “Therefore, there is an urgent need for a robust model that can objectively predict startup success and guide design for the Anthropocene.”

    While startup-success forecasting is gaining popularity, it currently focuses on aiding venture capitalists in selecting companies to fund, rather than guiding the startups in the design of their products, services, and business plans.

    “The coupling of climate and environmental priorities with startup agendas requires deeper analytics for effective enterprise design,” Grönfeldt says. The project aims to explore whether AI-augmented decision-support systems can enhance startup-success forecasting.

    “We’re trying to develop a machine learning approach that will give a forecasting of probability of success based on a number of parameters, including the type of business model proposed, how the team came together, the team members’ backgrounds and skill sets, the market and industry sector they’re working in, and the problem-solution fit,” says Bayomi, who works with Fernández in the MIT Environmental Solutions Initiative. The two are co-founders of the startup Lamarr.AI, which employs robotics and AI to help reduce the carbon dioxide impact of the built environment.

    The team is studying “how company founders make decisions across four key areas, starting from the opportunity recognition, how they are selecting the team members, how they are selecting the business model, identifying the most automatic strategy, all the way through the product market fit to gain an understanding of the key governing parameters in each of these areas,” explains Bayomi.

    The team is “also developing a large language model that will guide the selection of the business model by using large datasets from different companies in Germany and the U.S. We train the model based on the specific industry sector, such as a technology solution or a data solution, to find what would be the most suitable business model that would increase the success probability of a company,” she says.

    The project falls under several of the United Nations’ Sustainable Development Goals, including economic growth, innovation and infrastructure, sustainable cities and communities, and climate action.

    Furthering the goals of the HPI-MIT Joint Research Program

    These three diverse projects all advance the mission of the HPI-MIT collaboration. MIT MAD aims to use design to transform learning, catalyze innovation, and empower society by inspiring people from all disciplines to interweave design into problem-solving. HPI uses digital engineering concentrated on the development and research of user-oriented innovations for all areas of life.

    Interdisciplinary teams with members from both institutions are encouraged to develop and submit proposals for ambitious, sustainable projects that use design strategically to generate measurable, impactful solutions to the world’s problems.

  • Exploring frontiers of mechanical engineering

    From cutting-edge robotics, design, and bioengineering to sustainable energy solutions, ocean engineering, nanotechnology, and innovative materials science, MechE students and their advisors are doing incredibly innovative work. The graduate students highlighted here represent a snapshot of the great work in progress this spring across the Department of Mechanical Engineering, and demonstrate the ways the future of this field is as limitless as the imaginations of its practitioners.

    Democratizing design through AI

    Lyle Regenwetter
    Hometown: Champaign, Illinois
    Advisor: Assistant Professor Faez Ahmed
    Interests: Food, climbing, skiing, soccer, tennis, cooking

    Lyle Regenwetter finds excitement in the prospect of generative AI to “democratize” design and enable inexperienced designers to tackle complex design problems. His research explores new training methods through which generative AI models can be taught to implicitly obey design constraints and synthesize higher-performing designs. Knowing that prospective designers often have an intimate knowledge of the needs of users, but may otherwise lack the technical training to create solutions, Regenwetter also develops human-AI collaborative tools that allow AI models to interact with and support designers in popular CAD software and real design problems.

    Solving a whale of a problem

    Loïcka Baille
    Hometown: L’Escale, France
    Advisor: Daniel Zitterbart
    Interests: Being outdoors — scuba diving, spelunking, or climbing. Sailing on the Charles River, martial arts classes, and playing volleyball

    Loïcka Baille’s research focuses on developing remote sensing technologies to study and protect marine life. Her main project revolves around improving onboard whale detection technology to prevent vessel strikes, with a special focus on protecting North Atlantic right whales. Baille is also involved in an ongoing study of Emperor penguins. Her team visits Antarctica annually to tag penguins and gather data to enhance their understanding of penguin population dynamics and draw conclusions regarding the overall health of the ecosystem.

    Water, water anywhere

    Carlos Díaz-Marín
    Hometown: San José, Costa Rica
    Advisor: Professor Gang Chen | Former advisor: Professor Evelyn Wang
    Interests: New England hiking, biking, and dancing

    Carlos Díaz-Marín designs and synthesizes inexpensive salt-polymer materials that can capture large amounts of humidity from the air. He aims to change the way we generate potable water from the air, even in arid conditions. In addition to water generation, these salt-polymer materials can also be used as thermal batteries, capable of storing and reusing heat. Beyond the scientific applications, Díaz-Marín is excited to continue doing research that can have big social impacts, and that finds and explains new physical phenomena. As a LatinX person, Díaz-Marín is also driven to help increase diversity in STEM.

    Scalable fabrication of nano-architected materials

    Somayajulu Dhulipala
    Hometown: Hyderabad, India
    Advisor: Assistant Professor Carlos Portela
    Interests: Space exploration, taekwondo, meditation

    Somayajulu Dhulipala works on developing lightweight materials with tunable mechanical properties. He is currently working on methods for the scalable fabrication of nano-architected materials and predicting their mechanical properties. The ability to fine-tune the mechanical properties of specific materials brings versatility and adaptability, making these materials suitable for a wide range of applications across multiple industries. While the research applications are quite diverse, Dhulipala is passionate about making space habitable for humanity, a crucial step toward becoming a spacefaring civilization.

    Ingestible health-care devices

    Jimmy McRae
    Hometown: Woburn, Massachusetts
    Advisor: Associate Professor Giovanni Traverso
    Interests: Anything basketball-related: playing, watching, going to games, organizing hometown tournaments

    Jimmy McRae aims to drastically improve diagnostic and therapeutic capabilities through noninvasive health-care technologies. His research focuses on leveraging materials, mechanics, embedded systems, and microfabrication to develop novel ingestible electronic and mechatronic devices. These range from ingestible electroceutical capsules that modulate hunger-regulating hormones to devices capable of continuous ultralong monitoring and remotely triggerable actuations from within the stomach. The principles that guide McRae’s work to develop devices that function in extreme environments can be applied far beyond the gastrointestinal tract, with applications for outer space, the ocean, and more.

    Freestyle BMX meets machine learning

    Eva Nates
    Hometown: Narberth, Pennsylvania
    Advisor: Professor Peko Hosoi
    Interests: Rowing, running, biking, hiking, baking

    Eva Nates is working with the Australian Cycling Team to create a tool to classify Bicycle Motocross Freestyle (BMX FS) tricks. She uses a singular value decomposition method to conduct a principal component analysis of the time-dependent point-tracking data of an athlete and their bike during a run to classify each trick (a rough sketch of this kind of analysis appears at the end of this story). The 2024 Olympic team hopes to incorporate this tool in their training workflow, and Nates worked alongside the team at their facilities on the Gold Coast of Australia during MIT’s Independent Activities Period in January.

    Augmenting astronauts with wearable limbs

    Erik Ballesteros
    Hometown: Spring, Texas
    Advisor: Professor Harry Asada
    Interests: Cosplay, Star Wars, Lego bricks

    Erik Ballesteros’s research seeks to support astronauts who are conducting planetary extravehicular activities through the use of supernumerary robotic limbs (SuperLimbs). His work is tailored toward design and control manifestation to assist astronauts with post-fall recovery, human-leader/robot-follower quadruped locomotion, and coordinated manipulation between the SuperLimbs and the astronaut to perform tasks like excavation and sample handling.

    This article appeared in the Spring 2024 edition of the Department of Mechanical Engineering’s magazine, MechE Connects.
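
    Below is the rough sketch referenced in Eva Nates’s profile: trick classification via SVD-based principal component analysis. Every name, shape, and the nearest-centroid step here are illustrative assumptions, not the team’s actual tool:

        # Illustrative PCA-via-SVD trick classification (hypothetical, not the team's code).
        import numpy as np

        def pca_features(run, k=3):
            """run: (T, F) array of time-dependent point-tracking data for one run."""
            centered = run - run.mean(axis=0)
            # SVD of the centered data; rows of vt are the principal directions.
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            scores = centered @ vt[:k].T        # project onto the top-k components
            return scores.std(axis=0)           # crude per-component summary feature

        def classify(run, prototypes):
            """prototypes: dict mapping trick name -> feature vector from labeled runs."""
            f = pca_features(run)
            return min(prototypes, key=lambda name: np.linalg.norm(f - prototypes[name]))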

  • An AI dataset carves new paths to tornado detection

    The return of spring in the Northern Hemisphere touches off tornado season. A tornado’s twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured from radar, the tool of meteorologists. It’s hard to know exactly when a tornado has formed, or even why.

    A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. Storms that spawned tornadoes are flanked by other severe storms, some with nearly identical conditions, that never did. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature’s most mysterious and violent phenomena.

    “A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project’s co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

    Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning’s ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

    Swirling uncertainty

    About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

    Yet tornadoes are notoriously difficult to forecast because scientists don’t have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won’t. We don’t fully understand it,” Kurdzo says.

    A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As a radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm’s broad, rotating updraft. A mesocyclone doesn’t always produce a tornado.
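
    The geometry behind that limitation can be made concrete with the standard 4/3-effective-Earth-radius beam-height formula, a textbook approximation from the radar literature rather than anything specific to TorNet:

        # Height of the radar beam center above the antenna, using the standard
        # 4/3-effective-Earth-radius model to account for atmospheric refraction.
        import math

        def beam_height_km(range_km, elev_deg, earth_radius_km=6371.0):
            re = (4.0 / 3.0) * earth_radius_km   # effective radius for refraction
            theta = math.radians(elev_deg)
            return math.sqrt(range_km**2 + re**2
                             + 2 * range_km * re * math.sin(theta)) - re

        # At the lowest typical sweep (0.5 degrees), the beam center is already
        # about 1.5 km above the radar at 100 km range -- far above a tornado.
        print(round(beam_height_km(100, 0.5), 2))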

    With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  

    In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

    The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

    Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
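
    To picture that structure, here is a hypothetical sketch of how one such sample might be organized in code; the field names and image sizes are assumptions, not the dataset’s actual schema:

        # Hypothetical layout of one TorNet-style sample (schema assumed for illustration).
        import numpy as np

        N_SWEEPS = 2            # two radar sweep angles
        N_PRODUCTS = 6          # e.g., reflectivity, radial velocity, ...
        H, W = 120, 240         # image size per product (assumed)

        sample = {
            "radar": np.zeros((N_SWEEPS, N_PRODUCTS, H, W), dtype=np.float32),
            "label": 1,                          # 1 = tornadic, 0 = non-tornadic
            "category": "warning_false_alarm",   # or "random_severe", "tornadic"
        }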

    A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

    “What’s beautiful about a true benchmark dataset is that we’re all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

    Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

    “This dataset also means that a grad student doesn’t have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

    This project was funded by Lincoln Laboratory’s Climate Change Initiative, which aims to leverage the laboratory’s diverse technical strengths to help address climate problems threatening human health and global security.

    Chasing answers with deep learning

    Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 

    “We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren’t searched for by forecasters,” Veillette says.

    The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly occurrences of these storms.
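
    As an illustration of the model family involved (not the released TorNet baselines, which may differ in architecture and detail), a CNN tornado/no-tornado classifier can be as simple as:

        # Generic CNN classifier sketch: stacked radar products in, tornado probability out.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 = 2 sweeps x 6 products
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),                    # logit for P(tornado)
        )
        logits = model(torch.randn(8, 12, 120, 240))   # batch of 8 hypothetical samples
        probs = torch.sigmoid(logits)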

    They also evaluated two other types of machine learning models and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

    “The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

    TorNet could be useful in the weather community for other uses too, such as conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

    Taking steps toward operations

    On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

    “As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don’t know about?” he asks.

    Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 
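
    One simple example of such a method is input-gradient saliency, sketched below; it is a generic explainability technique offered for illustration, not necessarily the approach the team will take:

        # Input-gradient saliency: which input pixels most sway the model's decision.
        import torch
        import torch.nn as nn

        # Tiny stand-in classifier (illustrative only).
        model = nn.Sequential(nn.Conv2d(12, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

        x = torch.randn(1, 12, 120, 240, requires_grad=True)   # one radar sample
        model(x).sum().backward()
        saliency = x.grad.abs().max(dim=1).values   # per-pixel influence map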

    “None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters’ eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

    Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.

    But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It’s a first step.”

    The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they’ll eventually be shown to forecasters, to start a process of transitioning into operations.

    In the end, the path could circle back to trust.

    “We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”

  • Advancing technology for aquaculture

    According to the National Oceanic and Atmospheric Administration, aquaculture in the United States represents a $1.5 billion industry annually. Like land-based farming, shellfish aquaculture requires healthy seed production in order to maintain a sustainable industry. Aquaculture hatchery production of shellfish larvae — seeds — requires close monitoring to track mortality rates and assess health from the earliest stages of life. 

    Careful observation is necessary to inform production scheduling, determine effects of naturally occurring harmful bacteria, and ensure sustainable seed production. This is an essential step for shellfish hatcheries but is currently a time-consuming manual process prone to human error. 

    With funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), MIT Sea Grant is working with Associate Professor Otto Cordero of the MIT Department of Civil and Environmental Engineering; Professor Taskin Padir and Research Scientist Mark Zolotas at the Northeastern University Institute for Experiential Robotics; and others at the Aquaculture Research Corporation (ARC) and the Cape Cod Commercial Fishermen’s Alliance to advance technology for the aquaculture industry. Located on Cape Cod, ARC is a leading shellfish hatchery, farm, and wholesaler that plays a vital role in providing high-quality shellfish seed to local and regional growers.

    Two MIT students have joined the effort this semester, working with Robert Vincent, MIT Sea Grant’s assistant director of advisory services, through the Undergraduate Research Opportunities Program (UROP). 

    First-year student Unyime Usua and sophomore Santiago Borrego are using microscopy images of shellfish seed from ARC to train machine learning algorithms that will help automate the identification and counting process. The resulting user-friendly image recognition tool aims to aid aquaculturists in differentiating and counting healthy, unhealthy, and dead shellfish larvae, improving accuracy and reducing time and effort.
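
    As a purely hypothetical illustration of the classify-then-count logic such a tool needs (the model, patch sizes, and class names below are assumptions, not the project’s actual design):

        # Sketch: classify per-larva image patches, then count each class.
        import torch
        import torch.nn as nn

        classes = ["healthy", "unhealthy", "dead"]
        model = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, len(classes)),
        )

        def count_larvae(patches):
            """patches: (N, 3, 64, 64) tensor of per-larva crops from a microscope image."""
            preds = model(patches).argmax(dim=1)
            return {c: int((preds == i).sum()) for i, c in enumerate(classes)}

        print(count_larvae(torch.randn(10, 3, 64, 64)))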

    Vincent explains that AI is a powerful tool for environmental science, enabling researchers, industry, and resource managers to address long-standing pinch points in accurate data collection, analysis, and prediction, and to streamline processes. “Funding support from programs like J-WAFS enable us to tackle these problems head-on,” he says.

    ARC faces challenges with manually quantifying larvae classes, an important step in their seed production process. “When larvae are in their growing stages they are constantly being sized and counted,” explains Cheryl James, ARC larval/juvenile production manager. “This process is critical to encourage optimal growth and strengthen the population.” 

    Developing an automated identification and counting system will help to improve this step in the production process with time and cost benefits. “This is not an easy task,” says Vincent, “but with the guidance of Dr. Zolotas at the Northeastern University Institute for Experiential Robotics and the work of the UROP students, we have made solid progress.” 

    The UROP program benefits both researchers and students. Involving MIT UROP students in developing these types of systems exposes them to AI applications they might not have considered, and gives them opportunities to explore, learn, and apply themselves while contributing to solving real challenges.

    Borrego saw this project as an opportunity to apply what he’d learned in class 6.390 (Introduction to Machine Learning) to a real-world issue. “I was starting to form an idea of how computers can see images and extract information from them,” he says. “I wanted to keep exploring that.”

    Usua decided to pursue the project because of the direct industry impacts it could have. “I’m pretty interested in seeing how we can utilize machine learning to make people’s lives easier. We are using AI to help biologists make this counting and identification process easier.” While Usua wasn’t familiar with aquaculture before starting this project, she explains, “Just hearing about the hatcheries that Dr. Vincent was telling us about, it was unfortunate that not a lot of people know what’s going on and the problems that they’re facing.”

    On Cape Cod alone, aquaculture is an $18 million per year industry. But the Massachusetts Division of Marine Fisheries estimates that hatcheries are only able to meet 70–80 percent of seed demand annually, which impacts local growers and economies. Through this project, the partners aim to develop technology that will increase seed production, advance industry capabilities, and help understand and improve the hatchery microbiome.

    Borrego explains the initial challenge of having limited data to work with. “Starting out, we had to go through and label all of the data, but going through that process helped me learn a lot.” In true MIT fashion, he shares his takeaway from the project: “Try to get the best out of what you’re given with the data you have to work with. You’re going to have to adapt and change your strategies depending on what you have.”

    Usua describes her experience going through the research process, communicating in a team, and deciding what approaches to take. “Research is a difficult and long process, but there is a lot to gain from it because it teaches you to look for things on your own and find your own solutions to problems.”

    In addition to increasing seed production and reducing the human labor required in the hatchery process, the collaborators expect this project to contribute to cost savings and technology integration to support one of the most underserved industries in the United States. 

    Borrego and Usua both plan to continue their work for a second semester with MIT Sea Grant. Borrego is interested in learning more about how technology can be used to protect the environment and wildlife. Usua says she hopes to explore more projects related to aquaculture. “It seems like there’s an infinite amount of ways to tackle these issues.”

  • Using deep learning to image the Earth’s planetary boundary layer

    Although the troposphere is often thought of as the closest layer of the atmosphere to the Earth’s surface, the planetary boundary layer (PBL) — the lowest layer of the troposphere — is actually the part that most significantly influences weather near the surface. In the 2018 planetary science decadal survey, the PBL was raised as an important scientific issue that has the potential to enhance storm forecasting and improve climate projections.  

    “The PBL is where the surface interacts with the atmosphere, including exchanges of moisture and heat that help lead to severe weather and a changing climate,” says Adam Milstein, a technical staff member in Lincoln Laboratory’s Applied Space Systems Group. “The PBL is also where humans live, and the turbulent movement of aerosols throughout the PBL is important for air quality that influences human health.” 

    Although vital for studying weather and climate, important features of the PBL, such as its height, are difficult to resolve with current technology. In the past four years, Lincoln Laboratory staff have been studying the PBL, focusing on two different tasks: using machine learning to make 3D-scanned profiles of the atmosphere, and resolving the vertical structure of the atmosphere more clearly in order to better predict droughts.  

    This PBL-focused research effort builds on more than a decade of related work on fast, operational neural network algorithms developed by Lincoln Laboratory for NASA missions. These missions include the Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission as well as Aqua, a satellite that collects data about Earth’s water cycle and observes variables such as ocean temperature, precipitation, and water vapor in the atmosphere. These algorithms retrieve temperature and humidity from the satellite instrument data and have been shown to significantly improve the accuracy and usable global coverage of the observations over previous approaches. For TROPICS, the algorithms help retrieve data that are used to characterize a storm’s rapidly evolving structures in near-real time, and for Aqua, they have helped improve forecasting models, drought monitoring, and fire prediction.

    These operational algorithms for TROPICS and Aqua are based on classic “shallow” neural networks to maximize speed and simplicity, creating a one-dimensional vertical profile for each spectral measurement collected by the instrument over each location. While this approach has improved observations of the atmosphere down to the surface overall, including the PBL, laboratory staff determined that newer “deep” learning techniques that treat the atmosphere over a region of interest as a three-dimensional image are needed to improve PBL details further.
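
    The architectural contrast can be sketched in a few lines; the channel counts and layer choices below are illustrative assumptions, not the operational NASA/Lincoln Laboratory algorithms:

        # "Shallow" vs. "deep" retrieval, sketched with assumed sizes.
        import torch
        import torch.nn as nn

        # Shallow: one spectral measurement vector in, one vertical profile out,
        # computed independently for each location.
        shallow = nn.Sequential(nn.Linear(22, 64), nn.Tanh(), nn.Linear(64, 40))

        # Deep: treat the atmosphere over a whole region as an image, so
        # neighboring locations inform each other's retrieved profiles.
        deep = nn.Sequential(
            nn.Conv2d(22, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 40, 3, padding=1),    # 40 vertical levels per pixel
        )

        profile = shallow(torch.randn(1, 22))        # one profile: (1, 40)
        cube = deep(torch.randn(1, 22, 128, 128))    # 3D imagery: (1, 40, 128, 128)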

    “We hypothesized that deep learning and artificial intelligence (AI) techniques could improve on current approaches by incorporating a better statistical representation of 3D temperature and humidity imagery of the atmosphere into the solutions,” Milstein says. “But it took a while to figure out how to create the best dataset — a mix of real and simulated data; we needed to prepare to train these techniques.”

    The team collaborated with Joseph Santanello of the NASA Goddard Space Flight Center and William Blackwell, also of the Applied Space Systems Group, in a recent NASA-funded effort showing that these retrieval algorithms can improve PBL detail, including more accurate determination of the PBL height than the previous state of the art. 

    While improved knowledge of the PBL is broadly useful for increasing understanding of climate and weather, one key application is prediction of droughts. According to a Global Drought Snapshot report released last year, droughts are a pressing planetary issue that the global community needs to address. Lack of humidity near the surface, specifically at the level of the PBL, is the leading indicator of drought. While previous studies using remote-sensing techniques have examined the humidity of soil to determine drought risk, studying the atmosphere can help predict when droughts will happen.  

    In an effort funded by Lincoln Laboratory’s Climate Change Initiative, Milstein, along with laboratory staff member Michael Pieper, is working with scientists at NASA’s Jet Propulsion Laboratory (JPL) to use neural network techniques to improve drought prediction over the continental United States. While the work builds on existing operational work JPL has done incorporating (in part) the laboratory’s operational “shallow” neural network approach for Aqua, the team believes that this work and the PBL-focused deep learning research can be combined to further improve the accuracy of drought prediction.

    “Lincoln Laboratory has been working with NASA for more than a decade on neural network algorithms for estimating temperature and humidity in the atmosphere from space-borne infrared and microwave instruments, including those on the Aqua spacecraft,” Milstein says. “Over that time, we have learned a lot about this problem by working with the science community, including learning about what scientific challenges remain. Our long experience working on this type of remote sensing with NASA scientists, as well as our experience with using neural network techniques, gave us a unique perspective.”

    According to Milstein, the next step for this project is to compare the deep learning results to datasets from the National Oceanic and Atmospheric Administration, NASA, and the Department of Energy collected directly in the PBL using radiosondes, a type of instrument flown on a weather balloon. “These direct measurements can be considered a kind of ‘ground truth’ to quantify the accuracy of the techniques we have developed,” Milstein says.

    This improved neural network approach holds promise for demonstrating drought prediction that exceeds the capabilities of existing indicators, Milstein says, and for becoming a tool that scientists can rely on for decades to come.

  • in

    Extracting hydrogen from rocks

    It’s commonly thought that the most abundant element in the universe, hydrogen, exists mainly alongside other elements — with oxygen in water, for example, and with carbon in methane. But naturally occurring underground pockets of pure hydrogen are punching holes in that notion — and generating attention as a potentially unlimited source of carbon-free power. One interested party is the U.S. Department of Energy, which last month awarded $20 million in research grants to 18 teams from laboratories, universities, and private companies to develop technologies that can lead to cheap, clean fuel from the subsurface.

    Geologic hydrogen, as it’s known, is produced when water reacts with iron-rich rocks, causing the iron to oxidize (a simplified form of the reaction is shown below). One of the grant recipients, MIT Assistant Professor Iwnetim Abate’s research group, will use its $1.3 million grant to determine the ideal conditions for producing hydrogen underground, considering factors such as catalysts to initiate the chemical reaction, temperature, pressure, and pH levels. The goal is to improve efficiency for large-scale production, meeting global energy needs at a competitive cost.

    The U.S. Geological Survey estimates there are potentially billions of tons of geologic hydrogen buried in the Earth’s crust. Accumulations have been discovered worldwide, and a slew of startups are searching for extractable deposits. Abate is looking to jump-start the natural hydrogen production process, implementing “proactive” approaches that involve stimulating production and harvesting the gas. “We aim to optimize the reaction parameters to make the reaction faster and produce hydrogen in an economically feasible manner,” says Abate, the Chipman Development Professor in the Department of Materials Science and Engineering (DMSE). Abate’s research centers on designing materials and technologies for the renewable energy transition, including next-generation batteries and novel chemical methods for energy storage.
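    A rough illustration of that chemistry, written here for the iron-rich mineral fayalite (a textbook serpentinization-type reaction, not necessarily the exact pathway Abate’s group is targeting): water oxidizes the ferrous iron to magnetite and releases hydrogen gas.

    $$ 3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2} $$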

    Sparking innovation

    Interest in geologic hydrogen is growing at a time when governments worldwide are seeking carbon-free energy alternatives to oil and gas. In December, French President Emmanuel Macron said his government would provide funding to explore natural hydrogen. And in February, government and private sector witnesses briefed U.S. lawmakers on opportunities to extract hydrogen from the ground.

    Today, commercial hydrogen is manufactured at $2 a kilogram, mostly for fertilizer and chemical and steel production, but most methods involve burning fossil fuels, which release Earth-heating carbon. “Green hydrogen,” produced with renewable energy, is promising, but at $7 per kilogram, it’s expensive. “If you get hydrogen at a dollar a kilo, it’s competitive with natural gas on an energy-price basis,” says Douglas Wicks, a program director at the Advanced Research Projects Agency – Energy (ARPA-E), the Department of Energy organization leading the geologic hydrogen grant program.

    Recipients of the ARPA-E grants include the Colorado School of Mines, Texas Tech University, and Los Alamos National Laboratory, plus private companies including Koloma, a hydrogen production startup that has received funding from Amazon and Bill Gates. The projects themselves are diverse, ranging from applying industrial oil and gas methods for hydrogen production and extraction to developing models to understand hydrogen formation in rocks. The purpose: to address questions in what Wicks calls a “total white space.”

    “In geologic hydrogen, we don’t know how we can accelerate the production of it, because it’s a chemical reaction, nor do we really understand how to engineer the subsurface so that we can safely extract it,” Wicks says. “We’re trying to bring in the best skills of each of the different groups to work on this under the idea that the ensemble should be able to give us good answers in a fairly rapid timeframe.”

    Geochemist Viacheslav Zgonnik, one of the foremost experts in the natural hydrogen field, agrees that the list of unknowns is long, as is the road to the first commercial projects. But he says efforts to stimulate hydrogen production — to harness the natural reaction between water and rock — present “tremendous potential.” “The idea is to find ways we can accelerate that reaction and control it so we can produce hydrogen on demand in specific places,” says Zgonnik, CEO and founder of Natural Hydrogen Energy, a Denver-based startup that has mineral leases for exploratory drilling in the United States. “If we can achieve that goal, it means that we can potentially replace fossil fuels with stimulated hydrogen.”
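    For context on Wicks’ dollar-a-kilo benchmark, a back-of-the-envelope conversion (round numbers assumed here, not figures from ARPA-E): hydrogen carries roughly 120 MJ per kilogram on a lower-heating-value basis, so

    $$ \frac{\$1/\mathrm{kg}}{120\ \mathrm{MJ/kg}} \approx \$8.3\ \mathrm{per\ GJ} \approx \$8.8\ \mathrm{per\ MMBtu}, $$

    which lands in the range of delivered natural-gas prices in many markets, though above typical U.S. wholesale benchmarks.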

    “A full-circle moment”

    For Abate, the connection to the project is personal. When he was a child in his hometown in Ethiopia, power outages were a regular occurrence — the lights would be out three, maybe four days a week. Flickering candles or pollutant-emitting kerosene lamps were often the only source of light for doing homework at night. “And for the household, we had to use wood and charcoal for chores such as cooking,” says Abate. “That was my story all the way until the end of high school and before I came to the U.S. for college.”

    In 1987, well-diggers drilling for water in Mali, in West Africa, uncovered a natural hydrogen deposit, causing an explosion. Decades later, Malian entrepreneur Aliou Diallo and his Canadian oil and gas company tapped the well and used an engine to burn the hydrogen and generate electricity for a nearby village. Ditching oil and gas, Diallo launched Hydroma, the world’s first hydrogen exploration enterprise. The company is drilling wells near the original site that have yielded high concentrations of the gas.

    “So, what used to be known as an energy-poor continent now is generating hope for the future of the world,” Abate says. “Learning about that was a full-circle moment for me. Of course, the problem is global; the solution is global. But then the connection with my personal journey, plus the solution coming from my home continent, makes me personally connected to the problem and to the solution.”

    Experiments that scale

    Abate and researchers in his lab are formulating a recipe for a fluid that will induce the chemical reaction that triggers hydrogen production in rocks. The main ingredient is water, and the team is testing “simple” materials for catalysts that will speed up the reaction and in turn increase the amount of hydrogen produced, says postdoc Yifan Gao. “Some catalysts are very costly and hard to produce, requiring complex production or preparation,” Gao says. “A catalyst that’s inexpensive and abundant will allow us to enhance the production rate — that way, we produce it at an economically feasible rate, but also with an economically feasible yield.”

    The iron-rich rocks in which the chemical reaction happens can be found across the United States and the world. To optimize the reaction across a diversity of geological compositions and environments, Abate and Gao are developing what they call a high-throughput system, consisting of artificial intelligence software and robotics, to test different catalyst mixtures and simulate what would happen when they are applied to rocks from various regions, under different external conditions like temperature and pressure. “And from that we measure how much hydrogen we are producing for each possible combination,” Abate says. “Then the AI will learn from the experiments and suggest to us, ‘Based on what I’ve learned and based on the literature, I suggest you test this composition of catalyst material for this rock.’” (One common algorithmic pattern for such a closed loop is sketched at the end of this story.) The team is writing a paper on its project and aims to publish its findings in the coming months.

    The next milestone for the project, after developing the catalyst recipe, is designing a reactor that will serve two purposes. First, fitted with technologies such as Raman spectroscopy, it will allow researchers to identify and optimize the chemical conditions that lead to improved rates and yields of hydrogen production. Second, the lab-scale device will inform the design of a real-world reactor that can accelerate hydrogen production in the field. “That would be a plant-scale reactor that would be implanted into the subsurface,” Abate says.

    The cross-disciplinary project is also tapping the expertise of Yang Shao-Horn, of MIT’s Department of Mechanical Engineering and DMSE, for computational analysis of the catalyst, and Esteban Gazel, a Cornell University scientist who will lend his expertise in geology and geochemistry. Gazel will focus on understanding the iron-rich ultramafic rock formations across the United States and the globe, and how they react with water.

    For Wicks at ARPA-E, the questions Abate and the other grant recipients are asking are just the first, critical steps into uncharted energy territory. “If we can understand how to stimulate these rocks into generating hydrogen, safely getting it up, it really unleashes the potential energy source,” he says. Then the emerging industry will look to oil and gas for drilling, piping, and gas-extraction know-how. “As I like to say, this is enabling technology that we hope to, in a very short term, enable us to say, ‘Is there really something there?’”
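    The article does not specify what drives the “AI will learn from the experiments” loop, so the sketch below uses Bayesian optimization, a common choice for closed-loop experimental screening. Every name here (the design-space bounds, the measure_h2_yield stand-in) is a hypothetical illustration, not the MIT team’s system:

    ```python
    # Closed-loop catalyst screening via Bayesian optimization: fit a
    # surrogate model to past experiments, pick the most promising next
    # condition, "run" it, and repeat. All parameters are hypothetical.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    # Assumed 3-parameter design space, searched in normalized [0, 1] units.
    lo = np.array([0.0, 25.0, 4.0])    # catalyst fraction, temp (C), pH
    hi = np.array([0.1, 300.0, 10.0])
    candidates = rng.uniform(0.0, 1.0, (2000, 3))

    def measure_h2_yield(u):
        """Stand-in for one robotic experiment; a real loop would run
        the reactor and measure evolved H2 instead."""
        cat, temp, ph = lo + u * (hi - lo)
        return cat * np.exp(-((temp - 200.0) / 80.0) ** 2) - 0.01 * abs(ph - 7.0)

    # Seed with a handful of random experiments, then iterate.
    X = candidates[rng.choice(len(candidates), size=5, replace=False)]
    y = np.array([measure_h2_yield(u) for u in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):
        gp.fit(X, y)                      # surrogate over tried conditions
        mu, sigma = gp.predict(candidates, return_std=True)
        u_next = candidates[np.argmax(mu + 2.0 * sigma)]  # UCB acquisition
        X = np.vstack([X, u_next])
        y = np.append(y, measure_h2_yield(u_next))

    best = lo + X[np.argmax(y)] * (hi - lo)
    print(f"best found: catalyst={best[0]:.3f}, T={best[1]:.0f} C, "
          f"pH={best[2]:.1f}, yield={y.max():.4f}")
    ```

    The upper-confidence-bound rule trades off exploiting conditions the surrogate already rates highly against exploring uncertain regions, which is what lets a small robot-run experiment budget cover a large design space.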