More stories

  •

    Advancing urban tree monitoring with AI-powered digital twins

    The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

    What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

    “We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

    Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.

    The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

    A tree grows in Brooklyn (and many other places)

    The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
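
    To make that division of labor concrete, here is a minimal sketch of the hybrid idea (illustrative only, not the Tree-D Fusion code): a procedural branching model generates plausible structure, while growth is clipped to a 3D envelope standing in for the shape a deep network would predict from a street-view image. The envelope_contains function and all parameters are hypothetical.

```python
# Illustrative sketch of the hybrid approach described above (not the
# authors' code): procedural branch growth constrained to a 3D envelope
# that a deep-learning model is assumed to have predicted from an image.
import numpy as np

def envelope_contains(point):
    """Hypothetical stand-in for the learned envelope: a narrow trunk
    region topped by an ellipsoidal crown."""
    x, y, z = point
    if z < 2.0:                       # trunk region
        return x**2 + y**2 < 0.1
    return (x / 3.0)**2 + (y / 3.0)**2 + ((z - 5.0) / 3.0)**2 < 1.0

def grow(start, direction, depth, rng, segments):
    """Recursively grow branches, keeping only segments in the envelope."""
    if depth == 0:
        return
    end = start + direction
    if not envelope_contains(end):    # the envelope, not the procedural
        return                        # rules, bounds the overall shape
    segments.append((start, end))
    for _ in range(2):                # two children per branch tip
        # jitter the direction to mimic genus-specific branching angles
        child = direction + rng.normal(scale=0.4, size=3)
        child = 0.8 * child / np.linalg.norm(child)
        grow(end, child, depth - 1, rng, segments)

rng = np.random.default_rng(0)
segments = []
grow(np.zeros(3), np.array([0.0, 0.0, 1.0]), 8, rng, segments)
print(f"{len(segments)} branch segments inside the envelope")
```
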
    Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could one day turn sweltering city blocks into more naturally cooled neighborhoods.

    “Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

    AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

    It’s a breeze

    While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

    “What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

    The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

    The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

    “This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

    Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google), and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.

  •

    AI method radically speeds predictions of materials’ thermal properties

    It is estimated that about 70 percent of the energy generated worldwide ends up as waste heat.

    If scientists could better predict how heat moves through semiconductors and insulators, they could design more efficient power generation systems. However, the thermal properties of materials can be exceedingly difficult to model.

    The trouble comes from phonons, which are quasiparticles that carry heat. Some of a material’s thermal properties depend on a measurement called the phonon dispersion relation, which can be incredibly hard to obtain, let alone utilize in the design of a system.

    A team of researchers from MIT and elsewhere tackled this challenge by rethinking the problem from the ground up. The result of their work is a new machine-learning framework that can predict phonon dispersion relations up to 1,000 times faster than other AI-based techniques, with comparable or even better accuracy. Compared to more traditional, non-AI-based approaches, it could be 1 million times faster.

    This method could help engineers design energy generation systems that produce more power, more efficiently. It could also be used to develop more efficient microelectronics, since managing heat remains a major bottleneck to speeding up electronics.

    “Phonons are the culprit for the thermal loss, yet obtaining their properties is notoriously challenging, either computationally or experimentally,” says Mingda Li, associate professor of nuclear science and engineering and senior author of a paper on this technique.

    Li is joined on the paper by co-lead authors Ryotaro Okabe, a chemistry graduate student, and Abhijatmedhi Chotrattanapituk, an electrical engineering and computer science graduate student; Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT; as well as others at MIT, Argonne National Laboratory, Harvard University, the University of South Carolina, Emory University, the University of California at Santa Barbara, and Oak Ridge National Laboratory. The research appears in Nature Computational Science.

    Predicting phonons

    Heat-carrying phonons are tricky to predict because they have an extremely wide frequency range, and the particles interact and travel at different speeds.

    A material’s phonon dispersion relation is the relationship between energy and momentum of phonons in its crystal structure. For years, researchers have tried to predict phonon dispersion relations using machine learning, but there are so many high-precision calculations involved that models get bogged down.
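
    For orientation, in standard lattice-dynamics notation (a textbook formulation, not notation taken from this paper), the dispersion relation is obtained by diagonalizing the dynamical matrix at each wave vector; the resulting frequency branches over the Brillouin zone make up the dispersion relation.

```latex
% Textbook lattice-dynamics eigenproblem (for orientation only):
% \Phi are interatomic force constants, m_j atomic masses, and the
% eigenvalues give the squared phonon frequencies \omega^2 for each
% wave vector q and branch \nu.
\[
  D(\mathbf{q})\,\mathbf{e}_{\mathbf{q}\nu}
    = \omega_{\mathbf{q}\nu}^{2}\,\mathbf{e}_{\mathbf{q}\nu},
  \qquad
  D^{jk}_{\alpha\beta}(\mathbf{q})
    = \frac{1}{\sqrt{m_j m_k}}
      \sum_{\ell} \Phi^{jk}_{\alpha\beta}(\ell)\,
      e^{\,i\,\mathbf{q}\cdot\mathbf{R}_{\ell}}
\]
```
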
    “If you have 100 CPUs and a few weeks, you could probably calculate the phonon dispersion relation for one material. The whole community really wants a more efficient way to do this,” says Okabe.

    The machine-learning models scientists often use for these calculations are known as graph neural networks (GNNs). A GNN converts a material’s atomic structure into a crystal graph comprising multiple nodes, which represent atoms, connected by edges, which represent the interatomic bonds between atoms.

    While GNNs work well for calculating many quantities, like magnetization or electrical polarization, they are not flexible enough to efficiently predict an extremely high-dimensional quantity like the phonon dispersion relation. Because phonons can travel around atoms on X, Y, and Z axes, their momentum space is hard to model with a fixed graph structure.

    To gain the flexibility they needed, Li and his collaborators devised virtual nodes. They create what they call a virtual node graph neural network (VGNN) by adding a series of flexible virtual nodes to the fixed crystal structure to represent phonons. The virtual nodes enable the output of the neural network to vary in size, so it is not restricted by the fixed crystal structure.

    Virtual nodes are connected to the graph in such a way that they can only receive messages from real nodes. While virtual nodes will be updated as the model updates real nodes during computation, they do not affect the accuracy of the model.

    “The way we do this is very efficient in coding. You just generate a few more nodes in your GNN. The physical location doesn’t matter, and the real nodes don’t even know the virtual nodes are there,” says Chotrattanapituk.
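
    As a rough illustration of those mechanics (not the authors’ released code), the sketch below adds virtual nodes that aggregate messages from real atom nodes but send nothing back, so the real-node computation is unchanged and the number of outputs is decoupled from the crystal graph.

```python
# Minimal sketch of the virtual-node idea (illustrative, not the paper's
# implementation): virtual nodes aggregate from real atom nodes but never
# send messages back, so adding them does not perturb the real nodes.
import numpy as np

def message_passing_with_virtual_nodes(real_feats, adjacency, n_virtual, rng):
    """One round of mean-aggregation message passing.

    real_feats : (n_real, d) array of atom features
    adjacency  : (n_real, n_real) 0/1 matrix of interatomic bonds
    n_virtual  : number of virtual nodes (free to vary per query)
    """
    # real -> real update (an ordinary GNN step)
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    real_updated = real_feats + adjacency @ real_feats / deg
    # real -> virtual update only; with no virtual -> real edges,
    # real_updated is identical whether or not virtual nodes exist
    w = rng.normal(size=(n_virtual, real_feats.shape[0]))  # learned in a real model
    virtual_feats = w @ real_updated / real_feats.shape[0]
    return real_updated, virtual_feats   # output size set by n_virtual

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))          # 4 atoms, 8-dim features
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], float)
real_out, virt_out = message_passing_with_virtual_nodes(feats, adj, 6, rng)
print(real_out.shape, virt_out.shape)    # (4, 8) (6, 8)
```
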
    Cutting out complexity

    Since it has virtual nodes to represent phonons, the VGNN can skip many complex calculations when estimating phonon dispersion relations, which makes the method more efficient than a standard GNN. The researchers proposed three different versions of VGNNs with increasing complexity. Each can be used to predict phonons directly from a material’s atomic coordinates.

    Because their approach has the flexibility to rapidly model high-dimensional properties, they can use it to estimate phonon dispersion relations in alloy systems. These complex combinations of metals and nonmetals are especially challenging for traditional approaches to model.

    The researchers also found that VGNNs offered slightly greater accuracy when predicting a material’s heat capacity. In some instances, prediction errors were two orders of magnitude lower with their technique.

    A VGNN could be used to calculate phonon dispersion relations for a few thousand materials in just a few seconds with a personal computer, Li says.

    This efficiency could enable scientists to search a larger space when seeking materials with certain thermal properties, such as superior thermal storage, energy conversion, or superconductivity.

    Moreover, the virtual node technique is not exclusive to phonons, and could also be used to predict challenging optical and magnetic properties.

    In the future, the researchers want to refine the technique so virtual nodes have greater sensitivity to capture small changes that can affect phonon structure.

    “Researchers got too comfortable using graph nodes to represent atoms, but we can rethink that. Graph nodes can be anything. And virtual nodes are a very generic approach you could use to predict a lot of high-dimensional quantities,” Li says.

    “The authors’ innovative approach significantly augments the graph neural network description of solids by incorporating key physics-informed elements through virtual nodes, for instance, informing wave-vector dependent band-structures and dynamical matrices,” says Olivier Delaire, associate professor in the Thomas Lord Department of Mechanical Engineering and Materials Science at Duke University, who was not involved with this work. “I find that the level of acceleration in predicting complex phonon properties is amazing, several orders of magnitude faster than a state-of-the-art universal machine-learning interatomic potential. Impressively, the advanced neural net captures fine features and obeys physical rules. There is great potential to expand the model to describe other important material properties: Electronic, optical, and magnetic spectra and band structures come to mind.”

    This work is supported by the U.S. Department of Energy, National Science Foundation, a Mathworks Fellowship, a Sow-Hsin Chen Fellowship, the Harvard Quantum Initiative, and the Oak Ridge National Laboratory.

  •

    School of Engineering welcomes new faculty

    The School of Engineering welcomes 15 new faculty members across six of its academic departments. The members of this new cohort, who have either recently started their roles at MIT or will start within the next year, conduct research across a diverse range of disciplines.

    Many of these new faculty specialize in research that intersects with multiple fields. In addition to positions in the School of Engineering, a number of these faculty have positions at other units across MIT. Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) report into both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing. This year, new faculty also have joint appointments between the School of Engineering and the School of Humanities, Arts, and Social Sciences and the School of Science.

    “I am delighted to welcome this cohort of talented new faculty to the School of Engineering,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am particularly struck by the interdisciplinary approach many of these new faculty take in their research. They are working in areas that are poised to have tremendous impact. I look forward to seeing them grow as researchers and educators.”

    The new engineering faculty include:

    Stephen Bates joined the Department of Electrical Engineering and Computer Science as an assistant professor in September 2023. He is also a member of the Laboratory for Information and Decision Systems (LIDS). Bates uses data and AI for reliable decision-making in the presence of uncertainty. In particular, he develops tools for statistical inference with AI models, data impacted by strategic behavior, and settings with distribution shift. Bates also works on applications in life sciences and sustainability. He previously worked as a postdoc in the Statistics and EECS departments at the University of California at Berkeley (UC Berkeley). Bates received a BS in statistics and mathematics at Harvard University and a PhD from Stanford University.

    Abigail Bodner joined the Department of EECS and Department of Earth, Atmospheric and Planetary Sciences as an assistant professor in January. She is also a member of LIDS. Bodner’s research interests span climate, physical oceanography, geophysical fluid dynamics, and turbulence. Previously, she worked as a Simons Junior Fellow at the Courant Institute of Mathematical Sciences at New York University. Bodner received her BS in geophysics and mathematics and MS in geophysics from Tel Aviv University, and her SM in applied mathematics and PhD from Brown University.

    Andreea Bobu ’17 will join the Department of Aeronautics and Astronautics as an assistant professor in July. Her research sits at the intersection of robotics, mathematical human modeling, and deep learning. Previously, she was a research scientist at the Boston Dynamics AI Institute, focusing on how robots and humans can efficiently arrive at shared representations of their tasks for more seamless and reliable interactions. Bobu earned a BS in computer science and engineering from MIT and a PhD in electrical engineering and computer science from UC Berkeley.

    Suraj Cheema will join the Department of Materials Science and Engineering, with a joint appointment in the Department of EECS, as an assistant professor in July. His research explores atomic-scale engineering of electronic materials to tackle challenges related to energy consumption, storage, and generation, aiming for more sustainable microelectronics. This spans computing and energy technologies via integrated ferroelectric devices. He previously worked as a postdoc at UC Berkeley. Cheema earned a BS in applied physics and applied mathematics from Columbia University and a PhD in materials science and engineering from UC Berkeley.

    Samantha Coday will join the Department of EECS as an assistant professor in July. She will also be a member of the MIT Research Laboratory of Electronics. Her research interests include ultra-dense power converters enabling renewable energy integration, hybrid electric aircraft, and future space exploration. To enable high-performance converters for these critical applications, her research focuses on the optimization, design, and control of hybrid switched-capacitor converters. Coday earned a BS in electrical engineering and mathematics from Southern Methodist University and an MS and a PhD in electrical engineering and computer science from UC Berkeley.

    Mitchell Gordon will join the Department of EECS as an assistant professor in July. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. He currently works as a postdoc at the University of Washington. Gordon received a BS from the University of Rochester, and an MS and PhD from Stanford University, all in computer science.

    Kaiming He joined the Department of EECS as an associate professor in February. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research interests cover a wide range of topics in computer vision and deep learning. He is currently focused on building computer models that can learn representations and develop intelligence from and for the complex world. Long term, he hopes to augment human intelligence with improved artificial intelligence. Before joining MIT, He was a research scientist at Facebook AI. He earned a BS from Tsinghua University and a PhD from the Chinese University of Hong Kong.

    Anna Huang SM ’08 will join the departments of EECS and Music and Theater Arts as an assistant professor in September. She will help develop graduate programming focused on music technology. Previously, she spent eight years with Magenta at Google Brain and DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction to support human-AI partnerships in music-making. She is the creator of Music Transformer and Coconet (which powered the Bach Google Doodle), and was a judge and organizer for the AI Song Contest. Huang holds a Canada CIFAR AI Chair at Mila. She received a BM in music composition and a BS in computer science from the University of Southern California, an MS from the MIT Media Lab, and a PhD from Harvard University.

    Yael Kalai PhD ’06 will join the Department of EECS as a professor in September. She is also a member of CSAIL. Her research interests include cryptography, the theory of computation, and security and privacy. Kalai currently focuses on both the theoretical and real-world applications of cryptography, including work on succinct and easily verifiable non-interactive proofs. She received her bachelor’s degree from the Hebrew University of Jerusalem, a master’s degree from the Weizmann Institute of Science, and a PhD from MIT.

    Sendhil Mullainathan will join the departments of EECS and Economics as a professor in July. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Previously, Mullainathan spent five years at MIT before joining the faculty at Harvard in 2004, and then the University of Chicago in 2018. He received his BA in computer science, mathematics, and economics from Cornell University and his PhD from Harvard University.

    Alex Rives will join the Department of EECS as an assistant professor in September, with a core membership in the Broad Institute of MIT and Harvard. In his research, Rives is focused on AI for scientific understanding, discovery, and design for biology. Rives worked with Meta as a New York University graduate student, where he founded and led the Evolutionary Scale Modeling team that developed large language models for proteins. Rives received his BS in philosophy and biology from Yale University and is completing his PhD in computer science at NYU.

    Sungho Shin will join the Department of Chemical Engineering as an assistant professor in July. His research interests include control theory, optimization algorithms, high-performance computing, and their applications to decision-making in complex systems, such as energy infrastructures. Shin is a postdoc at the Mathematics and Computer Science Division at Argonne National Laboratory. He received a BS in mathematics and chemical engineering from Seoul National University and a PhD in chemical engineering from the University of Wisconsin-Madison.

    Jessica Stark joined the Department of Biological Engineering as an assistant professor in January. In her research, Stark is developing technologies to realize the largely untapped potential of cell-surface sugars, called glycans, for immunological discovery and immunotherapy. Previously, Stark was an American Cancer Society postdoc at Stanford University. She earned a BS in chemical and biomolecular engineering from Cornell University and a PhD in chemical and biological engineering at Northwestern University.

    Thomas John “T.J.” Wallin joined the Department of Materials Science and Engineering as an assistant professor in January. As a researcher, Wallin’s interests lie in advanced manufacturing of functional soft matter, with an emphasis on soft wearable technologies and their applications in human-computer interfaces. Previously, he was a research scientist at Meta’s Reality Labs Research working on their haptic interaction team. Wallin earned a BS in physics and chemistry from the College of William and Mary, and an MS and PhD in materials science and engineering from Cornell University.

    Gioele Zardini joined the Department of Civil and Environmental Engineering as an assistant professor in September. He will also join LIDS and the Institute for Data, Systems, and Society. Driven by societal challenges, Zardini’s research interests include the co-design of sociotechnical systems, compositionality in engineering, applied category theory, decision and control, optimization, and game theory, with society-critical applications to intelligent transportation systems, autonomy, and complex networks and infrastructures. He received his BS, MS, and PhD in mechanical engineering, with a focus on robotics, systems, and control, from ETH Zurich, and spent time at MIT, Stanford University, and Motional.

  •

    HPI-MIT design research collaboration creates powerful teams

    The recent ransomware attack on Change Healthcare, which severed the network connecting health care providers, pharmacies, and hospitals with health insurance companies, demonstrates just how disruptive supply chain attacks can be. In this case, it hindered the ability of those providing medical services to submit insurance claims and receive payments.

    This sort of attack and other forms of data theft are becoming increasingly common and often target large, multinational corporations through the small and mid-sized vendors in their corporate supply chains, opening cracks in these enormous systems of interwoven companies.

    Cybersecurity researchers at MIT and the Hasso Plattner Institute (HPI) in Potsdam, Germany, are focused on the different organizational security cultures that exist within large corporations and their vendors, because it’s that difference that creates vulnerabilities, often due to the lack of emphasis on cybersecurity by the senior leadership in these small to medium-sized enterprises (SMEs).

    Keri Pearlson, executive director of Cybersecurity at MIT Sloan (CAMS); Jillian Kwong, a research scientist at CAMS; and Christian Doerr, a professor of cybersecurity and enterprise security at HPI, are co-principal investigators (PIs) on the research project, “Culture and the Supply Chain: Transmitting Shared Values, Attitudes and Beliefs across Cybersecurity Supply Chains.”

    Their project was selected in the 2023 inaugural round of grants from the HPI-MIT Designing for Sustainability program, a multiyear partnership funded by HPI and administered by the MIT Morningside Academy for Design (MAD). The program awards about 10 grants annually of up to $200,000 each to multidisciplinary teams with divergent backgrounds in computer science, artificial intelligence, machine learning, engineering, design, architecture, the natural sciences, humanities, and business and management. The 2024 Call for Applications is open through June 3.

    Designing for Sustainability grants support scientific research that promotes the United Nations’ Sustainable Development Goals (SDGs) on topics involving sustainable design, innovation, and digital technologies, with teams made up of PIs from both institutions. The PIs on these projects, who have common interests but different strengths, create more powerful teams by working together.

    Transmitting shared values, attitudes, and beliefs to improve cybersecurity across supply chains

    The MIT and HPI cybersecurity researchers say that most ransomware attacks aren’t reported. Smaller companies hit with ransomware attacks just shut down, because they can’t afford the payment to retrieve their data. This makes it difficult to know just how many attacks and data breaches occur. “As more data and processes move online and into the cloud, it becomes even more important to focus on securing supply chains,” Kwong says. “Investing in cybersecurity allows information to be exchanged freely while keeping data safe. Without it, any progress towards sustainability is stalled.”

    One of the first large data breaches in the United States to be widely publicized provides a clear example of how an SME’s cybersecurity can leave a multinational corporation vulnerable to attack. In 2013, hackers entered the Target Corporation’s own network by obtaining the credentials of a small vendor in its supply chain: a Pennsylvania HVAC company.
    Through that breach, thieves were able to install malware that stole the financial and personal information of 110 million Target customers, which they sold to card shops on the black market.

    To prevent such attacks, SME vendors in a large corporation’s supply chain are required to agree to follow certain security measures, but the SMEs usually don’t have the expertise or training to make good on these cybersecurity promises, leaving their own systems, and therefore any connected to them, vulnerable to attack.

    “Right now, organizations are connected economically, but not aligned in terms of organizational culture, values, beliefs, and practices around cybersecurity,” explains Kwong. “Basically, the big companies are realizing the smaller ones are not able to implement all the cybersecurity requirements. We have seen some larger companies address this by reducing requirements or making the process shorter. However, this doesn’t mean companies are more secure; it just lowers the bar for the smaller suppliers to clear it.”

    Pearlson emphasizes the importance of board members and senior management taking responsibility for cybersecurity in order to change the culture at SMEs, rather than pushing that down to a single department, IT office, or in some cases, one IT employee.

    The research team is using case studies based on interviews, field studies, focus groups, and direct observation of people in their natural work environments to learn how companies engage with vendors, and the specific ways cybersecurity is implemented, or not, in everyday operations. The goal is to create a shared culture around cybersecurity that can be adopted correctly by all vendors in a supply chain.

    This approach is in line with the goals of the Charter of Trust Initiative, a partnership of large, multinational corporations formed to establish a better means of implementing cybersecurity in the supply chain network. The HPI-MIT team worked with companies from the Charter of Trust and others last year to understand the impacts of cybersecurity regulation on SME participation in supply chains and develop a conceptual framework to implement changes for stabilizing supply chains.

    Cybersecurity is a prerequisite for achieving any of the United Nations’ SDGs, explains Kwong. Without secure supply chains, access to key resources and institutions can be abruptly cut off. This could include food, clean water and sanitation, renewable energy, financial systems, health care, education, and resilient infrastructure. Securing supply chains helps enable progress on all SDGs, and the HPI-MIT project specifically supports SMEs, which are a pillar of the U.S. and European economies.

    Personalizing product designs while minimizing material waste

    In a vastly different Designing for Sustainability joint research project that employs AI with engineering, “Personalizing Product Designs While Minimizing Material Waste” will use AI design software to lay out multiple parts of a pattern on a sheet of plywood, acrylic, or other material, so that they can be laser cut to create new products in real time without wasting material.

    Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, and Patrick Baudisch, a professor of computer science and chair of the Human Computer Interaction Lab at HPI, are co-PIs on the project.
    The two have worked together for years; Baudisch was Mueller’s PhD research advisor at HPI.

    Baudisch’s lab developed an online design teaching system called Kyub that lets students design 3D objects in pieces that are laser cut from sheets of wood and assembled to become chairs, speaker boxes, radio-controlled aircraft, or even functional musical instruments. For instance, each leg of a chair would consist of four identical vertical pieces attached at the edges to create a hollow-centered column; four such columns provide stability to the chair, even though the material is very lightweight.

    “By designing and constructing such furniture, students learn not only design, but also structural engineering,” Baudisch says. “Similarly, by designing and constructing musical instruments, they learn about structural engineering, as well as resonance, types of musical tuning, etc.”

    Mueller was at HPI when Baudisch developed the Kyub software, allowing her to observe “how they were developing and making all the design decisions,” she says. “They built a really neat piece for people to quickly design these types of 3D objects.” However, using Kyub for material-efficient design is not fast; in order to fabricate a model, the software has to break the 3D models down into 2D parts and lay these out on sheets of material. This takes time, and makes it difficult to see the impact of design decisions on material use in real time.

    Mueller’s lab at MIT developed software based on a layout algorithm that uses AI to lay out pieces on sheets of material in real time. This allows the AI to explore multiple potential layouts while the user is still editing, and thus provide ongoing feedback. “As the user develops their design, Fabricaide decides good placements of parts onto the user’s available materials, provides warnings if the user does not have enough material for a design, and makes suggestions for how the user can resolve insufficient material cases,” according to the project website.

    The joint MIT-HPI project integrates Mueller’s AI software with Baudisch’s Kyub software and adds machine learning to train the AI to offer better design suggestions that save material while adhering to the user’s design intent.

    “The project is all about minimizing the waste on these materials sheets,” Mueller says. She already envisions the next step in this AI design process: determining how to integrate the laws of physics into the AI’s knowledge base to ensure the structural integrity and stability of the objects it designs.
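
    For intuition, the sketch below solves a toy version of the same layout problem with a greedy “shelf” heuristic; Fabricaide’s actual AI-driven layout algorithm is considerably more capable, and all names and dimensions here are hypothetical.

```python
# Toy "shelf" packer illustrating the kind of real-time 2D layout problem
# described above (this greedy heuristic is far simpler than the AI-based
# layout algorithm the project actually uses).
def pack_parts(parts, sheet_w, sheet_h):
    """Greedily place rectangular parts (w, h) on one sheet, row by row.

    Returns (placements, leftovers): placements maps part index to the
    (x, y) of its lower-left corner; leftovers need another sheet.
    """
    # placing tall parts first tends to waste less vertical space per row
    order = sorted(range(len(parts)), key=lambda i: -parts[i][1])
    placements, leftovers = {}, []
    x = y = row_h = 0
    for i in order:
        w, h = parts[i]
        if x + w > sheet_w:            # start a new row ("shelf")
            x, y, row_h = 0, y + row_h, 0
        if y + h > sheet_h or w > sheet_w:
            leftovers.append(i)        # warn the user: not enough material
            continue
        placements[i] = (x, y)
        x += w
        row_h = max(row_h, h)
    return placements, leftovers

parts = [(30, 40), (25, 10), (50, 20), (60, 35), (20, 20)]
placed, leftovers = pack_parts(parts, sheet_w=100, sheet_h=60)
print(placed, leftovers)
```
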
    AI-powered startup design for the Anthropocene: Providing guidance for novel enterprises

    Through her work with the teams of MITdesignX and its international programs, Svafa Grönfeldt, faculty director of MITdesignX and professor of the practice in MIT MAD, has helped scores of people in startup companies use the tools and methods of design to ensure that the solution a startup proposes actually fits the problem it seeks to solve. This is often called the problem-solution fit.

    Grönfeldt and MIT postdoc Norhan Bayomi are now extending this work to incorporate AI into the process, in collaboration with MIT Professor John Fernández and graduate student Tyler Kim. The HPI team includes Professor Gerard de Melo; HPI School of Entrepreneurship Director Frank Pawlitschek; and doctoral student Michael Mansfeld.

    “The startup ecosystem is characterized by uncertainty and volatility compounded by growing uncertainties in climate and planetary systems,” Grönfeldt says. “Therefore, there is an urgent need for a robust model that can objectively predict startup success and guide design for the Anthropocene.”

    While startup-success forecasting is gaining popularity, it currently focuses on aiding venture capitalists in selecting companies to fund, rather than guiding startups in the design of their products, services, and business plans.

    “The coupling of climate and environmental priorities with startup agendas requires deeper analytics for effective enterprise design,” Grönfeldt says. The project aims to explore whether AI-augmented decision-support systems can enhance startup-success forecasting.

    “We’re trying to develop a machine learning approach that will give a forecasting of probability of success based on a number of parameters, including the type of business model proposed, how the team came together, the team members’ backgrounds and skill sets, the market and industry sector they’re working in, and the problem-solution fit,” says Bayomi, who works with Fernández in the MIT Environmental Solutions Initiative. The two are co-founders of the startup Lamarr.AI, which employs robotics and AI to help reduce the carbon dioxide impact of the built environment.

    The team is studying “how company founders make decisions across four key areas, starting from the opportunity recognition, how they are selecting the team members, how they are selecting the business model, identifying the most automatic strategy, all the way through the product market fit to gain an understanding of the key governing parameters in each of these areas,” explains Bayomi.

    The team is “also developing a large language model that will guide the selection of the business model by using large datasets from different companies in Germany and the U.S. We train the model based on the specific industry sector, such as a technology solution or a data solution, to find what would be the most suitable business model that would increase the success probability of a company,” she says.

    The project falls under several of the United Nations’ Sustainable Development Goals, including economic growth, innovation and infrastructure, sustainable cities and communities, and climate action.

    Furthering the goals of the HPI-MIT Joint Research Program

    These three diverse projects all advance the mission of the HPI-MIT collaboration. MIT MAD aims to use design to transform learning, catalyze innovation, and empower society by inspiring people from all disciplines to interweave design into problem-solving. HPI uses digital engineering concentrated on the development and research of user-oriented innovations for all areas of life.

    Interdisciplinary teams with members from both institutions are encouraged to develop and submit proposals for ambitious, sustainable projects that use design strategically to generate measurable, impactful solutions to the world’s problems.

  •

    Machine learning facilitates “turbulence tracking” in fusion reactors

    Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.

    A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.

    Monitoring the formation and movements of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting from the reacting fuel, which ultimately determines the engineering requirements for the reactor walls to meet those flows. However, scientists typically study blobs using averaging techniques, which trade details of individual structures in favor of aggregate statistics. To track individual blobs, researchers must mark them manually in video data. 

    The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.
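
    The story does not name the four models, but one plausible off-the-shelf choice for this kind of detect-and-segment task is Mask R-CNN; the sketch below shows the standard torchvision recipe for adapting the pretrained model to a single “blob” class, as an illustration rather than the paper’s setup.

```python
# Illustrative adaptation of a pretrained Mask R-CNN to one foreground
# class ("blob"); this is the standard torchvision fine-tuning recipe,
# not necessarily one of the four models the researchers used.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# re-head the box and mask predictors for 2 classes: background and blob
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes=2)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes=2)

# training then follows the usual torchvision detection loop:
# model(images, targets) returns a dict of losses when targets are given
model.train()
```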

    When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.

    Because millions of video frames are captured during just one fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

    “Before, we could get a macroscopic picture of what these structures are doing on average. Now, we have a microscope and the computational power to analyze one event at a time. If we take a step back, what this reveals is the power available from these machine-learning techniques, and ways to use these computational resources to make progress,” says Theodore Golfinopoulos, a research scientist at the MIT Plasma Science and Fusion Center and co-author of a paper detailing these approaches.

    His fellow co-authors include lead author Woonghee “Harry” Han, a physics PhD candidate; senior author Iddo Drori, a visiting professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL), faculty associate professor at Boston University, and adjunct at Columbia University; as well as others from the MIT Plasma Science and Fusion Center, the MIT Department of Civil and Environmental Engineering, and the Swiss Federal Institute of Technology at Lausanne in Switzerland. The research appears today in Nature Scientific Reports.

    Heating things up

    For more than 70 years, scientists have sought to use controlled thermonuclear fusion reactions to develop an energy source. To reach the conditions necessary for a fusion reaction, fuel must be heated to temperatures above 100 million degrees Celsius. (The core of the sun is about 15 million degrees Celsius.)

    A common method for containing this super-hot fuel, called plasma, is to use a tokamak. These devices utilize extremely powerful magnetic fields to hold the plasma in place and control the interaction between the exhaust heat from the plasma and the reactor walls.

    However, blobs appear like filaments falling out of the plasma at the very edge, between the plasma and the reactor walls. These random, turbulent structures affect how energy flows between the plasma and the reactor.

    “Knowing what the blobs are doing strongly constrains the engineering performance that your tokamak power plant needs at the edge,” adds Golfinopoulos.

    Researchers use a unique imaging technique to capture video of the plasma’s turbulent edge during experiments. An experimental campaign may last months; a typical day will produce about 30 seconds of data, corresponding to roughly 60 million video frames, with thousands of blobs appearing each second. This makes it impossible to track all blobs manually, so researchers rely on average sampling techniques that only provide broad characteristics of blob size, speed, and frequency.

    “On the other hand, machine learning provides a solution to this by blob-by-blob tracking for every frame, not just average quantities. This gives us much more knowledge about what is happening at the boundary of the plasma,” Han says.

    He and his co-authors took four well-established computer vision models, which are commonly used for applications like autonomous driving, and trained them to tackle this problem.

    Simulating blobs

    To train these models, they created a vast dataset of synthetic video clips that captured the blobs’ random and unpredictable nature.

    “Sometimes they change direction or speed, sometimes multiple blobs merge, or they split apart. These kinds of events were not considered before with traditional approaches, but we could freely simulate those behaviors in the synthetic data,” Han says.

    Creating synthetic data also allowed them to label each blob, which made the training process more effective, Drori adds.
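
    A minimal sketch of why synthetic data yields labels for free (with illustrative assumptions; the team’s simulations are more sophisticated): every frame is rendered from known blob positions, so the ground truth is recorded as a byproduct of generation.

```python
# Illustrative generator of labeled synthetic "blob" clips: 2D Gaussian
# intensity patches with jittery drift. The paper's synthetic dataset is
# richer, but the labels-for-free property shown here is the same.
import numpy as np

def make_clip(n_frames=16, size=64, n_blobs=3, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(10, size - 10, (n_blobs, 2))   # blob centers
    vel = rng.normal(0, 1.0, (n_blobs, 2))           # per-blob drift
    sigma = rng.uniform(2.0, 4.0, n_blobs)           # blob widths
    yy, xx = np.mgrid[0:size, 0:size]
    frames, labels = [], []
    for _ in range(n_frames):
        img = np.zeros((size, size))
        for p, s in zip(pos, sigma):
            img += np.exp(-((xx - p[0])**2 + (yy - p[1])**2) / (2 * s**2))
        frames.append(img)
        labels.append(pos.copy())                    # ground truth, free
        pos += vel + rng.normal(0, 0.3, vel.shape)   # jittery motion
    return np.stack(frames), labels

frames, labels = make_clip()
print(frames.shape, labels[0])   # (16, 64, 64) and centers of frame 0
```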

    Using these synthetic data, they trained the models to draw boundaries around blobs, teaching them to closely mimic what a human scientist would draw.

    Then they tested the models using real video data from experiments. First, they measured how closely the boundaries the models drew matched up with actual blob contours.

    But they also wanted to see if the models predicted objects that humans would identify. They asked three human experts to pinpoint the centers of blobs in video frames and checked to see if the models predicted blobs in those same locations.

    The models were able to draw accurate blob boundaries, overlapping with brightness contours, which are considered ground truth, about 80 percent of the time. The models’ predictions were also consistent with the blob centers identified by the human experts, and they successfully predicted the theory-defined regime of the blobs, in agreement with the results from a traditional method.
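
    An overlap score of this kind can be computed as intersection over union (IoU) between the predicted mask and a mask derived from the brightness contours; a minimal sketch follows, with the caveat that the paper’s exact metric may differ.

```python
# Sketch of a mask-overlap score like the one described above:
# intersection over union (IoU) between a predicted blob mask and a
# ground-truth mask from brightness contours (the paper's exact metric
# may differ).
import numpy as np

def iou(pred_mask, true_mask):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return np.logical_and(pred, true).sum() / union

# ground truth from a brightness threshold, prediction from a model
frame = np.random.default_rng(0).random((64, 64))
true_mask = frame > 0.8
pred_mask = np.roll(true_mask, shift=1, axis=0)  # slightly offset "prediction"
print(f"IoU = {iou(pred_mask, true_mask):.2f}")
```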

    Now that they have shown the success of using synthetic data and computer vision models for tracking blobs, the researchers plan to apply these techniques to other problems in fusion research, such as estimating particle transport at the boundary of a plasma, Han says.

    They also made the dataset and models publicly available, and look forward to seeing how other research groups apply these tools to study the dynamics of blobs, says Drori.

    “Prior to this, there was a barrier to entry that mostly the only people working on this problem were plasma physicists, who had the datasets and were using their methods. There is a huge machine-learning and computer-vision community. One goal of this work is to encourage participation in fusion research from the broader machine-learning community toward the broader goal of helping solve the critical problem of climate change,” he adds.

    This research is supported, in part, by the U.S. Department of Energy and the Swiss National Science Foundation.

  •

    Scientists chart how exercise affects the body

    Exercise is well-known to help people lose weight and avoid gaining it. However, identifying the cellular mechanisms that underlie this process has proven difficult because so many cells and tissues are involved.

    In a new study in mice that expands researchers’ understanding of how exercise and diet affect the body, MIT and Harvard Medical School researchers have mapped out many of the cells, genes, and cellular pathways that are modified by exercise or high-fat diet. The findings could offer potential targets for drugs that could help to enhance or mimic the benefits of exercise, the researchers say.

    “It is extremely important to understand the molecular mechanisms that are drivers of the beneficial effects of exercise and the detrimental effects of a high-fat diet, so that we can understand how we can intervene, and develop drugs that mimic the impact of exercise across multiple tissues,” says Manolis Kellis, a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard.

    The researchers studied mice that were fed either a high-fat or a normal diet, and that were either sedentary or given the opportunity to exercise whenever they wanted. Using single-cell RNA sequencing, the researchers cataloged the responses of 53 types of cells found in skeletal muscle and two types of fatty tissue.

    “One of the general points that we found in our study, which is overwhelmingly clear, is how high-fat diets push all of these cells and systems in one way, and exercise seems to be pushing them nearly all in the opposite way,” Kellis says. “It says that exercise can really have a major effect throughout the body.”

    Kellis and Laurie Goodyear, a professor of medicine at Harvard Medical School and senior investigator at the Joslin Diabetes Center, are the senior authors of the study, which appears today in the journal Cell Metabolism. Jiekun Yang, a research scientist in MIT CSAIL; Maria Vamvini, an instructor of medicine at the Joslin Diabetes Center; and Pasquale Nigro, an instructor of medicine at the Joslin Diabetes Center, are the lead authors of the paper.

    The risks of obesity

    Obesity is a growing health problem around the world. In the United States, more than 40 percent of the population is considered obese, and nearly 75 percent is overweight. Being overweight is a risk factor for many diseases, including heart disease, cancer, Alzheimer’s disease, and even infectious diseases such as Covid-19.

    “Obesity, along with aging, is a global factor that contributes to every aspect of human health,” Kellis says.

    Several years ago, his lab performed a study on the FTO gene region, which has been strongly linked to obesity risk. In that 2015 study, the research team found that genes in this region control a pathway that prompts immature fat cells called progenitor adipocytes to either become fat-burning cells or fat-storing cells.

    That finding, which demonstrated a clear genetic component to obesity, motivated Kellis to begin looking at how exercise, a well-known behavioral intervention that can prevent obesity, might act on progenitor adipocytes at the cellular level.

    To explore that question, Kellis and his colleagues decided to perform single-cell RNA sequencing of three types of tissue — skeletal muscle, visceral white adipose tissue (found packed around internal organs, where it stores fat), and subcutaneous white adipose tissue (which is found under the skin and primarily burns fat).

    These tissues came from mice from four different experimental groups. For three weeks, two groups of mice were fed either a normal diet or a high-fat diet. For the next three weeks, each of those two groups was further divided into a sedentary group and an exercise group, which had continuous access to a treadmill.

    By analyzing tissues from those mice, the researchers were able to comprehensively catalog the genes that were activated or suppressed by exercise in 53 different cell types.
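
    As a rough illustration of the kind of query such a catalog supports, the sketch below uses the scanpy toolkit (an assumption for illustration; the study’s own pipeline is not described here) with a hypothetical annotated dataset to rank genes shifted by exercise within a single cell type.

```python
# Hypothetical sketch of a per-cell-type differential-expression query on
# a single-cell atlas like the one described above, using the scanpy
# toolkit for illustration; file name and annotation labels are assumed.
import scanpy as sc

adata = sc.read_h5ad("muscle_atlas.h5ad")   # assumed file: cells x genes,
                                            # with .obs["cell_type"] and
                                            # .obs["condition"] annotations
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# genes activated or suppressed by exercise within one cell type
msc = adata[adata.obs["cell_type"] == "MSC"].copy()
sc.tl.rank_genes_groups(msc, "condition", groups=["exercise"],
                        reference="sedentary", method="wilcoxon")
print(sc.get.rank_genes_groups_df(msc, group="exercise").head())
```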

    The researchers found that in all three tissue types, mesenchymal stem cells (MSCs) appeared to control many of the diet- and exercise-induced effects that they observed. MSCs are stem cells that can differentiate into other cell types, including fat cells and fibroblasts. In adipose tissue, the researchers found that a high-fat diet modulated MSCs’ capacity to differentiate into fat-storing cells, while exercise reversed this effect.

    In addition to promoting fat storage, the researchers found that a high-fat diet also stimulated MSCs to secrete factors that remodel the extracellular matrix (ECM) — a network of proteins and other molecules that surround and support cells and tissues in the body. This ECM remodeling helps provide structure for enlarged fat-storing cells and also creates a more inflammatory environment.

    “As the adipocytes become overloaded with lipids, there’s an extreme amount of stress, and that causes low-grade inflammation, which is systemic and preserved for a long time,” Kellis says. “That is one of the factors that is contributing to many of the adverse effects of obesity.”

    Circadian effects

    The researchers also found that high-fat diets and exercise had opposing effects on cellular pathways that control circadian rhythms — the 24-hour cycles that govern many functions, from sleep to body temperature, hormone release, and digestion. The study revealed that exercise boosts the expression of genes that regulate these rhythms, while a high-fat diet suppresses them.

    “There have been a lot of studies showing that when you eat during the day is extremely important in how you absorb the calories,” Kellis says. “The circadian rhythm connection is a very important one, and shows how obesity and exercise are in fact directly impacting that circadian rhythm in peripheral organs, which could act systemically on distal clocks and regulate stem cell functions and immunity.”

    The researchers then compared their results to a database of human genes that have been linked with metabolic traits. They found that two of the circadian rhythm genes they identified in this study, known as DBP and CDKN1A, have genetic variants that have been associated with a higher risk of obesity in humans.

    “These results help us see the translational values of these targets, and how we could potentially target specific biological processes in specific cell types,” Yang says.

    The researchers are now analyzing samples of small intestine, liver, and brain tissue from the mice in this study, to explore the effects of exercise and high-fat diets on those tissues. They are also conducting work with human volunteers to sample blood and biopsies and study similarities and differences between human and mouse physiology. They hope that their findings will help guide drug developers in designing drugs that might mimic some of the beneficial effects of exercise.

    “The message for everyone should be, eat a healthy diet and exercise if possible,” Kellis says. “For those for whom this is not possible, due to low access to healthy foods, or due to disabilities or other factors that prevent exercise, or simply lack of time to have a healthy diet or a healthy lifestyle, what this study says is that we now have a better handle on the pathways, the specific genes, and the specific molecular and cellular processes that we should be manipulating therapeutically.”

    The research was funded by the National Institutes of Health and the Novo Nordisk Research Center in Seattle.

  •

    Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset as open source to the computing community.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill this space.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers is changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and the International Conference for High Performance Computing, Networking, Storage and Analysis.

    Workload classification

    Among the world’s TOP500 supercomputers, TX-GAIA combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   
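
    As a rough illustration of that kind of characterization, the sketch below reduces per-job GPU telemetry to a few summary features that might separate, say, language-model jobs from image-classification jobs. The file name and column names are hypothetical stand-ins; the released dataset's actual schema may differ.

        # Hypothetical per-job GPU telemetry sampled at 100-ms intervals;
        # the file name and column names are illustrative, not the dataset's
        # actual schema.
        import pandas as pd

        df = pd.read_csv("gpu_timeseries.csv", parse_dates=["timestamp"])

        # Summarize each job's utilization and memory traces into features.
        util = df.groupby("job_id")["gpu_utilization"].agg(["mean", "std", "max"])
        mem = df.groupby("job_id")["memory_used_mb"].agg(["mean", "max"])
        features = util.join(mem, lsuffix="_util", rsuffix="_mem")
        print(features.head())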

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify, with 95 percent accuracy, the type of job that was run, using the LLSC’s labeled time-series data as ground truth.

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.
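
    A toy version of the challenge task might look like the following, with synthetic features standing in for the labeled ground truth; the 95 percent figure is the challenge target, not something this example establishes.

        # Toy workload classifier in the spirit of the Datacenter Challenge.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # 3,000 synthetic jobs of three workload types, each described by
        # three features (e.g., mean/std/max GPU utilization).
        X = rng.normal(size=(3000, 3)) + np.repeat(
            [[0, 0, 0], [2, 1, 0], [0, 2, 2]], 1000, axis=0)
        y = np.repeat(["nlp", "vision", "materials"], 1000)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0, stratify=y)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.1%}")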

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”
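
    For a flavor of how Bayesian optimization can drive such hardware matching, here is a minimal sketch using scikit-optimize. The device pool, parallelism knob, and cost model are invented for illustration and are not the team's actual tooling; in practice the objective would come from short profiling runs of the user's job.

        # Hardware selection via Bayesian optimization (illustrative only).
        from skopt import gp_minimize
        from skopt.space import Categorical, Integer

        space = [
            Categorical(["cpu-only", "v100", "a100"], name="device"),  # hypothetical pool
            Integer(1, 8, name="num_workers"),                         # hypothetical knob
        ]

        def cost(params):
            device, num_workers = params
            # A fake energy-delay product stands in for a real measurement.
            runtime = {"cpu-only": 100.0, "v100": 30.0, "a100": 20.0}[device] / num_workers
            energy = {"cpu-only": 1.0, "v100": 3.0, "a100": 4.0}[device] * num_workers
            return runtime * energy  # lower is better

        result = gp_minimize(cost, space, n_calls=15, random_state=0)
        print("best configuration:", result.x, "objective:", result.fun)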

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, and create job-scheduling approaches that improve data center cooling efficiency.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
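
    For a sense of how such a cap is applied in practice, here is a minimal sketch using NVIDIA's management library through the pynvml bindings. The 80 percent fraction is illustrative (the 20 percent savings figure comes from the LLSC study, not from this snippet), and setting a limit typically requires administrator privileges.

        # Capping a GPU's power draw via NVML (illustrative; needs root).
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU on the node

        # Hardware-supported limit range, in milliwatts.
        min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

        # Cap at 80 percent of maximum draw (an illustrative fraction),
        # clamped to the supported minimum.
        cap_mw = max(min_mw, int(0.8 * max_mw))
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, cap_mw)

        print(f"power limit set to {cap_mw / 1000:.0f} W "
              f"(supported range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")
        pynvml.nvmlShutdown()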

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.
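
    A minimal stand-in for such accuracy prediction is a plateau check on the validation curve, sketched below; the window and threshold are illustrative assumptions, not the LLSC's actual predictor.

        # Terminate a run early once validation accuracy stops improving.
        def should_terminate(val_accuracies, window=5, min_gain=0.001):
            """Stop if accuracy gained less than `min_gain` over the
            last `window` epochs."""
            if len(val_accuracies) < window + 1:
                return False
            return val_accuracies[-1] - val_accuracies[-1 - window] < min_gain

        # A run whose validation accuracy has plateaued:
        history = [0.61, 0.70, 0.74, 0.758, 0.760,
                   0.760, 0.760, 0.760, 0.760, 0.760]
        print(should_terminate(history))  # True -- stop and save the energy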

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • in

    New program bolsters innovation in next-generation artificial intelligence hardware

    The MIT AI Hardware Program is a new collaboration between academia and industry aimed at defining and developing translational technologies in hardware and software for the AI and quantum age. A joint effort of the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary program aims to develop technologies that deliver more energy-efficient systems for cloud and edge computing.

    “A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

    Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing.

    “As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

    The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem, both nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

    The five inaugural members of the MIT AI Hardware Program are:  

    Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV, and Astro; 
    Analog Devices, a global leader in the design and manufacturing of analog, mixed signal, and DSP integrated circuits; 
    ASML, an innovation leader in the semiconductor industry, providing chipmakers with hardware, software, and services to mass produce patterns on silicon through lithography; 
    NTT Research, a subsidiary of NTT that conducts fundamental research to upgrade reality in game-changing ways that improve lives and brighten our global future; and 
    TSMC, the world’s leading dedicated semiconductor foundry.

    The MIT AI Hardware Program will create a roadmap of transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.  

    “We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

    The program will prioritize the following topics:

    analog neural networks;
    new roadmap CMOS designs;
    heterogeneous integration for AI systems;
    monolithic-3D AI systems;
    analog nonvolatile memory devices;
    software-hardware co-design;
    intelligence at the edge;
    intelligent sensors;
    energy-efficient AI;
    intelligent internet of things (IIoT);
    neuromorphic computing;
    AI edge security;
    quantum AI;
    wireless technologies;
    hybrid-cloud computing; and
    high-performance computation.

    “We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions — solutions that we are proud to give to the world and generations to come,” says Aude Oliva, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement in the MIT Schwarzman College of Computing.

    The new program is co-led by Jesús del Alamo and Aude Oliva, and Anantha Chandrakasan serves as chair.