More stories

  • Jackson Jewett wants to design buildings that use less concrete

    After three years leading biking tours through U.S. National Parks, Jackson Jewett decided it was time for a change.

    “It was a lot of fun, but I realized I missed buildings,” says Jewett. “I really wanted to be a part of that industry, learn more about it, and reconnect with my roots in the built environment.”

    Jewett grew up in California in what he describes as a “very creative household.”

    “I remember making very elaborate Halloween costumes with my parents, making fun dioramas for school projects, and building forts in the backyard, that kind of thing,” Jewett explains.

    Both of his parents have backgrounds in design; his mother studied art in college and his father is a practicing architect. From a young age, Jewett was interested in following in his father’s footsteps. But when he arrived at the University of California at Berkeley in the midst of the 2009 housing crash, it didn’t seem like the right time. Jewett graduated with a degree in cognitive science and a minor in history of architecture. And even as he led tours through Yellowstone, the Grand Canyon, and other parks, buildings were in the back of his mind.

    It wasn’t just the built environment that Jewett was missing. He also longed for the rigor and structure of an academic environment.

    Jewett arrived at MIT in 2017, initially only planning on completing the master’s program in civil and environmental engineering. It was then that he first met Josephine Carstensen, a newly hired lecturer in the department. Jewett was interested in Carstensen’s work on “topology optimization,” which uses algorithms to design structures that can achieve their performance requirements while using only a limited amount of material. He was particularly interested in applying this approach to concrete design, and he collaborated with Carstensen to help demonstrate its viability.
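
    For readers who want the formal picture, a standard density-based statement of topology optimization (a general textbook formulation, not necessarily the exact one used in Carstensen’s lab) minimizes a structure’s compliance, a measure of its flexibility under load, while capping how much material the design may use:

```latex
% Choose element densities rho_e to minimize compliance c, subject to
% static equilibrium and a material budget (fraction f of the domain V_0).
\begin{aligned}
\min_{\rho}\quad & c(\rho) = \mathbf{F}^{\mathsf{T}} \mathbf{u}(\rho) \\
\text{s.t.}\quad & \mathbf{K}(\rho)\,\mathbf{u}(\rho) = \mathbf{F} \\
                 & \sum_{e} \rho_e \, v_e \le f \, V_0, \qquad 0 \le \rho_e \le 1 .
\end{aligned}
```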

    After earning his master’s, Jewett spent a year and a half as a structural engineer in New York City. But when Carstensen was hired as a professor, she reached out to Jewett about joining her lab as a PhD student. He was ready for another change.

    Now in the third year of his PhD program, Jewett is building on his master’s thesis to further refine algorithms that can design building-scale concrete structures using less material, which would help lower carbon emissions from the construction industry. The concrete industry alone is estimated to be responsible for 8 percent of global carbon emissions, so any effort to reduce that number could help in the fight against climate change.

    Implementing new ideas

    Topology optimization is a small field, with the bulk of the prior work being computational without any experimental verification. The work Jewett completed for his master’s thesis was just the start of a long learning process.

    “I do feel like I’m just getting to the part where I can start implementing my own ideas without as much support as I’ve needed in the past,” says Jewett. “In the last couple of months, I’ve been working on a reinforced concrete optimization algorithm that I hope will be the cornerstone of my thesis.”

    The process of fine-tuning a generative algorithm is slow going, particularly when tackling a multifaceted problem.

    “It can take days or usually weeks to take a step toward making it work as an entire integrated system,” says Jewett. “The days when that breakthrough happens and I can see the algorithm converging on a solution that makes sense — those are really exciting moments.”

    By harnessing computational power, Jewett is searching for materially efficient components that can be used to make up structures such as bridges or buildings. There are other constraints to consider as well, particularly ensuring that the cost of manufacturing isn’t too high. Having worked in the industry before starting the PhD program, Jewett has an eye toward doing work that can be feasibly implemented.

    Inspiring others

    When Jewett first visited the MIT campus, he was drawn in by the collaborative environment of the Institute and the students’ drive to learn. Now, he’s a part of that process as a teaching assistant and a supervisor in the Undergraduate Research Opportunities Program.

    Working as a teaching assistant isn’t a requirement for Jewett’s program, but it’s been one of his favorite parts of his time at MIT.

    “The MIT undergrads are so gifted and just constantly impress me,” says Jewett. “Being able to teach, especially in the context of what MIT values, is a lot of fun. And I learn, too. My coding practices have gotten so much better since working with undergrads here.”

    Jewett’s experiences have inspired him to pursue a career in academia after his PhD, which he expects to complete in the spring of 2025. But he’s making sure to take care of himself along the way. He still finds time to plan cycling trips with his friends and has gotten into running since moving to Boston. So far, he’s completed two marathons.

    “It’s so inspiring to be in a place where so many good ideas are just bouncing back and forth all over campus,” says Jewett. “And on most days, I remember that and it inspires me. But it’s also the case that academics is hard, PhD programs are hard, and MIT — there’s pressure being here, and sometimes that pressure can feel like it’s working against you.”

    Jewett is grateful for the mental health resources that MIT provides students. While he says they can be imperfect, they’ve been a crucial part of his journey.

    “My PhD thesis will be done in 2025, but the work won’t be done. The time horizon of when these things need to be implemented is relatively short if we want to make an impact before global temperatures have already risen too high. My PhD research will be developing a framework for how that could be done with concrete construction, but I’d like to keep thinking about other materials and construction methods even after this project is finished.”

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the principal investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from the College of William and Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plans to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women.

    The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016 and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.
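
    To give a flavor of the kind of access MDSplus already provides, the sketch below reads one signal using the open-source MDSplus Python bindings. The server address, tree name, shot number, and node path are placeholders invented for illustration, not real endpoints.

```python
# Minimal sketch: pull one signal from an MDSplus data server.
# Host, tree, shot, and node below are placeholders, not real endpoints.
from MDSplus import Connection

conn = Connection("mdsplus.example.org")  # hypothetical data server
conn.openTree("cmod", 1160930033)         # hypothetical tree name and shot
signal = conn.get(r"\ip")                 # hypothetical node: plasma current
print(signal.data()[:10])                 # stored samples as a NumPy array
```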

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school, to be held at William and Mary for the next three years, that will focus on topics at the intersection of fusion and machine learning.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

    “Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects that will receive support from the initiative. The research projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The MIT and Accenture Convergence Initiative charter project researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing algorithms such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess skill training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.
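
    To make the matching idea concrete, the sketch below pairs toy resume snippets with toy job postings by text similarity. A simple TF-IDF representation stands in for the large-language-model embeddings the project envisions, and every string is invented.

```python
# Toy applicant-to-job matching by cosine similarity over TF-IDF vectors.
# TF-IDF is a simple stand-in for learned embeddings from models like GPT-4.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jobs = [
    "Data analyst: SQL, dashboards, statistical reporting",
    "Machine operator: CNC equipment, safety procedures, shift work",
    "Customer support lead: communication, scheduling, CRM tools",
]
applicants = [
    "Five years operating CNC mills; certified in workplace safety",
    "Built SQL reports and Tableau dashboards for a retail chain",
]

vec = TfidfVectorizer().fit(jobs + applicants)
scores = cosine_similarity(vec.transform(applicants), vec.transform(jobs))

for applicant, row in zip(applicants, scores):
    best = row.argmax()
    print(f"{applicant!r} -> {jobs[best]!r} (score {row[best]:.2f})")
```

    An embedding model would capture synonyms and paraphrase ("milling machine" vs. "CNC mill") that literal word overlap misses, which is one reason the project looks to modern language models.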

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while at port in Seattle before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also raises questions about how its climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
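
    That recording cadence amounts to a simple control loop. The sketch below mimics the schedule in Python; the recorder interface is hypothetical (the actual CRISP control software is not described here), with a stub class standing in for the camera hardware.

```python
# Sketch of the autonomous mode's cadence: 15 minutes of video every three
# hours, plus a still image every 15 seconds. The recorder API is invented.
import time

VIDEO_INTERVAL_S = 3 * 60 * 60  # start a new clip every 3 hours
VIDEO_DURATION_S = 15 * 60      # each clip runs 15 minutes
STILL_INTERVAL_S = 15           # one still frame every 15 seconds

class LogRecorder:
    """Stub that logs actions instead of driving real camera hardware."""
    def capture_still(self, t): print(f"[{t:8.1f}s] still image")
    def start_video(self, t):   print(f"[{t:8.1f}s] video start")
    def stop_video(self, t):    print(f"[{t:8.1f}s] video stop")

def run_surveillance(recorder, runtime_s, tick_s=1.0):
    start = time.monotonic()
    last_still = -STILL_INTERVAL_S  # fire both captures immediately
    last_video = -VIDEO_INTERVAL_S
    video_end = None
    while (now := time.monotonic() - start) < runtime_s:
        if now - last_still >= STILL_INTERVAL_S:
            recorder.capture_still(now)
            last_still = now
        if now - last_video >= VIDEO_INTERVAL_S:
            recorder.start_video(now)
            last_video = now
            video_end = now + VIDEO_DURATION_S
        if video_end is not None and now >= video_end:
            recorder.stop_video(now)
            video_end = None
        time.sleep(tick_s)

run_surveillance(LogRecorder(), runtime_s=60)  # demo: one real minute
```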

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality,” says Michael Emily, the project’s IT systems administrator who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    Beyond the dataset itself, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development include classifiers for specific objects in the scene and the ability to identify and track objects across images.

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

  • Q&A: Gabriela Sá Pessoa on Brazilian politics, human rights in the Amazon, and AI

    Gabriela Sá Pessoa is a journalist passionate about the intersection of human rights and climate change. She came to MIT from The Washington Post, where she worked from her home country of Brazil as a news researcher reporting on the Amazon, human rights violations, and environmental crimes. Before that, she held roles at two of the most influential media outlets in Brazil: Folha de S.Paulo, covering local and national politics, and UOL, where she was assigned to coronavirus coverage and later joined the investigative desk.

    Sá Pessoa was awarded the 2023 Elizabeth Neuffer Fellowship by the International Women’s Media Foundation, which supports its recipient with research opportunities at MIT and further training at The Boston Globe and The New York Times. She is currently based at the MIT Center for International Studies. Recently, she sat down to talk about her work on the Amazon, recent changes in Brazilian politics, and her experience at MIT.

    Q: One focus of your reporting is human rights and environmental issues in the Amazon. As part of your fellowship, you contributed to a recent editorial in The Boston Globe on fighting deforestation in the region. Why is reporting on this topic important?

    A: For many Brazilians, the Amazon is a remote and distant territory, and people living in other parts of the country aren’t fully aware of all of its problems and all of its potential. It is similar in the United States: many people here don’t see how they could be connected to the human rights violations and the destruction of the rainforest that are happening.

    But we are all complicit in the destruction in some way, because the economic forces driving the deforestation of the rainforest all have a market, and these markets are everywhere, in Brazil and here in the U.S. I think it is part of journalism’s job to show people in the U.S., Brazil, and elsewhere that we are part of the problem, and as part of the problem, we should be part of the solution by being aware of it, caring about it, and taking actions that are within our power.

    In the U.S., for example, voters can influence policy like the current negotiations for financial support for fighting deforestation in the Amazon. And as consumers, we can be more aware — is the beef we are consuming related to deforestation? Is the timber on our construction sites coming from the Amazon?

    Truth is, in Brazil, we have turned our backs on the Amazon for too long. It’s our duty to protect it for the sake of the climate. If we don’t take care of it, there will be serious consequences for our local climate, our local communities, and for the whole world. It’s a huge matter of human rights, because our lives depend on it, both locally and globally.

    Q: Before coming to MIT, you were at The Washington Post in São Paulo, where you contributed to reporting on the recent presidential election. What changes do you expect to see with the new Lula administration?

    A: On climate and the environment, the first signs were positive. But the optimism did not last six months, as politics imposed itself. Lula is facing increasing difficulty building a majority in a conservative Congress, over which agribusiness holds tremendous power and influence. As we speak, environmental policy is under attack in Congress. A committee in the House has just passed a ruling stripping power from the environment minister, Marina Silva, and from the recently created Ministry of Indigenous Peoples, led by Sonia Guajajara. Both Marina and Sonia are global ecological and human rights champions, and I wonder what the impact will be if Congress ratifies these changes. It is still unclear how that would affect efforts to fight deforestation.

    In addition, there is an internal dispute in the government between environmentalists and those in favor of mining and big infrastructure projects. Petrobras, the state-run oil company, is trying to get authorization to explore and drill offshore oil reserves at the mouth of the Amazon River. The federal environmental protection agency issued a report suspending the operation, saying it threatens the region’s sensitive environment and indigenous communities. And, of course, it would be another source of greenhouse gas emissions.

    That said, it’s not a denialist government. I should mention the quick response from the administration to the Yanomami genocide earlier this year. In January, an independent media organization named Sumaúma reported on the deaths of over five hundred indigenous children from the Yanomami community in the Amazon over the past four years. This was a huge shock in Brazil, and the administration responded immediately. They sent task forces to the region and are now expelling the illegal miners that were bringing diseases and were ultimately responsible for these humanitarian tragedies. To be clear: It is still a problem. It’s not solved. But this is already a good example of positive action.

    Fighting deforestation in the Amazon and the Cerrado, another biome critical to climate regulation in Brazil, will not be easy. Rebuilding environmental policy will take time, and the agencies responsible for enforcement are understaffed. In addition, environmental crime has become more sophisticated, connecting with other major criminal organizations in the country. In April, for the first time, there was a reduction in deforestation in the Amazon after two consecutive months of increases. The data are still preliminary, and it is too early to confirm whether they signal a turning point toward declining deforestation. On the other hand, the Cerrado registered record deforestation in April.

    There are problems everywhere in the economy and politics that Lula will have to face. In the first week of the new term, on Jan. 8, we saw an insurrection in Brasília, the country’s capital, from Bolsonaro voters who wouldn’t accept the election results. The events resembled what Americans saw in the Capitol attacks in 2021. We also seem to have imported problems from the United States, like mass killings in schools. We never used to have them in Brazil, but we are seeing them now. I’m curious to see how the country will address those problems and if the U.S. can also inspire solutions to that. That’s something I’m thinking about, being here: Are there solutions here? What are they?

    Q: What have you learned so far from MIT and your fellowship?

    A: It’s hard to put everything into words! I’m mostly taking courses and attending lectures on pressing issues facing humanity: existential threats such as climate change, artificial intelligence, biosecurity, and more.

    I’m learning about all these issues, but also, as a journalist, I think I’m learning more about how I can incorporate the scientific approach into my work; for example, by being more solution-oriented. I am already a rigorous journalist, but I am thinking about how I can be more rigorous and more transparent about my methods. Being in the academic and scientific environment is inspiring in that way.

    I am also learning a lot about how to cover scientific topics and thinking about how technology can offer us solutions (and problems). I’m learning so much that I think I will need some time to digest and fully understand what this period means for me!

    Q: You mentioned artificial intelligence. Would you like to weigh in on this subject and what you have been learning?

    A: It has been a particularly good semester to be at MIT. Generative artificial intelligence, which became more popular after ChatGPT, has been a topic of intense discussion this semester, and I was able to attend many classes, seminars, and events about AI here, especially from a policy perspective.

    Algorithms have influenced the economy, society, and public health for many years, with great outcomes but also injustices. Popular systems like ChatGPT have made this technology incredibly popular and accessible, even for those with no computer knowledge. This is scary and, at the same time, very exciting. Here, I learned that we need guardrails for artificial intelligence, just like for other technologies. Think of the pharmaceutical or automobile industries, which have to meet safety criteria before putting a new product on the market. But with artificial intelligence, it’s going to be different; supply chains are very complex and sometimes not very transparent, and the speed at which new capabilities develop is so fast that it challenges the policymaker’s ability to respond.

    Artificial intelligence is changing the world radically. It’s exciting to have the privilege of being here and seeing these discussions take place. After all, I have a future to report on. At least, I hope so!

    Q: What are you working on going forward?

    A: After MIT, I am going to New York, where I’ll be working with The New York Times in their internship program. I’m really excited about that because it will be a different pace from MIT. I am also doing research on carbon credit markets and hope to continue that project, either in a reporting or academic environment. 

    Honestly, I feel inspired to keep studying. I would love to spend more time here at MIT. I would love to do a master’s or join any program here. I’m going to work on coming back to academia because I think that I need to learn more from the academic environment. I hope that it’s at MIT because honestly, it’s the most exciting environment that I’ve ever been in, with all the people here from different fields and different backgrounds. I’m not a scientist, but it’s inspiring to be with them, and if there’s a way that I could contribute to their work in a way that they’re contributing to my work, I’ll be thrilled to spend more time here.

  • Inaugural J-WAFS Grand Challenge aims to develop enhanced crop variants and move them from lab to land

    According to MIT’s charter, established in 1861, part of the Institute’s mission is to advance the “development and practical application of science in connection with arts, agriculture, manufactures, and commerce.” Today, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) is one of the driving forces behind water and food-related research on campus, much of which relates to agriculture. In 2022, J-WAFS established the Water and Food Grand Challenge Grant to inspire MIT researchers to work toward a water-secure and food-secure future for our changing planet. Not unlike MIT’s Climate Grand Challenges, the J-WAFS Grand Challenge seeks to leverage multiple areas of expertise, programs, and Institute resources. The initial call for statements of interest returned 23 letters from MIT researchers spanning 18 departments, labs, and centers. J-WAFS hosted workshops for the proposers to present and discuss their initial ideas. These were winnowed down to a smaller set of invited concept papers, followed by the final proposal stage.

    Today, J-WAFS is delighted to report that the inaugural J-WAFS Grand Challenge Grant has been awarded to a team of researchers led by Professor Matt Shoulders and research scientist Robert Wilson of the Department of Chemistry. A panel of expert external reviewers strongly endorsed their proposal, which tackles a longstanding problem in crop biology — how to make photosynthesis more efficient. The team will receive $1.5 million over three years to facilitate a multistage research project that combines cutting-edge innovations in synthetic and computational biology. If successful, this project could create major benefits for agriculture and food systems worldwide.

    “Food systems are a major source of global greenhouse gas emissions, and they are also increasingly vulnerable to the impacts of climate change. That’s why when we talk about climate change, we have to talk about food systems, and vice versa,” says Maria T. Zuber, MIT’s vice president for research. “J-WAFS is central to MIT’s efforts to address the interlocking challenges of climate, water, and food. This new grant program aims to catalyze innovative projects that will have real and meaningful impacts on water and food. I congratulate Professor Shoulders and the rest of the research team on being the inaugural recipients of this grant.”

    Shoulders will work with Bryan Bryson, associate professor of biological engineering, as well as Bin Zhang, associate professor of chemistry, and Mary Gehring, a professor in the Department of Biology and the Whitehead Institute for Biomedical Research. Robert Wilson from the Shoulders lab will be coordinating the research effort. The team at MIT will work with outside collaborators Spencer Whitney, a professor from the Australian National University, and Ahmed Badran, an assistant professor at the Scripps Research Institute. A milestone-based collaboration will also take place with Stephen Long, a professor from the University of Illinois at Urbana-Champaign. The group consists of experts in continuous directed evolution, machine learning, molecular dynamics simulations, translational plant biochemistry, and field trials.

    “This project seeks to fundamentally improve the RuBisCO enzyme that plants use to convert carbon dioxide into the energy-rich molecules that constitute our food,” says J-WAFS Director John H. Lienhard V. “This difficult problem is a true grand challenge, calling for extensive resources. With J-WAFS’ support, this long-sought goal may finally be achieved through MIT’s leading-edge research,” he adds.

    RuBisCO: No, it’s not a new breakfast cereal; it just might be the key to an agricultural revolution

    A growing global population, the effects of climate change, and social and political conflicts like the war in Ukraine are all threatening food supplies, particularly grain crops. Current projections estimate that crop production must increase by at least 50 percent over the next 30 years to meet food demands. One key barrier to increased crop yields is a photosynthetic enzyme called ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO). During photosynthesis, crops use energy gathered from light to draw carbon dioxide (CO2) from the atmosphere and transform it into sugars and cellulose for growth, a process known as carbon fixation. RuBisCO is essential for capturing CO2 from the air and initiating its conversion into energy-rich molecules like glucose. This reaction occurs during the second stage of photosynthesis, also known as the Calvin cycle. Without RuBisCO, the chemical reactions that account for virtually all carbon acquisition in life could not occur.

    Unfortunately, RuBisCO has biochemical shortcomings. Notably, the enzyme acts slowly. Many other enzymes can process a thousand molecules per second, but RuBisCO in chloroplasts fixes fewer than six carbon dioxide molecules per second, often limiting the rate of plant photosynthesis. Another problem is that oxygen (O2) molecules and carbon dioxide molecules are relatively similar in shape and chemical properties, and RuBisCO is unable to fully discriminate between the two. The inadvertent fixation of oxygen by RuBisCO leads to energy and carbon loss. What’s more, at higher temperatures RuBisCO reacts even more frequently with oxygen, which will contribute to decreased photosynthetic efficiency in many staple crops as our climate warms.
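
    A back-of-the-envelope calculation shows why the oxygen problem matters. The standard kinetic relation v_c / v_o = S_c/o × [CO2]/[O2] links the enzyme’s specificity factor to the split between carboxylation and oxygenation. The sketch below plugs in typical textbook-range values for a C3 plant; the numbers are assumptions for illustration, not measurements from this project.

```python
# Rough split between carboxylation (useful) and oxygenation (wasteful)
# for RuBisCO, using v_c / v_o = S_c/o * [CO2] / [O2]. All values assumed.
S_CO2_O2 = 90.0   # assumed CO2/O2 specificity factor, typical C3-plant range
CO2_uM   = 8.0    # assumed dissolved CO2 at the enzyme, micromolar
O2_uM    = 250.0  # assumed dissolved O2, micromolar

ratio = S_CO2_O2 * CO2_uM / O2_uM        # carboxylations per oxygenation
frac_oxygenation = 1.0 / (1.0 + ratio)   # share of events that fix O2

print(f"~{ratio:.1f} carboxylations per oxygenation")
print(f"~{100 * frac_oxygenation:.0f}% of catalytic events fix O2 instead of CO2")
```

    With these assumed numbers, roughly one reaction in four is an oxygenation, which is why even modest gains in specificity can translate into real efficiency improvements.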

    The scientific consensus is that genetic engineering and synthetic biology approaches could revolutionize photosynthesis and offer protection against crop losses. To date, crop RuBisCO engineering has been impaired by technological obstacles that have limited any success in significantly enhancing crop production. Excitingly, genetic engineering and synthetic biology tools are now at a point where they can be applied and tested with the aim of creating crops with new or improved biological pathways for producing more food for the growing population.

    An epic plan for fighting food insecurity

    The 2023 J-WAFS Grand Challenge project will use state-of-the-art, transformative protein engineering techniques drawn from biomedicine to improve the biochemistry of photosynthesis, specifically focusing on RuBisCO. Shoulders and his team are planning to build what they call the Enhanced Photosynthesis in Crops (EPiC) platform. The project will evolve and design better crop RuBisCO in the laboratory, followed by validation of the improved enzymes in plants, ultimately resulting in the deployment of enhanced RuBisCO in field trials to evaluate the impact on crop yield. 

    Several recent developments make high-throughput engineering of crop RuBisCO possible. RuBisCO requires a complex chaperone network for proper assembly and function in plants. Chaperones are like helpers that guide proteins during their maturation process, shielding them from aggregation while coordinating their correct assembly. Wilson and his collaborators previously unlocked the ability to recombinantly produce plant RuBisCO outside of plant chloroplasts by reconstructing this chaperone network in Escherichia coli (E. coli). Whitney has now established that the RuBisCO enzymes from a range of agriculturally relevant crops, including potato, carrot, strawberry, and tobacco, can also be expressed using this technology. Whitney and Wilson have further developed a range of RuBisCO-dependent E. coli screens that can identify improved RuBisCO from complex gene libraries. Moreover, Shoulders and his lab have developed sophisticated in vivo mutagenesis technologies that enable efficient continuous directed evolution campaigns. Continuous directed evolution refers to a protein engineering process that can accelerate the steps of natural evolution simultaneously in an uninterrupted cycle in the lab, allowing for rapid testing of protein sequences. While Shoulders and Badran both have prior experience with cutting-edge directed evolution platforms, this will be the first time directed evolution is applied to RuBisCO from plants.
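
    The mutate-and-select logic at the heart of directed evolution can be sketched in a few lines. The toy below is a discrete, greedy hill-climb rather than the continuous in vivo systems described above, and the alphabet, “assay,” and target sequence are all invented for illustration; real campaigns would score variants with the RuBisCO-dependent E. coli screens.

```python
# Toy directed-evolution loop: mutate a sequence, keep the fittest variant.
# The assay is a stand-in scoring function; everything here is invented.
import random

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids
TARGET = "MKRLVAGT"          # pretend optimum, unknown to the search

def assay(seq):
    """Stand-in for a wet-lab activity measurement (similarity to TARGET)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def propose_mutants(seq, n=50):
    """Generate n single-point mutants of seq."""
    mutants = []
    for _ in range(n):
        i = random.randrange(len(seq))
        mutants.append(seq[:i] + random.choice(AA) + seq[i + 1:])
    return mutants

random.seed(1)
best = "".join(random.choice(AA) for _ in TARGET)  # random starting variant
for generation in range(20):
    best = max(propose_mutants(best) + [best], key=assay)
print("best sequence:", best, "| activity:", assay(best))
```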

    Artificial intelligence is changing the way enzyme engineering is undertaken by researchers. Principal investigators Zhang and Bryson will leverage modern computational methods to simulate the dynamics of RuBisCO structure and explore its evolutionary landscape. Specifically, Zhang will use molecular dynamics simulations to simulate and monitor the conformational dynamics of the atoms in a protein and its programmed environment over time. This approach will help the team evaluate the effect of mutations and new chemical functionalities on the properties of RuBisCO. Bryson will employ artificial intelligence and machine learning to search the RuBisCO activity landscape for optimal sequences. The computational and biological arms of the EPiC platform will work together to both validate and inform each other’s approaches to accelerate the overall engineering effort.

    Shoulders and the group will deploy their designed enzymes in tobacco plants to evaluate their effects on growth and yield relative to natural RuBisCO. Gehring, a plant biologist, will assist with screening improved RuBisCO variants using the tobacco relative Nicotiana benthamiana, where transient expression can be deployed. Transient expression is a speedy approach to test whether novel engineered RuBisCO variants can be correctly synthesized in leaf chloroplasts. Variants that pass this quality-control checkpoint at MIT will be passed to the Whitney Lab at the Australian National University for stable transformation into Nicotiana tabacum (tobacco), enabling robust measurements of photosynthetic improvement. In a final step, Professor Long at the University of Illinois at Urbana-Champaign will perform field trials of the most promising variants.

    Even small improvements could have a big impact

    A common criticism of efforts to improve RuBisCO is that natural evolution has not already identified a better enzyme, possibly implying that none will be found. Traditional views have posited a catalytic trade-off between RuBisCO’s CO2/O2 specificity factor and its CO2 fixation efficiency, leading to the belief that improvements in the specificity factor might be offset by even slower carbon fixation, or vice versa. This trade-off has been suggested to explain why natural evolution has been slow to produce a better RuBisCO. But Shoulders and the team are convinced that the EPiC platform can unlock significant overall improvements to plant RuBisCO. This view is supported by the fact that Wilson and Whitney have previously used directed evolution to improve CO2 fixation efficiency by 50 percent in RuBisCO from cyanobacteria (the ancient progenitors of plant chloroplasts) while simultaneously increasing the specificity factor.

    The EPiC researchers anticipate that their initial variants could yield 20 percent increases in RuBisCO’s specificity factor without impairing other aspects of catalysis. More sophisticated variants could lift RuBisCO out of its evolutionary trap and display attributes not currently observed in nature. “If we achieve anywhere close to such an improvement and it translates to crops, the results could help transform agriculture,” Shoulders says. “If our accomplishments are more modest, it will still recruit massive new investments to this essential field.”

    Successful engineering of RuBisCO would be a scientific feat of its own and ignite renewed enthusiasm for improving plant CO2 fixation. Combined with other advances in photosynthetic engineering, such as improved light usage, a new green revolution in agriculture could be achieved. Long-term impacts of the technology’s success will be measured in improvements to crop yield and grain availability, as well as resilience against yield losses under higher field temperatures. Moreover, improved land productivity together with policy initiatives would assist in reducing the environmental footprint of agriculture. With more “crop per drop,” reductions in water consumption from agriculture would be a major boost to sustainable farming practices.

    “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders adds. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

  • Detailed images from space offer clearer picture of drought effects on plants

    “MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore the ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project is leveraging a new generation of remote sensing devices to provide high-resolution measurements of plant water stress at regional to global scales.

    Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

    Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

    Around 2014, he began to delve deeper into the carbon cycle as he began his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer the question, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science. In it, he claimed that climate models most likely overestimated how much carbon plants will be able to absorb by the end of the century, by a factor of three. 

    After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

    New ways to monitor drought

    Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

    According to the U.N. World Meteorological Organization, the number and duration of droughts have increased by 29 percent since 2000, as compared to the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems.

    Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with incredible spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based meteorological observations. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the datasets. The J-WAFS project is among the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States, with the hope of expanding to a global assessment in the future.

    Assisting farmers and resource managers 

    Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. But the map is coarse: more of a drought recap or summary than a forecast, it cannot predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.

    Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
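
    A stripped-down version of that lagged-response idea fits in a few lines: fit how today’s soil moisture relates to plant stress several weeks later, then apply the fit to the latest reading. The sketch below uses synthetic data and an assumed four-week lag; the real system fuses multiple satellite products at 10-30 meter resolution.

```python
# Toy "plant water stress forecast": regress stress at week t+LAG on soil
# moisture at week t, then forecast from the latest reading. Data synthetic.
import numpy as np

rng = np.random.default_rng(0)
weeks = 120
soil = 0.3 + 0.1 * np.sin(np.arange(weeks) / 8) + 0.02 * rng.standard_normal(weeks)

LAG = 4  # assumed plant response lag, in weeks
stress = 1.0 - 2.0 * soil[:-LAG] + 0.05 * rng.standard_normal(weeks - LAG)

# Least-squares fit of stress[t + LAG] ~ a * soil[t] + b.
X = np.column_stack([soil[:-LAG], np.ones(weeks - LAG)])
(a, b), *_ = np.linalg.lstsq(X, stress, rcond=None)

forecast = a * soil[-1] + b  # stress expected LAG weeks from now
print(f"predicted water stress {LAG} weeks ahead: {forecast:.2f}")
```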

    The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

    Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole.

  • Integrating humans with AI in structural design

    Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

    But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

    Now, researchers at MIT have found a way to achieve some of the best of both approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly than a fully manual approach would allow.

    The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

    The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

    “It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

    “You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions the human touch is essential.

    As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. During the iterations, they saw that one design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

    The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
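
    A schematic version of that pause-and-resume loop appears below. The “design” is a toy one-dimensional density field rather than a real finite-element model, and the update rule and user callback are invented; the sketch is meant only to show the control flow of interrupting an optimizer to fold in human edits, not Carstensen and Ha’s actual algorithm.

```python
# Toy human-in-the-loop optimization: run a design update, pause every k
# iterations for a user edit, then resume with the edited regions pinned.
import numpy as np

def toy_performance(density, load=1.0):
    """Toy stand-in for compliance: more material means a 'stiffer' design."""
    return load / (density.sum() + 1e-9)  # lower is better

def optimize(n=20, total_iters=60, pause_every=20, volume_frac=0.4,
             user_callback=None):
    rho = np.full(n, volume_frac)      # start from a uniform design
    frozen = np.zeros(n, dtype=bool)   # elements the user has pinned
    target = volume_frac * n           # material budget
    for it in range(total_iters):
        # Toy update: drift material toward the middle (the cartoon "load
        # path"), leaving user-pinned elements untouched.
        pull = np.exp(-((np.arange(n) - n / 2) ** 2) / (n / 4) ** 2)
        rho = np.where(frozen, rho, 0.9 * rho + 0.1 * pull)
        # Rescale the free elements so the total stays on budget.
        scale = (target - rho[frozen].sum()) / max(rho[~frozen].sum(), 1e-9)
        rho = np.where(frozen, rho, np.clip(rho * scale, 0.0, 1.0))
        # Periodically pause and hand the intermediate design to the human.
        if user_callback and (it + 1) % pause_every == 0:
            rho, frozen = user_callback(rho, frozen)
    return rho

def beef_up_left_end(rho, frozen):
    """Example 'user edit': thicken the first three elements and pin them."""
    rho, frozen = rho.copy(), frozen.copy()
    rho[:3] = 1.0
    frozen[:3] = True
    return rho, frozen

final = optimize(user_callback=beef_up_left_end)
print("final design:", np.round(final, 2))
print("toy performance (lower is better):", round(toy_performance(final), 3))
```

    The structural idea is simply that when the optimizer resumes, it treats the user-edited regions as constraints rather than overwriting them.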

    While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

    The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

    Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

    Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

    “The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

    “By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”