More stories

  • To decarbonize the chemical industry, electrify it

    The chemical industry is the world’s largest industrial energy consumer and the third-largest source of industrial emissions, according to the International Energy Agency. In 2019, the industrial sector as a whole was responsible for 24 percent of global greenhouse gas emissions. And yet, as the world races to find pathways to decarbonization, the chemical industry has been largely untouched.

    “When it comes to climate action and dealing with the emissions that come from the chemical sector, the slow pace of progress is partly technical and partly driven by the hesitation on behalf of policymakers to overly impact the economic competitiveness of the sector,” says Dharik Mallapragada, a principal research scientist at the MIT Energy Initiative.

    With so many of the items we interact with in our daily lives — from soap to baking soda to fertilizer — deriving from products of the chemical industry, the sector has become a major source of economic activity and employment for many nations, including the United States and China. But as the global demand for chemical products continues to grow, so do the industry’s emissions.

    New sustainable chemical production methods need to be developed and deployed, and current emission-intensive chemical production technologies need to be reconsidered, urge the authors of a new paper published in Joule. Researchers from DC-MUSE, a multi-institution research initiative, argue that electrification powered by low-carbon sources should be viewed more broadly as a viable decarbonization pathway for the chemical industry. In this paper, they shine a light on different potential methods for doing just that.

    “Generally, the perception is that electrification can play a role in this sector — in a very narrow sense — in that it can replace fossil fuel combustion by providing the heat that the combustion is providing,” says Mallapragada, a member of DC-MUSE. “What we argue is that electrification could be much more than that.”

    The researchers outline four technological pathways — ranging from more mature, near-term options to less technologically mature options in need of research investment — and present the opportunities and challenges associated with each.

    The first two pathways directly replace fossil fuel-produced heat (which facilitates the reactions inherent in chemical production) with electricity or electrochemically generated hydrogen. The researchers suggest that both options could be deployed now and potentially be used to retrofit existing facilities. Electrolytic hydrogen is also highlighted as an opportunity to replace fossil fuel-produced hydrogen (a process that emits carbon dioxide) as a critical chemical feedstock. In 2020, fossil-based hydrogen supplied nearly all hydrogen demand (90 megatons) in the chemical and refining industries — hydrogen’s largest consumers.

    The researchers note that increasing the role of electricity in decarbonizing the chemical industry will directly affect the decarbonization of the power grid. They stress that to successfully implement these technologies, their operation must coordinate with the power grid in a mutually beneficial manner to avoid overburdening it. “If we’re going to be serious about decarbonizing the sector and relying on electricity for that, we have to be creative in how we use it,” says Mallapragada. “Otherwise we run the risk of having addressed one problem, while creating a massive problem for the grid in the process.”

    Electrified processes have the potential to be much more flexible than conventional fossil fuel-driven processes. This can reduce the cost of chemical production by allowing producers to shift electricity consumption to times when the cost of electricity is low. “Process flexibility is particularly impactful during stressed power grid conditions and can help better accommodate renewable generation resources, which are intermittent and are often poorly correlated with daily power grid cycles,” says Yury Dvorkin, an associate research professor at the Johns Hopkins Ralph O’Connor Sustainable Energy Institute. “It’s beneficial for potential adopters because it can help them avoid consuming electricity during high-price periods.”
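
    As a rough illustration of that kind of load shifting, the sketch below schedules a fixed daily electricity demand for a hypothetical electrified process into the cheapest hours of a day-ahead price forecast. The prices, energy requirement, and per-hour power limit are all assumed values for illustration, not figures from the paper.

```python
# Illustrative load-shifting sketch: place a flexible electrified process's
# daily electricity demand into the cheapest hours of a day-ahead price forecast.
# All numbers are hypothetical, for illustration only.

hourly_prices = [  # $/MWh, assumed day-ahead forecast for 24 hours
    42, 38, 35, 33, 32, 34, 45, 60, 72, 80, 85, 90,
    88, 82, 75, 70, 78, 95, 110, 98, 80, 65, 55, 48,
]

daily_energy_mwh = 120.0   # energy the process must consume over the day (assumed)
max_power_mw = 10.0        # maximum power the process can draw in any hour (assumed)

def schedule_flexible_load(prices, energy_mwh, max_mw):
    """Greedy schedule: fill the cheapest hours first, up to the hourly power limit."""
    schedule = [0.0] * len(prices)
    remaining = energy_mwh
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if remaining <= 0:
            break
        draw = min(max_mw, remaining)
        schedule[hour] = draw
        remaining -= draw
    return schedule

def daily_cost(schedule):
    """Total cost of a schedule given the hourly prices."""
    return sum(p * e for p, e in zip(hourly_prices, schedule))

flexible = schedule_flexible_load(hourly_prices, daily_energy_mwh, max_power_mw)
flat = [daily_energy_mwh / 24] * 24  # inflexible baseline: constant draw all day

print(f"Flat operation cost:     ${daily_cost(flat):,.0f}")
print(f"Flexible operation cost: ${daily_cost(flexible):,.0f}")
```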

    Dvorkin adds that some intermediate energy carriers, such as hydrogen, can potentially be used as highly efficient energy storage for day-to-day operations and as long-term energy storage. This would help support the power grid during extreme events when traditional and renewable generators may be unavailable. “The application of long-duration storage is of particular interest as this is a key enabler of a low-emissions society, yet not widespread beyond pumped hydro units,” he says. “However, as we envision electrified chemical manufacturing, it is important to ensure that the supplied electricity is sourced from low-emission generators to prevent emissions leakages from the chemical to power sector.” 

    The next two pathways introduced — utilizing electrochemistry and plasma — are less technologically mature but have the potential to replace energy- and carbon-intensive thermochemical processes currently used in the industry. By adopting electrochemical processes or plasma-driven reactions instead, chemical transformations can occur at lower temperatures and pressures, potentially enhancing efficiency. “These reaction pathways also have the potential to enable more flexible, grid-responsive plants and the deployment of modular manufacturing plants that leverage distributed chemical feedstocks such as biomass waste — further enhancing sustainability in chemical manufacturing,” says Miguel Modestino, the director of the Sustainable Engineering Initiative at the New York University Tandon School of Engineering.

    A large barrier to deep decarbonization of chemical manufacturing relates to its complex, multi-product nature. But, according to the researchers, each of these electricity-driven pathways supports chemical industry decarbonization for various feedstock choices and end-of-life disposal decisions. Each should be evaluated in comprehensive techno-economic and environmental life cycle assessments to weigh trade-offs and establish suitable cost and performance metrics.

    Regardless of the pathway chosen, the researchers stress the need for active research and development and deployment of these technologies. They also emphasize the importance of workforce training and development running in parallel to technology development. As André Taylor, the director of DC-MUSE, explains, “There is a healthy skepticism in the industry regarding electrification and adoption of these technologies, as it involves processing chemicals in a new way.” The workforce at different levels of the industry hasn’t necessarily been exposed to ideas related to the grid, electrochemistry, or plasma. The researchers say that workforce training at all levels will help build greater confidence in these different solutions and support customer-driven industry adoption.

    “There’s no silver bullet, which is kind of the standard line with all climate change solutions,” says Mallapragada. “Each option has pros and cons, as well as unique advantages. But being aware of the portfolio of options in which you can use electricity allows us to have a better chance of success and of reducing emissions — and doing so in a way that supports grid decarbonization.”

    This work was supported, in part, by the Alfred P. Sloan Foundation.

  • Chess players face a tough foe: air pollution

    Here’s something else chess players need to keep in check: air pollution.

    That’s the bottom line of a newly published study co-authored by an MIT researcher, showing that chess players perform objectively worse and make more suboptimal moves, as measured by a computerized analysis of their games, when there is more fine particulate matter in the air.

    More specifically, given a modest increase in fine particulate matter, the probability that chess players will make an error increases by 2.1 percentage points, and the magnitude of those errors increases by 10.8 percent. In this setting, at least, cleaner air leads to clearer heads and sharper thinking.

    “We find that when individuals are exposed to higher levels of air pollution, they make more mistakes, and they make larger mistakes,” says Juan Palacios, an economist in MIT’s Sustainable Urbanization Lab, and co-author of a newly published paper detailing the study’s findings.

    The paper, “Indoor Air Quality and Strategic Decision-Making,” appears today in advance online form in the journal Management Science. The authors are Steffen Künn, an associate professor in the School of Business and Economics at Maastricht University, the Netherlands; Palacios, who is head of research in the Sustainable Urbanization Lab, in MIT’s Department of Urban Studies and Planning (DUSP); and Nico Pestel, an associate professor in the School of Business and Economics at Maastricht University.

    The toughest foe yet?

    Fine particulate matter refers to tiny particles 2.5 microns or less in diameter, denoted PM2.5. They are often produced by combustion, whether in automobile engines, coal-fired power plants, forest fires, or indoor cooking over open fires. The World Health Organization estimates that air pollution leads to over 4 million premature deaths worldwide every year, due to cancer, cardiovascular problems, and other illnesses.

    Scholars have produced many studies exploring the effects of air pollution on cognition. The current study adds to that literature by analyzing the subject in a particularly controlled setting. The researchers studied the performance of 121 chess players in three seven-round tournaments in Germany in 2017, 2018, and 2019, comprising more than 30,000 chess moves. The scholars used three web-connected sensors inside the tournament venue to measure carbon dioxide, PM2.5 concentrations, and temperature, all of which can be affected by external conditions, even in an indoor setting. Because each tournament lasted eight weeks, it was possible to examine how air-quality changes related to changes in player performance.

    In a replication exercise, the authors found the same impacts of air pollution on some of the strongest players in the history of chess using data from 20 years of games from the first division of the German chess league. 

    To evaluate player performance, meanwhile, the scholars used software programs that assess each move made in each chess match, identify optimal decisions, and flag significant errors.
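
    The article does not detail the software used, but the general approach of engine-based move scoring can be sketched roughly as follows, using the open-source python-chess library with a locally installed Stockfish engine. Both tools, the search depth, and the 100-centipawn error threshold are illustrative assumptions rather than the study's actual setup.

```python
# Rough sketch of engine-based move evaluation: for each move in a game,
# compare the engine's evaluation of its preferred continuation with the
# evaluation after the move actually played, and flag large drops as errors.
# python-chess, Stockfish, the search depth, and the 100-centipawn threshold
# are illustrative choices, not necessarily what the study used.
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "stockfish"   # path to a UCI engine binary (assumed)
ERROR_THRESHOLD_CP = 100    # centipawn loss counted as a significant error (assumed)

def evaluate_game(pgn_path):
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    errors = []
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        for move in game.mainline_moves():
            mover = board.turn
            move_number = board.fullmove_number
            # Evaluation of the position from the mover's point of view,
            # assuming best play from here on.
            best = engine.analyse(board, chess.engine.Limit(depth=18))
            best_cp = best["score"].pov(mover).score(mate_score=10_000)
            # Evaluation after the move that was actually played.
            board.push(move)
            after = engine.analyse(board, chess.engine.Limit(depth=18))
            after_cp = after["score"].pov(mover).score(mate_score=10_000)
            loss = best_cp - after_cp
            if loss >= ERROR_THRESHOLD_CP:
                errors.append((move_number, move.uci(), loss))
    finally:
        engine.quit()
    return errors

# Example (hypothetical file name): errors = evaluate_game("round3_board12.pgn")
```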

    During the tournaments, PM2.5 concentrations ranged from 14 to 70 micrograms per cubic meter of air, levels of exposure commonly found in cities in the U.S. and elsewhere. The researchers examined and ruled out alternate potential explanations for the dip in player performance, such as increased noise. They also found that carbon dioxide and temperature changes did not correspond to performance changes. Using the standardized ratings chess players earn, the scholars also accounted for the quality of opponents each player faced. Ultimately, an analysis exploiting the plausibly random variation in pollution driven by changes in wind direction confirmed that the findings are driven by direct exposure to airborne particles.

    “It’s pure random exposure to air pollution that is driving these people’s performance,” Palacios says. “Against comparable opponents in the same tournament round, being exposed to different levels of air quality makes a difference for move quality and decision quality.”

    The researchers also found that when air pollution was worse, the chess players performed even more poorly under time constraints. The tournament rules mandated that 40 moves be made within 110 minutes; for moves 31-40 in all the matches, an air pollution increase of 10 micrograms per cubic meter led to an increased probability of error of 3.2 percentage points, with the magnitude of those errors increasing by 17.3 percent.

    “We find it interesting that those mistakes especially occur in the phase of the game where players are facing time pressure,” Palacios says. “When these players do not have the ability to compensate [for] lower cognitive performance with greater deliberation, [that] is where we are observing the largest impacts.”

    “You can live miles away and be affected”

    Palacios emphasizes that, as the study indicates, air pollution may affect people in settings where they might not think it makes a difference.

    “It’s not like you have to live next to a power plant,” Palacios says. “You can live miles away and be affected.”

    And while this particular study focuses tightly on chess players, the authors write in the paper that the findings have “strong implications for high-skilled office workers,” who might also face tricky cognitive tasks in conditions of variable air pollution. In this sense, Palacios says, “The idea is to provide accurate estimates to policymakers who are making difficult decisions about cleaning up the environment.”

    Indeed, Palacios observes, the fact that even chess players — who spend untold hours preparing themselves for all kinds of scenarios they may face in matches — can perform worse when air pollution rises suggests that a similar problem could affect people cognitively in many other settings.

    “There are more and more papers showing that there is a cost with air pollution, and there is a cost for more and more people,” Palacios says. “And this is just one example showing that even for these very [excellent] chess players, who think they can beat everything — well, it seems that with air pollution, they have an enemy who harms them.”

    Support for the study was provided, in part, by the Graduate School of Business and Economics at Maastricht, and the Institute for Labor Economics in Bonn, Germany.

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
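
    A back-of-the-envelope version of that relationship is sketched below; the fleet size, computing power, and hours driven mirror figures quoted elsewhere in this article, while the grid carbon intensity is an assumed global-average value, so the result is illustrative rather than an output of the paper's probabilistic model.

```python
# Back-of-the-envelope version of the emissions model described above:
# emissions = fleet size x hours driven x computer power x grid carbon intensity.
# Numbers are illustrative, loosely based on figures quoted in the article.

fleet_size            = 1_000_000_000   # autonomous vehicles
hours_per_day         = 1.0             # hours each vehicle drives per day
computer_power_watts  = 840.0           # onboard computing power per vehicle
carbon_intensity      = 0.475           # kg CO2 per kWh of electricity (assumed global average)

energy_kwh_per_day = fleet_size * hours_per_day * computer_power_watts / 1000.0
emissions_tonnes_per_year = energy_kwh_per_day * 365 * carbon_intensity / 1000.0

print(f"Computing energy: {energy_kwh_per_day / 1e9:.2f} TWh per day")
print(f"Emissions: about {emissions_tonnes_per_year / 1e6:.0f} million tonnes CO2 per year")
```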

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
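
    That count can be reproduced with simple arithmetic, under the assumption that each of the 10 networks processes each of the 10 camera streams at about 60 frames per second; the frame rate and the one-inference-per-network-per-frame accounting are illustrative assumptions, not figures from the paper.

```python
# How 21.6 million inferences per vehicle-day can arise (one illustrative breakdown;
# the 60 fps frame rate and the one-inference-per-network-per-camera-frame
# accounting are assumptions, not figures from the paper).
networks      = 10          # deep neural networks per vehicle
cameras       = 10          # camera streams per vehicle
frames_per_s  = 60          # assumed frame rate per camera
driving_hours = 1           # hours driven per day

inferences_per_vehicle_day = networks * cameras * frames_per_s * 3600 * driving_hours
fleet_inferences_per_day = inferences_per_vehicle_day * 1_000_000_000

print(f"{inferences_per_vehicle_day:,} inferences per vehicle per day")            # 21,600,000
print(f"{fleet_inferences_per_day:.2e} inferences per day for 1 billion vehicles")  # 2.16e+16
```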

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Study: Extreme heat is changing habits of daily life

    Extreme temperatures make people less likely to pursue outdoor activities they would otherwise make part of their daily routine, a new study led by MIT researchers has confirmed.

    The data-rich study, set in China, shows that when hourly temperatures reach 30 degrees Celsius (86 degrees Fahrenheit), people are 5 percent less likely to go to public parks, and when hourly temperatures hit 35 C (95 F), people are 13 percent less likely to go to those parks.

    “We did observe adaptation,” says Siqi Zheng, an MIT professor and co-author of a new paper detailing the study’s findings. She adds: “Environmental hazards hurt the daily quality of life. Yes, people protect themselves [by limiting activity], but they lose the benefit of going out to enjoy themselves in nature, or meeting friends in parks.”

    The research adds to our knowledge about the effects of a warming climate by quantifying the effects of hot temperatures on the activity of people within a given day — how they shift their activities from hotter to cooler time periods — and not just across longer periods of time.

    “We found that if we take into account this within-day adaptation, extreme temperatures actually have a much larger effect on human activity than the previous daily or monthly estimations [indicate],” says Yichun Fan, an MIT doctoral candidate and another of the paper’s co-authors.

    The paper, “Intraday Adaptation to Extreme Temperatures in Outdoor Activity,” is published this week in the journal Scientific Reports. The authors are Fan, a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP); Jianghao Wang, a professor at the Chinese Academy of Sciences; Nick Obradovich, chief scientist at Project Regeneration; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at MIT’s Center for Real Estate and DUSP, and faculty director of the MIT Center for Real Estate.

    To conduct the study, the researchers used anonymized data for 900 million cellphone users in China in 2017, studying roughly 60 billion separate cellphone location queries per day made available through the technology firm Tencent. With this data, the scholars also examined activity in 10,499 parks across the country, comparing usage totals across a range of conditions. And they obtained temperature data from about 2,000 weather stations in China.
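
    The study's own estimation strategy is more elaborate, but the core comparison can be sketched as follows, assuming a tidy table of hourly park visit counts matched to hourly temperatures; the file name, column names, and simple bin-and-compare logic are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch: compare hourly park visits across temperature bins,
# relative to a mild reference bin. Column names and the bin-and-compare logic
# are assumptions for illustration; the study's estimation is far more involved.
import pandas as pd

# Expected columns: park_id, date, hour, visits, temperature_c (hypothetical file)
df = pd.read_csv("park_visits_hourly.csv")

bins   = [-40, 25, 30, 35, 60]
labels = ["<25C", "25-30C", "30-35C", ">=35C"]
df["temp_bin"] = pd.cut(df["temperature_c"], bins=bins, labels=labels)

mean_visits = df.groupby("temp_bin", observed=True)["visits"].mean()
reference = mean_visits["<25C"]
pct_change = 100 * (mean_visits - reference) / reference

print(pct_change.round(1))  # expect negative values in the hotter bins
```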

    Ultimately, as the scholars write in the paper, they were able to “document large and significant activity-depressing and activity-delaying effects” on park visits as a result of ultrahot temperatures.

    “People have intraday adaptation patterns that hadn’t been documented in the previous literature,” Fan says. “These have important implications about people’s heat exposure and how future climate change will affect people’s activity and health.”

    As Zheng points out, altered use of public spaces affects daily routines not only in terms of individual activity and exercise, but also in terms of social and community life.

    “Extreme climates will reduce people’s opportunities to socialize in cities, or just watch kids playing basketball or soccer, which is not good,” she says. “We want people to have a wide-ranging urban life. There is a social cost to this adaptation.”

    As the research indicates, people clearly adapt to temperature spikes. The data also show that evening use of parks increases on extremely hot days, but only after conditions have cooled down. While that seems like a beneficial adaptation to very hot weather, the scholars, citing existing research, suggest people may sleep less as a result of making this kind of change to their daily routines.

    “Adaptation also has its own cost,” Fan says. “People significantly increased their nighttime outdoor activity, which means they delayed their nighttime, which will have a significant health implication, when you consider the potential sleep disruption.”

    All told, the study provides data, and a method, for characterizing in detail the effects of climate change on human activity.

    “If we have more and more granular data about future climate scenarios, they support better predictions about these scenarios, reflecting people’s dynamic behaviors, and the health implications,” says Fan, whose doctoral research incorporates this work and other related studies on climate and urban activity.

    The researchers also note that the methods used in this study could be applied in future studies of many other aspects of urban life, including street-level retail activity, with implications for economic activity, real estate, and urban planning.

    “This relates to many other issues,” Zheng says.

    Jianghao Wang received funding from the National Key Research and Development Program of China, the National Natural Science Foundation of China, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences.

  • Moving water and earth

    As a river cuts through a landscape, it can operate like a conveyor belt, moving truckloads of sediment over time. Knowing how quickly or slowly this sediment flows can help engineers plan for the downstream impact of restoring a river or removing a dam. But the models currently used to estimate sediment flow can be off by a wide margin.

    An MIT team has come up with a better formula to calculate how much sediment a fluid can push across a granular bed — a process known as bed load transport. The key to the new formula comes down to the shape of the sediment grains.

    It may seem intuitive: A smooth, round stone should skip across a river bed faster than an angular pebble. But flowing water also pushes harder on the angular pebble, which could erase the round stone’s advantage. Which effect wins? Existing sediment transport models surprisingly don’t offer an answer, mainly because the problem of measuring grain shape is too unwieldy: How do you quantify a pebble’s contours?

    The MIT researchers found that instead of considering a grain’s exact shape, they could boil the concept of shape down to two related properties: friction and drag. A grain’s drag, or resistance to fluid flow, relative to its internal friction, the resistance to sliding past other grains, can provide an easy way to gauge the effects of a grain’s shape.

    When they incorporated this new mathematical measure of grain shape into a standard model for bed load transport, the new formula made predictions that matched experiments that the team performed in the lab.

    “Sediment transport is a part of life on Earth’s surface, from the impact of storms on beaches to the gravel nests in mountain streams where salmon lay their eggs,” the team writes of their new study, appearing today in Nature. “Damming and sea level rise have already impacted many such terrains and pose ongoing threats. A good understanding of bed load transport is crucial to our ability to maintain these landscapes or restore them to their natural states.”

    The study’s authors are Eric Deal, Santiago Benavides, Qiong Zhang, Ken Kamrin, and Taylor Perron of MIT, and Jeremy Venditti and Ryan Bradley of Simon Fraser University in Canada.

    Figuring flow

    [Video: Glass spheres (top) and natural river gravel (bottom) undergoing bed load transport in a laboratory flume, slowed down 17x relative to real time. Average grain diameter is about 5 mm. The footage shows how rolling and tumbling natural grains interact with one another in a way that is not possible for spheres; what can’t be seen so easily is that natural grains also experience higher drag forces from the flowing water than spheres do. Credit: Courtesy of the researchers]

    Bed load transport is the process by which a fluid such as air or water drags grains across a bed of sediment, causing the grains to hop, skip, and roll along the surface as a fluid flows through. This movement of sediment in a current is what drives rocks to migrate down a river and sand grains to skip across a desert.

    Being able to estimate bed load transport can help scientists prepare for situations such as urban flooding and coastal erosion. Since the 1930s, one formula has been the go-to model for calculating bed load transport; it’s based on a quantity known as the Shields parameter, after the American engineer who originally derived it. This formula sets a relationship between the force of a fluid pushing on a bed of sediment, and how fast the sediment moves in response. Albert Shields incorporated certain variables into this formula, including the average size and density of a sediment’s grains — but not their shape.
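
    For reference, the classical Shields parameter (the dimensionless quantity the article alludes to, not the team's modified formula) is commonly written as

    \[
    \theta = \frac{\tau_b}{(\rho_s - \rho_f)\, g\, D},
    \]

    where \(\tau_b\) is the bed shear stress exerted by the fluid, \(\rho_s\) and \(\rho_f\) are the sediment and fluid densities, \(g\) is gravitational acceleration, and \(D\) is a characteristic grain diameter. Size and density appear explicitly; shape does not.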

    “People may have backed away from accounting for shape because it’s one of these very scary degrees of freedom,” says Kamrin, a professor of mechanical engineering at MIT. “Shape is not a single number.”

    And yet, the existing model has been known to be off by a factor of 10 in its predictions of sediment flow. The team wondered whether grain shape could be a missing ingredient, and if so, how the nebulous property could be mathematically represented.

    “The trick was to focus on characterizing the effect that shape has on sediment transport dynamics, rather than on characterizing the shape itself,” says Deal.

    “It took some thinking to figure that out,” says Perron, a professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “But we went back to derive the Shields parameter, and when you do the math, this ratio of drag to friction falls out.”

    Drag and drop

    Their work showed that the Shields parameter — which predicts how much sediment is transported — can be modified to include not just size and density, but also grain shape, and furthermore, that a grain’s shape can be simply represented by a measure of the grain’s drag and its internal friction. The math seemed to make sense. But could the new formula predict how sediment actually flows?

    To answer this, the researchers ran a series of flume experiments, in which they pumped a current of water through an inclined tank with a floor covered in sediment. They ran tests with sediment of various grain shapes, including beds of round glass beads, smooth glass chips, rectangular prisms, and natural gravel. They measured the amount of sediment that was transported through the tank in a fixed amount of time. They then determined the effect of each sediment type’s grain shape by measuring the grains’ drag and friction.

    For drag, the researchers simply dropped individual grains down through a tank of water and gathered statistics for the time it took the grains of each sediment type to reach the bottom. For instance, a flatter grain type takes a longer time on average, and therefore has greater drag, than a round grain type of the same size and density.

    To measure friction, the team poured grains through a funnel and onto a circular tray, then measured the resulting pile’s angle, or slope — an indication of the grains’ friction, or ability to grip onto each other.
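
    A back-of-the-envelope way to turn those two measurements into numbers is sketched below, assuming a standard force balance for a settling grain (drag at terminal velocity equals submerged weight) and taking the tangent of the pile's angle of repose as an effective friction coefficient; the numerical values are illustrative, not the team's measurements.

```python
# Illustrative conversion of the two measurements into dimensionless numbers:
#   - drag coefficient from terminal settling velocity (drag balances submerged weight)
#   - friction coefficient from the angle of repose of a poured pile
# All numerical values are made up for illustration.
import math

g     = 9.81     # m/s^2
rho_f = 1000.0   # fluid (water) density, kg/m^3
rho_s = 2650.0   # sediment grain density, kg/m^3 (quartz-like, assumed)
d     = 0.005    # grain diameter, m (about 5 mm)

def drag_coefficient(terminal_velocity):
    """Effective drag coefficient from the balance
    (1/2) C_d rho_f v^2 A = (rho_s - rho_f) g V,
    with A and V those of an equivalent sphere of diameter d."""
    return (4.0 / 3.0) * (rho_s - rho_f) * g * d / (rho_f * terminal_velocity**2)

def friction_coefficient(angle_of_repose_deg):
    """Effective friction coefficient from the slope of a poured pile of grains."""
    return math.tan(math.radians(angle_of_repose_deg))

# Hypothetical measurements: a rounder grain settles faster and piles at a lower angle.
print("round-ish grain:  Cd =", round(drag_coefficient(0.35), 2),
      " mu =", round(friction_coefficient(26), 2))
print("angular gravel:   Cd =", round(drag_coefficient(0.25), 2),
      " mu =", round(friction_coefficient(34), 2))
```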

    For each sediment type, they then worked the measured drag and friction into the new formula and found that it could indeed predict the bed load transport, or the amount of moving sediment, that the researchers measured in their experiments.

    The team says the new model more accurately represents sediment flow. Going forward, scientists and engineers can use the model to better gauge how a river bed will respond to scenarios such as sudden flooding from severe weather or the removal of a dam.

    “If you were trying to make a prediction of how fast all that sediment will get evacuated after taking a dam out, and you’re wrong by a factor of three or five, that’s pretty bad,” Perron says. “Now we can do a lot better.”

    This research was supported, in part, by the U.S. Army Research Laboratory.

  • New MIT internships expand research opportunities in Africa

    With new support from the Office of the Associate Provost for International Activities, MIT International Science and Technology Initiatives (MISTI) and the MIT-Africa program are expanding internship opportunities for MIT students at universities and leading academic research centers in Africa. This past summer, MISTI supported 10 MIT student interns at African universities, significantly more than in any previous year.

    “These internships are an opportunity to better merge the research ecosystem of MIT with academia-based research systems in Africa,” says Evan Lieberman, the Total Professor of Political Science and Contemporary Africa and faculty director for MISTI.

    For decades, MISTI has helped MIT students to learn and explore through international experiential learning opportunities and internships in industries like health care, education, agriculture, and energy. MISTI’s MIT-Africa Seed Fund supports collaborative research between MIT faculty and Africa-based researchers, and the new student research internship opportunities are part of a broader vision for deeper engagement between MIT and research institutions across the African continent.

    While Africa is home to 12.5 percent of the world’s population, it generates less than 1 percent of scientific research output in the form of academic journal publications, according to the African Academy of Sciences. Research internships are one way that MIT can build mutually beneficial partnerships across Africa’s research ecosystem, to advance knowledge and spawn innovation in fields important to MIT and its African counterparts, including health care, biotechnology, urban planning, sustainable energy, and education.

    Ari Jacobovits, managing director of MIT-Africa, notes that the new internships provide additional funding to the lab hosting the MIT intern, enabling them to hire a counterpart student research intern from the local university. This support can make the internships more financially feasible for host institutions and helps to grow the research pipeline.

    With MIT’s support, State University of Zanzibar (SUZA) lecturers Raya Ahmada and Abubakar Bakar were able to hire local students to work alongside MIT graduate students Mel Isidor and Rajan Hoyle. Together, the students collaborated over the summer on a mapping project designed to help plan for and protect Zanzibar’s coastal economy.

    “It’s been really exciting to work with research peers in a setting where we can all learn alongside one another and develop this project together,” says Hoyle.

    Using low-cost drone technology, the students and their local counterparts worked to create detailed maps of Zanzibar to support community planning around resilience projects designed to combat coastal flooding and deforestation and assess climate-related impacts to seaweed farming activities. 

    “I really appreciated learning about how engagement happens in this particular context and how community members understand local environmental challenges and conditions based on research and lived experience,” says Isidor. “This is beneficial for us whether we’re working in an international context or in the United States.”

    For biology major Shaida Nishat, an internship at the University of Cape Town offered the chance to work in a vital area of public health with a diverse, international team headed by Associate Professor Salome Maswime, head of the global surgery division and a widely renowned expert in global surgery, a multidisciplinary field of global health focused on improved and equitable surgical outcomes.

    “It broadened my perspective as to how an effort like global surgery ties so many nations together through a common goal that would benefit them all,” says Nishat, who plans to pursue a career in public health.

    For computer science sophomore Antonio L. Ortiz Bigio, the MISTI research internship in Africa was an incomparable experience, culturally and professionally. Bigio interned at the Robotics Autonomous Intelligence and Learning Laboratory at the University of the Witwatersrand in Johannesburg, led by Professor Benjamin Rosman, where he developed software to enable a robot to play chess. The experience has inspired Bigio to continue pursuing robotics and machine learning.

    Participating faculty at the host institutions welcomed their MIT interns, and were impressed by their capabilities. Both Rosman and Maswime described their MIT interns as hard-working and valued team members, who had helped to advance their own work.  

    Building strong global partnerships, whether through faculty research, student internships, or other initiatives, takes time and cultivation, explains Jacobovits. Each successful collaboration helps to seed future exchanges and builds interest at MIT and peer institutions in creative partnerships. As MIT continues to deepen its connections to institutions and researchers across Africa, says Jacobovits, “students like Shaida, Rajan, Mel, and Antonio are really effective ambassadors in building those networks.”

  • Strengthening electron-triggered light emission

    The way electrons interact with photons of light is a key part of many modern technologies, from lasers to solar panels to LEDs. But the interaction is inherently a weak one because of a major mismatch in scale: A wavelength of visible light is about 1,000 times larger than an electron, so the way the two things affect each other is limited by that disparity.

    Now, researchers at MIT and elsewhere have come up with an innovative way to make much stronger interactions between photons and electrons possible, in the process producing a hundredfold increase in the emission of light from a phenomenon called Smith-Purcell radiation. The finding has potential implications for both commercial applications and fundamental scientific research, although it will require more years of research to make it practical.

    The findings are reported today in the journal Nature, in a paper by MIT postdocs Yi Yang (now an assistant professor at the University of Hong Kong) and Charles Roques-Carmes, MIT professors Marin Soljačić and John Joannopoulos, and five others at MIT, Harvard University, and Technion-Israel Institute of Technology.

    In a combination of computer simulations and laboratory experiments, the team found that by using a beam of electrons in combination with a specially designed photonic crystal — a slab of silicon on an insulator, etched with an array of nanometer-scale holes — they could theoretically predict emission many orders of magnitude stronger than is ordinarily possible with conventional Smith-Purcell radiation. They also experimentally recorded a hundredfold increase in radiation in their proof-of-concept measurements.

    Unlike other approaches to producing sources of light or other electromagnetic radiation, the free-electron-based method is fully tunable — it can produce emissions of any desired wavelength, simply by adjusting the size of the photonic structure and the speed of the electrons. This may make it especially valuable for making sources of emission at wavelengths that are difficult to produce efficiently, including terahertz waves, ultraviolet light, and X-rays.
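
    That tunability follows from the textbook Smith-Purcell dispersion relation, a general property of the effect rather than this paper's flatband-specific result:

    \[
    \lambda = \frac{\Lambda}{n}\left(\frac{1}{\beta} - \cos\theta\right),
    \]

    where \(\lambda\) is the emitted wavelength, \(\Lambda\) is the period of the structure, \(n\) is the diffraction order, \(\beta = v/c\) is the electron speed relative to the speed of light, and \(\theta\) is the emission angle. Changing the structure period or the electron speed shifts the emitted wavelength.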

    The team has so far demonstrated the hundredfold enhancement in emission using a repurposed electron microscope to function as an electron beam source. But they say that the basic principle involved could potentially enable far greater enhancements using devices specifically adapted for this function.

    The approach is based on a concept called flatbands, which have been widely explored in recent years in condensed matter physics and photonics but had never been applied to shaping the basic interaction of photons and free electrons. The underlying principle involves the transfer of momentum from the electron to a group of photons, or vice versa. Whereas conventional light-electron interactions rely on producing light at a single angle, the photonic crystal is tuned in such a way that it enables emission over a whole range of angles.

    The same process could also be used in the opposite direction, using resonant light waves to propel electrons, increasing their velocity in a way that could potentially be harnessed to build miniaturized particle accelerators on a chip. These might ultimately be able to perform some functions that currently require giant underground tunnels, such as the 27-kilometer-circumference Large Hadron Collider in Switzerland.

    “If you could actually build electron accelerators on a chip,” Soljačić says, “you could make much more compact accelerators for some of the applications of interest, which would still produce very energetic electrons. That obviously would be huge. For many applications, you wouldn’t have to build these huge facilities.”

    The new system could also potentially provide a highly controllable X-ray beam for radiotherapy purposes, Roques-Carmes says.

    And the system could be used to generate multiple entangled photons, a quantum effect that could be useful in the creation of quantum-based computational and communications systems, the researchers say. “You can use electrons to couple many photons together, which is a considerably hard problem if using a purely optical approach,” says Yang. “That is one of the most exciting future directions of our work.”

    Much work remains to translate these new findings into practical devices, Soljačić cautions. It may take some years to develop the necessary interfaces between the optical and electronic components, to work out how to connect them on a single chip, and to develop an on-chip electron source that produces a continuous wavefront, among other challenges.

    “The reason this is exciting,” Roques-Carmes adds, “is because this is quite a different type of source.” Most technologies for generating light are restricted to very specific ranges of color or wavelength, and “it’s usually difficult to move that emission frequency. Here it’s completely tunable. Simply by changing the velocity of the electrons, you can change the emission frequency. … That excites us about the potential of these sources. Because they’re different, they offer new types of opportunities.”

    But, Soljačić concludes, “in order for them to become truly competitive with other types of sources, I think it will require some more years of research. I would say that with some serious effort, in two to five years they might start competing in at least some areas of radiation.”

    The research team also included Steven Kooi at MIT’s Institute for Soldier Nanotechnologies, Haoning Tang and Eric Mazur at Harvard University, Justin Beroz at MIT, and Ido Kaminer at Technion-Israel Institute of Technology. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the U.S. Air Force Office of Scientific Research, and the U.S. Office of Naval Research.

  • MIT scientists contribute to National Ignition Facility fusion milestone

    On Monday, Dec. 5, at around 1 a.m., a tiny sphere of deuterium-tritium fuel surrounded by a cylindrical can of gold called a hohlraum was targeted by 192 laser beams at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) in California. Over the course of billionths of a second, the lasers fired, generating X-rays inside the gold can and imploding the sphere of fuel.

    On that morning, for the first time ever, the lasers delivered 2.1 megajoules of energy and yielded 3.15 megajoules in return, achieving a historic fusion energy gain well above 1 — a result verified by diagnostic tools developed by the MIT Plasma Science and Fusion Center (PSFC). The importance of these tools was highlighted by Arthur Pak, an LLNL staff scientist who spoke at a U.S. Department of Energy press event on Dec. 13 announcing the NIF’s success.

    Johan Frenje, head of the PSFC High-Energy-Density Physics division, notes that this milestone “will have profound implications for laboratory fusion research in general.”

    Since the late 1950s, researchers worldwide have pursued fusion ignition and energy gain in a laboratory, considering it one of the grand challenges of the 21st century. Ignition can only be reached when the internal fusion heating power is high enough to overcome the physical processes that cool the fusion plasma, creating a positive thermodynamic feedback loop that very rapidly increases the plasma temperature. In the case of inertial confinement fusion, the method used at the NIF, ignition can initiate a “fuel burn propagation” into the surrounding dense and cold fuel, and when done correctly, enable fusion-energy gain.

    Frenje and his PSFC division initially designed dozens of diagnostic systems that were implemented at the NIF, including the vitally important magnetic recoil neutron spectrometer (MRS), which measures the neutron energy spectrum, the data from which fusion yield, plasma ion temperature, and spherical fuel pellet compression (“fuel areal density”) can be determined. Overseen by PSFC Research Scientist Maria Gatu Johnson since 2013, the MRS is one of two systems at the NIF relied upon to measure the absolute neutron yield from the Dec. 5 experiment because of its unique ability to accurately interpret an implosion’s neutron signals.
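
    As a rough illustration of the kind of bookkeeping such a yield measurement supports (not the MRS analysis itself), the sketch below converts a deuterium-tritium neutron yield into fusion energy, using the standard fact that each D-T reaction releases about 17.6 MeV; the example yield is an assumed value chosen only to land near the reported 3.15 megajoules.

```python
# Illustrative conversion from a measured D-T neutron yield to fusion energy.
# Each D-T fusion releases about 17.6 MeV (14.1 MeV carried by the neutron),
# so total fusion energy is roughly (neutron yield) x 17.6 MeV.
# The example yield below is assumed, chosen to land near 3.15 MJ.

MEV_TO_JOULES = 1.602176634e-13
ENERGY_PER_DT_REACTION_MEV = 17.6

def fusion_energy_megajoules(neutron_yield):
    """Total fusion energy (MJ) for a given number of D-T reactions / neutrons."""
    return neutron_yield * ENERGY_PER_DT_REACTION_MEV * MEV_TO_JOULES / 1e6

example_yield = 1.1e18   # neutrons (illustrative)
print(f"{fusion_energy_megajoules(example_yield):.2f} MJ")  # about 3.1 MJ
```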

    “Before the announcement of this historic achievement could be made, the LLNL team wanted to wait until Maria had analyzed the MRS data to an adequate level for a fusion yield to be determined,” says Frenje.

    Response around MIT to NIF’s announcement has been enthusiastic and hopeful. “This is the kind of breakthrough that ignites the imagination,” says Vice President for Research Maria Zuber, “reminding us of the wonder of discovery and the possibilities of human ingenuity. Although we have a long, hard path ahead of us before fusion can deliver clean energy to the electrical grid, we should find much reason for optimism in today’s announcement. Innovation in science and technology holds great power and promise to address some of the world’s biggest challenges, including climate change.”

    Frenje also credits the rest of the team at the PSFC’s High-Energy-Density Physics division, the Laboratory for Laser Energetics at the University of Rochester, LLNL, and other collaborators for their support and involvement in this research, as well as the National Nuclear Security Administration of the Department of Energy, which has funded much of their work since the early 1990s. He is also proud of the number of MIT PhDs that have been generated by the High-Energy-Density Physics Division and subsequently hired by LLNL, including the experimental lead for this experiment, Alex Zylstra PhD ’15.

    “This is really a team effort,” says Frenje. “Without the scientific dialogue and the extensive know-how at the HEDP Division, the critical contributions made by the MRS system would not have happened.”