More stories

  • Expanding energy access in rural Lesotho

    Matt Orosz’s mission for the last 20 years can be explained with a single picture: a satellite image of the world at night, with major cities blazing with light and large swaths of land shrouded in darkness.

    The image reminds Orosz SM ’03, SM ’06, PhD ’12 of what he’s trying to change. Orosz is the CEO of OnePower, an MIT spinout building networks of minigrids powered by solar energy to bring electricity to rural regions of Lesotho.

    There are other companies building minigrids in Africa, but OnePower is the only one to have accomplished the feat in Lesotho, and it’s not hard to understand why. Known as the kingdom in the sky, Lesotho is a small, developing country crossed by mountain ranges and rivers, making it difficult to get electricity to rural regions. Recent estimates suggest that less than half of all households have electricity.

    OnePower’s first minigrid is a small system that has been serving around 200 customers for more than a year. The operation is part of an eight-minigrid project that will provide reliable electricity for the first time to more than 30,000 people, 13 health clinics, 25 schools, and over 100 small businesses.

    Construction on those sites is underway, and Orosz is currently working on a power transmission and road crossing over the Senqu River, the largest in southern Africa. During the project, the operators of a health clinic on the off-grid side of the river let Orosz stay there on the condition that he fix their diesel generator. He got the generator working again, but if everything goes according to plan, the clinic won’t need it for much longer.

    “If you don’t have power, then you don’t have lights, you don’t have computers, you don’t have communications,” Orosz says. “That means hospitals can’t refer patients or get expert opinions or run equipment, and schools can’t get internet. When the fundamental institutions for health and education don’t have power, their effectiveness is pretty limited, which affects quality of life for everybody that lives in the area.”

    Finding a spark

    The health clinic Orosz is staying in isn’t far from where he first learned about energy access problems in rural Africa. Between 2000 and 2002, Orosz lived in Lesotho, without electricity, as a member of the Peace Corps. The experience inspired him to help, but without an engineering background, he knew he’d need to gain more skills first.

    “I applied to MIT so that I could gain some knowledge and experience and apply it in this setting,” Orosz says, noting he spent a lot longer at MIT than he initially intended.

    Orosz first joined the research lab of Harry Hemond, the William E. Leonhard Professor of Civil and Environmental Engineering, learning about topics like physics and fluid mechanics during his first year at MIT. After that, he enrolled in another master’s program in technology and policy. In 2007, he began a PhD at MIT studying solar thermal and photovoltaic hybrid power generation.

    The education wasn’t the only reason Orosz stayed at MIT. Throughout his time on campus, he also took advantage of funding opportunities presented by the IDEAS Social Innovation Challenge and the MIT $100K Entrepreneurship Competition (the $50K at the time). Orosz was also awarded a Fulbright scholarship while at MIT, and was selected for grants from the World Bank and the Environmental Protection Agency.

    Orosz also aligned himself closely with MIT D-Lab. During his second master’s, he led trips to Lesotho with other D-Lab students. Between his master’s and his PhD, Orosz spent a year living in Lesotho exploring energy solutions with three other MIT students, including Amy Mueller ’02, SM ’03, PhD ’12, who is currently chief financial officer of OnePower.

    In 2015, Orosz moved to Lesotho to work on OnePower full-time. The move coincided with OnePower’s successful bid to develop the first utility-scale solar project in Lesotho: a 20-megawatt project, pursued alongside OnePower’s minigrid work, that will sell electricity to Lesotho’s central grid. OnePower expects that project, named Neo 1, to start delivering power to the central grid next year.

    “It takes quite a lot of time and money to develop utility scale solar projects, but we’ve been told by investors and partners that seven years is not unusual,” Orosz says. “It kind of reminds me of the time it took to get a PhD — surprisingly long, but corroborated by others’ experiences.”

    In conjunction with the grid-scale project, OnePower also piloted the first privately financed, fully licensed minigrid in Lesotho. The company has also set up minigrids to help power six health care centers in the mountains of Lesotho.

    OnePower’s grid-scale project and its minigrids use industry-standard, large-format bifacial solar panels mounted on single-axis tracking substructures designed and built in Lesotho by OnePower. In the minigrids, the panels send energy to a powerhouse filled with lithium-ion batteries. From there, transmission lines bring the electricity to different villages, where it powers homes, businesses, schools, health clinics, police stations, churches, and more. A smart meter at each customer’s building tracks electricity usage, and customers use a phone app to pay for their electricity.
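    That metering-and-payment loop is simple to picture in code. Below is a hypothetical sketch of prepaid minigrid metering in Python; the account class, tariff, and figures are invented for illustration and are not OnePower’s actual system.

```python
# Hypothetical sketch of prepaid minigrid metering, loosely modeled on the
# smart-meter-plus-phone-app flow described above; names and tariff are invented.
from dataclasses import dataclass

@dataclass
class CustomerAccount:
    name: str
    balance_kwh: float  # prepaid energy credit purchased via the phone app

    def top_up(self, payment: float, tariff_per_kwh: float) -> None:
        """Convert a mobile-money payment into energy credit."""
        self.balance_kwh += payment / tariff_per_kwh

    def record_usage(self, kwh: float) -> bool:
        """Deduct metered consumption; return False if credit is exhausted."""
        if kwh > self.balance_kwh:
            self.balance_kwh = 0.0
            return False  # the meter would disconnect until the next top-up
        self.balance_kwh -= kwh
        return True

account = CustomerAccount("village_clinic", balance_kwh=5.0)
account.top_up(payment=50.0, tariff_per_kwh=10.0)   # currency units are illustrative
print(account.record_usage(3.2), round(account.balance_kwh, 1))  # True 6.8
```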

    OnePower secured funding for the projects from a network of private investors rather than through grants and donations. By paying the investors back, Orosz says OnePower will be showing that funding such projects can be a profitable investment in addition to an impactful one.

    That’s important because grants and donations will only take minigrid operators so far. Orosz says in order to provide reliable electricity to the entire continent of Africa, a huge amount of private investment is needed.

    “The goal is ultimately to prove that you can make this work: that you can generate electricity and sell it to a customer in Africa, and that revenue enables you to pay back the financier that helped you build the infrastructure in the first place,” Orosz says. “Once you close that loop, then it can scale. That’s the holy grail of minigrids.”

    Orosz believes OnePower is differentiated from other minigrid companies in that it develops and owns more of the value chain, such as the tracking substructures that allow solar panels to follow the sun. That vertical integration has helped the company continue operations during the pandemic, and the technical innovations his team developed at MIT ultimately help OnePower offer lower electricity prices to people in Lesotho.

    Turning the lights on

    OnePower has doubled its employees over the last year as construction on the eight minigrids ramps up. As his team stays busy rolling those projects out, Orosz is already exploring options for the next cluster of minigrids OnePower will build.

    “If we can solve the economics and logistics in Lesotho, then it should be a lot easier to replicate this in other markets,” Orosz says.

    The goal is to bring OnePower’s minigrids to the rural communities that would benefit from them the most. As the satellite image of Earth at night shows, that includes many unelectrified communities across sub-Saharan Africa.

    “We think Africans in rural areas should have the same quality of power as Africans in urban areas, and that should be the same quality power as everywhere else in the world,” Orosz says.

  • Energy storage important to creating affordable, reliable, deeply decarbonized electricity systems

    In deeply decarbonized energy systems utilizing high penetrations of variable renewable energy (VRE), energy storage is needed to keep the lights on and the electricity flowing when the sun isn’t shining and the wind isn’t blowing — when generation from these VRE resources is low or demand is high. The MIT Energy Initiative’s Future of Energy Storage study makes clear the need for energy storage and explores pathways using VRE resources and storage to reach decarbonized electricity systems efficiently by 2050.

    “The Future of Energy Storage,” a new multidisciplinary report from the MIT Energy Initiative (MITEI), urges government investment in sophisticated analytical tools for planning, operation, and regulation of electricity systems in order to deploy and use storage efficiently. Because storage technologies will have the ability to substitute for or complement essentially all other elements of a power system, including generation, transmission, and demand response, these tools will be critical to electricity system designers, operators, and regulators in the future. The study also recommends additional support for complementary staffing and upskilling programs at regulatory agencies at the state and federal levels. 


    Why is energy storage so important?

    The MITEI report shows that energy storage makes deep decarbonization of reliable electric power systems affordable. “Fossil fuel power plant operators have traditionally responded to demand for electricity — in any given moment — by adjusting the supply of electricity flowing into the grid,” says MITEI Director Robert Armstrong, the Chevron Professor of Chemical Engineering and chair of the Future of Energy Storage study. “But VRE resources such as wind and solar depend on daily and seasonal variations as well as weather fluctuations; they aren’t always available to be dispatched to follow electricity demand. Our study finds that energy storage can help VRE-dominated electricity systems balance electricity supply and demand while maintaining reliability in a cost-effective manner — that in turn can support the electrification of many end-use activities beyond the electricity sector.”
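    The balancing role Armstrong describes can be illustrated with a toy dispatch model (far simpler than the study’s actual modeling): a battery charges on solar surplus and discharges on deficit. All profiles, sizes, and the charging efficiency below are invented.

```python
# Minimal illustration of storage balancing a VRE-heavy system: a battery
# absorbs solar surplus and covers deficits. All numbers are invented.
def dispatch(solar_mw, demand_mw, capacity_mwh, efficiency=0.9):
    soc = 0.0               # battery state of charge, MWh (1-hour timesteps)
    unserved = []
    for gen, load in zip(solar_mw, demand_mw):
        if gen >= load:     # surplus hour: charge, losing some energy to inefficiency
            soc = min(capacity_mwh, soc + (gen - load) * efficiency)
            unserved.append(0.0)
        else:               # deficit hour: discharge whatever the battery holds
            draw = min(soc, load - gen)
            soc -= draw
            unserved.append(load - gen - draw)
    return unserved

solar = [0, 5, 12, 14, 10, 2, 0, 0]    # MW over eight hours
demand = [4, 4, 6, 6, 6, 7, 8, 6]
print(dispatch(solar, demand, capacity_mwh=20))  # unmet demand shrinks to the edges
```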

    The three-year study is designed to help government, industry, and academia chart a path to developing and deploying electrical energy storage technologies as a way of encouraging electrification and decarbonization throughout the economy, while avoiding excessive or inequitable burdens.

    Focusing on three distinct regions of the United States, the study shows the need for a varied approach to energy storage and electricity system design in different parts of the country. Using modeling tools to look out to 2050, the study team also focuses beyond the United States, to emerging market and developing economy (EMDE) countries, particularly as represented by India. The findings highlight the powerful role storage can play in EMDE nations. These countries are expected to see massive growth in electricity demand over the next 30 years, due to rapid overall economic expansion and to increasing adoption of electricity-consuming technologies such as air conditioning. In particular, the study calls attention to the pivotal role battery storage can play in decarbonizing grids in EMDE countries that lack access to low-cost gas and currently rely on coal generation.

    The authors find that investment in VRE combined with storage is favored over new coal generation over the medium and long term in India, although existing coal plants may linger unless forced out by policy measures such as carbon pricing. 

    “Developing countries are a crucial part of the global decarbonization challenge,” says Robert Stoner, the deputy director for science and technology at MITEI and one of the report authors. “Our study shows how they can take advantage of the declining costs of renewables and storage in the coming decades to become climate leaders without sacrificing economic development and modernization.”

    The study examines four kinds of storage technologies: electrochemical, thermal, chemical, and mechanical. Some of these technologies, such as lithium-ion batteries, pumped storage hydro, and some thermal storage options, are proven and available for commercial deployment. The report recommends that the government focus R&D efforts on other storage technologies, which will require further development to be available by 2050 or sooner — among them, projects to advance alternative electrochemical storage technologies that rely on earth-abundant materials. It also suggests government incentives and mechanisms that reward success but don’t interfere with project management. The report calls for the federal government to change some of the rules governing technology demonstration projects to enable more projects on storage. Policies that require cost-sharing in exchange for intellectual property rights, the report argues, discourage the dissemination of knowledge. The report advocates for federal requirements for demonstration projects that share information with other U.S. entities.

    The report says many existing power plants that are being shut down can be converted to useful energy storage facilities by replacing their fossil fuel boilers with thermal storage and new steam generators. This retrofit can be done using commercially available technologies and may be attractive to plant owners and communities — using assets that would otherwise be abandoned as electricity systems decarbonize.  

    The study also looks at hydrogen and concludes that its use for storage will likely depend on the extent to which hydrogen is used in the overall economy. That broad use of hydrogen, the report says, will be driven by future costs of hydrogen production, transportation, and storage — and by the pace of innovation in hydrogen end-use applications. 

    The MITEI study predicts the distribution of hourly wholesale prices or the hourly marginal value of energy will change in deeply decarbonized power systems — with many more hours of very low prices and more hours of high prices compared to today’s wholesale markets. So the report recommends systems adopt retail pricing and retail load management options that reward all consumers for shifting electricity use away from times when high wholesale prices indicate scarcity, to times when low wholesale prices signal abundance. 
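    As a toy illustration of that recommendation (not a model from the report), the sketch below passes hourly wholesale prices through to consumers and schedules a flexible load in the cheapest hours; all numbers are invented.

```python
# Sketch of the retail-pricing idea above: expose hourly wholesale prices and
# shift a flexible load into the cheapest hours. Prices and load are invented.
wholesale = [120, 95, 15, 2, 0, 3, 40, 180]   # $/MWh over eight hours
flexible_load_hours = 3                        # e.g., water heating, EV charging

# Rank hours by price and run the flexible load in the cheapest ones.
cheapest = sorted(range(len(wholesale)), key=lambda h: wholesale[h])[:flexible_load_hours]
schedule = ["ON" if h in cheapest else "off" for h in range(len(wholesale))]
print(schedule)   # the load lands in the hours of VRE abundance (near-zero prices)
```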

    The Future of Energy Storage study is the ninth in MITEI’s “Future of” series, exploring complex and vital issues involving energy and the environment. Previous studies have focused on nuclear power, solar energy, natural gas, geothermal energy, and coal (with capture and sequestration of carbon dioxide emissions), as well as on systems such as the U.S. electric power grid. The Alfred P. Sloan Foundation and the Heising-Simons Foundation provided core funding for MITEI’s Future of Energy Storage study. MITEI members Equinor and Shell provided additional support.

  • MIT Climate “Plug-In” highlights first year of progress on MIT’s climate plan

    In a combined in-person and virtual event on Monday, members of the three working groups established last year under MIT’s “Fast Forward” climate action plan reported on the work they’ve been doing to meet the plan’s goals, including reaching net-zero carbon emissions by 2026.

    Introducing the session, Vice President for Research Maria Zuber said that “many universities have climate plans that are inward facing, mostly focused on the direct impacts of their operations on greenhouse gas emissions. And that is really important, but ‘Fast Forward’ is different in that it’s also outward facing — it recognizes climate change as a global crisis.”

    That, she said, “commits us to an all-of-MIT effort to help the world solve the super wicked problem in practice.” That means “helping the world to go as far as it can, as fast as it can, to deploy currently available technologies and policies to reduce greenhouse gas emissions,” while also quickly developing new tools and approaches to deal with the most difficult areas of decarbonization, she said.

    Significant strides have been made in this first year, according to Zuber. The Climate Grand Challenges competition, launched last year as part of the plan, has just named five flagship projects. “Each of these projects is potentially important in its own right, and is also exemplary of the kinds of bold thinking about climate solutions that the world needs,” she said.

    “We’ve also created new climate-focused institutions within MIT to improve accountability and transparency and to drive action,” Zuber said, including the Climate Nucleus, which comprises heads of labs and departments involved in climate-change work and is led by professors Noelle Selin and Anne White. The “Fast Forward” plan also established three working groups that report to the Climate Nucleus — on climate education, climate policy, and MIT’s carbon footprint — whose members spoke at Monday’s event.

    David McGee, a professor in the Department of Earth, Atmospheric and Planetary Sciences, co-director of MIT’s Terrascope program for first-year students, and co-chair of the education working group, said that over the last few years of Terrascope, “we’ve begun focusing much more explicitly on the experiences of, and the knowledge contained within, impacted communities … both for mitigation efforts and how they play out, and also adaptation.” Figuring out how to access the expertise of local communities “in a way that’s not extractive is a challenge that we face,” he added.

    Eduardo Rivera, managing director for MIT International Science and Technology Initiatives (MISTI) programs in several countries and a member of the education team, noted that about 1,000 undergraduates travel each year to work on climate and sustainability challenges. These include, for example, working with a lab in Peru assessing pollution in the Amazon, developing new insulation materials in Germany, developing affordable solar panels in China, working on carbon-capture technology in France or Israel, and many others, Rivera said. These are “unique opportunities to learn about the discipline, where the students can do hands-on work along with the professionals and the scientists in the front lines.” He added that MISTI has just launched a pilot project to help these students “to calculate their carbon footprint, to give them resources, and to understand individual responsibilities and collective responsibilities in this area.”

    Yujie Wang, a graduate student in architecture and an education working group member, said that during her studies she worked on a project focused on protecting biodiversity in Colombia, and also worked with a startup to reduce pesticide use in farming through digital monitoring. In Colombia, she said, she came to appreciate the value of interactions between researchers using satellite data and local organizations, institutions, and officials in fostering collaboration to solve common problems.

    The second panel addressed policy issues, as reflected by the climate policy working group. David Goldston, director of MIT’s Washington office, said “I think policy is totally central, in that for each part of the climate problem, you really can’t make progress without policy.” Part of that, he said, “involves government activities to help communities, and … to make sure the transition [involving the adoption of new technologies] is as equitable as possible.”

    Goldston said “a lot of the progress that’s been made already, whether it’s movement toward solar and wind energy and many other things, has been really prompted by government policy. I think sometimes people see it as a contest, should we be focusing on technology or policy, but I see them as two sides of the same coin. … You can’t get the technology you need into operation without policy tools, and the policy tools won’t have anything to work with unless technology is developed.”

    As for MIT, he said, “I think everybody at MIT who works on any aspect of climate change should be thinking about what’s the policy aspect of it, how could policy help them? How could they help policymakers? I think we need to coordinate better.” The Institute needs to be more strategic, he said, but “that doesn’t mean MIT advocating for specific policies. It means advocating for climate action and injecting a wide range of ideas into the policy arena.”

    Anushree Chaudhari, a student in economics and in urban studies and planning, said she has been learning about the power of negotiations in her work with Professor Larry Susskind. “What we’re currently working on is understanding why there are so many sources of local opposition to scaling renewable energy projects in the U.S.,” she explained. “Even though over 77 percent of the U.S. population actually is in support of renewables, and renewables are actually economically pretty feasible as their costs have come down in the last two decades, there’s still a huge social barrier to having them become the new norm,” she said. She emphasized that a fair and just energy transition will require listening to community stakeholders, including indigenous groups and low-income communities, and understanding why they may oppose utility-scale solar farms and wind farms.

    Joy Jackson, a graduate student in the Technology and Policy Program, said that the implementation of research findings into policy at state, local, and national levels is a “very messy, nonlinear, sort of chaotic process.” One avenue for research to make its way into policy, she said, is through formal processes, such as congressional testimony. But a lot is also informal, as she learned while working as an intern in government offices, where she and her colleagues reached out to professors, researchers, and technical experts of various kinds while in the very early stages of policy development.

    “The good news,” she said, “is there’s a lot of touch points.”

    The third panel featured members of the working group studying ways to reduce MIT’s own carbon footprint. Julie Newman, head of MIT’s Office of Sustainability and co-chair of that group, summed up MIT’s progress toward its stated goal of achieving net zero carbon emissions by 2026. “I can cautiously say we’re on track for that one,” she said. Despite headwinds in the solar industry due to supply chain issues, she said, “we’re well positioned” to meet that near-term target.

    As for working toward the 2050 target of eliminating all direct emissions, she said, it is “quite a challenge.” But under the leadership of Joe Higgins, the vice president for campus services and stewardship, MIT is implementing a number of measures, including deep energy retrofits, investments in high-performance buildings, an extremely efficient central utilities plant, and more.

    She added that MIT is particularly well-positioned in its thinking about scaling its solutions up. “A couple of years ago we approached a handful of local organizations, and over a couple of years have built a consortium to look at large-scale carbon reduction in the world. And it’s a brilliant partnership,” she said, noting that details are still being worked out and will be reported later.

    The work is challenging, because “MIT was built on coal, this campus was not built to get to zero carbon emissions.” Nevertheless, “we think we’re on track” to meet the ambitious goals of the Fast Forward plan, she said. “We’re going to have to have multiple pathways, because we may come to a pathway that may turn out not to be feasible.”

    Jay Dolan, head of facilities development at MIT’s Lincoln Laboratory, said the laboratory faces extra hurdles compared to the main MIT campus, as it occupies buildings that are owned and maintained by the U.S. Air Force, not MIT. His team is still at the data-gathering stage, he said, and a website set up to solicit suggestions for reducing the laboratory’s emissions received 70 suggestions within a few days, which are still being evaluated. “All that enthusiasm, along with the intelligence at the laboratory, is very promising,” he said.

    Peter Jacobson, a graduate student in Leaders for Global Operations, said that in his experience, projects that are most successful start not from a focus on the technology, but from collaborative efforts working with multiple stakeholders. “I think this is exactly why the Climate Nucleus and our working groups are so important here at MIT,” he said. “We need people tasked with thinking at this campus scale, figuring out what the needs and priorities of all the departments are and looking for those synergies, and aligning those needs across both internal and external stakeholders.”

    But, he added, “MIT’s complexity and scale of operations definitely poses unique challenges. Advanced research is energy hungry, and in many cases we don’t have the technology to decarbonize those research processes yet. And we have buildings of varying ages with varying stages of investment.” In addition, MIT has “a lot of people that it needs to feed, and that need to travel and commute, so that poses additional and different challenges.”

    Asked what individuals can do to help MIT in this process, Newman said, “Begin to leverage and figure out how you connect your research to informing our thinking on campus. We have channels for that.”

    Noelle Selin, co-chair of MIT’s Climate Nucleus and moderator of the third panel, said in conclusion: “We’re really looking for your input into all of these working groups and all of these efforts. This is a whole-of-campus effort. It’s a whole-of-world effort to address the climate challenge. So, please get in touch and use this as a call to action.”

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and just as important they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operations of the pioneering Alcator C-Mod tokamak device that operated from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • Absent legislative victory, the president can still meet US climate goals

    The most recent United Nations climate change report indicates that without significant action to mitigate global warming, the extent and magnitude of climate impacts — from floods to droughts to the spread of disease — could outpace the world’s ability to adapt to them. The latest effort to introduce meaningful climate legislation in the United States Congress, the Build Back Better bill, has stalled. The climate package in that bill — $555 billion in funding for climate resilience and clean energy — aims to reduce U.S. greenhouse gas emissions by about 50 percent below 2005 levels by 2030, the nation’s current Paris Agreement pledge. With prospects of passing a standalone climate package in the Senate far from assured, is there another pathway to fulfilling that pledge?

    Recent detailed legal analysis shows that there is at least one viable option for the United States to achieve the 2030 target without legislative action. Under Section 115 on International Air Pollution of the Clean Air Act, the U.S. Environmental Protection Agency (EPA) could assign emissions targets to the states that collectively meet the national goal. The president could simply issue an executive order to empower the EPA to do just that. But would that be prudent?

    A new study led by researchers at the MIT Joint Program on the Science and Policy of Global Change explores how, under a federally coordinated carbon dioxide emissions cap-and-trade program aligned with the U.S. Paris Agreement pledge and implemented through Section 115 of the Clean Air Act, the EPA might allocate emissions cuts among states. Recognizing that the Biden or any future administration considering this strategy would need to carefully weigh its benefits against its potential political risks, the study highlights the policy’s net economic benefits to the nation.

    The researchers calculate those net benefits by weighing the estimated total cost of carbon dioxide emissions reduction under the policy against the corresponding estimated expenditures that would be avoided as a result of the policy’s implementation — expenditures on health care due to particulate air pollution, and on society at large due to climate impacts.

    Assessing three carbon dioxide emissions allocation strategies for implementing Section 115 (each with legal precedent), with cap-and-trade program revenue returned to the states and distributed to state residents on an equal per-capita basis, the study finds that at the national level, the economic net benefits are substantial, ranging from $70 billion to $150 billion in 2030. The results appear in the journal Environmental Research Letters.

    “Our findings not only show significant net gains to the U.S. economy under a national emissions policy implemented through the Clean Air Act’s Section 115,” says Mei Yuan, a research scientist at the MIT Joint Program and lead author of the study. “They also show the policy impact on consumer costs may differ across states depending on the choice of allocation strategy.”

    The national price on carbon needed to achieve the policy’s emissions target, as well as the policy’s ultimate cost to consumers, are substantially lower than those found in studies a decade earlier, although in line with other recent studies. The researchers speculate that this is largely due to ongoing expansion of ambitious state policies in the electricity sector and declining renewable energy costs. The policy is also progressive, consistent with earlier studies, in that equal lump-sum distribution of allowance revenue to state residents generally leads to net benefits to lower-income households. Regional disparities in consumer costs can be moderated by the allocation of allowances among states.

    State-by-state emissions estimates for the study are derived from MIT’s U.S. Regional Energy Policy model, with electricity sector detail of the Renewable Energy Development System model developed by the U.S. National Renewable Energy Laboratory; air quality benefits are estimated using U.S. EPA and other models; and the climate benefits estimate is based on the social cost of carbon, the U.S. federal government’s assessment of the economic damages that would result from emitting one additional ton of carbon dioxide into the atmosphere (currently $51/ton, adjusted for inflation). 
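    To see how the social cost of carbon enters such an estimate, consider some back-of-the-envelope arithmetic. Only the $51/ton figure comes from the text above; the emissions reduction and the other dollar figures below are invented placeholders, not the study’s numbers.

```python
# Illustrative arithmetic only: valuing an emissions cut at the social cost of
# carbon cited above ($51/ton CO2). All other figures are invented.
scc_usd_per_ton = 51
reduction_tons = 100_000_000        # hypothetical: 100 million tons CO2 avoided
climate_benefit = scc_usd_per_ton * reduction_tons

health_benefit = 3.0e9              # hypothetical avoided health expenditures
abatement_cost = 4.0e9              # hypothetical total cost of the cuts
net_benefit = climate_benefit + health_benefit - abatement_cost

print(f"climate benefit: ${climate_benefit / 1e9:.1f}B")   # $5.1B
print(f"net benefit:     ${net_benefit / 1e9:.1f}B")       # $4.1B under these assumptions
```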

    “In addition to illustrating the economic, health, and climate benefits of a Section 115 implementation, our study underscores the advantages of a policy that imposes a uniform carbon price across all economic sectors,” says John Reilly, former co-director of the MIT Joint Program and a study co-author. “A national carbon price would serve as a major incentive for all sectors to decarbonize.”

  • How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040 we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.
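    Yildiz’s projection is, at heart, two compounding growth curves crossing. The toy calculation below reproduces that qualitative point; the starting points and growth rates are assumptions chosen for illustration, not figures from the decadal report.

```python
# Toy projection behind the qualitative claim above: if compute energy grows
# much faster than generation capacity, the curves cross. All rates invented.
compute_twh, supply_twh = 2000.0, 27000.0   # rough assumed 2020 starting points
year = 2020
while compute_twh < supply_twh and year < 2060:
    compute_twh *= 1.20    # assumed 20% annual growth in compute energy demand
    supply_twh *= 1.02     # assumed 2% annual growth in production capacity
    year += 1
print(year)   # crossover in the mid-2030s under these assumed rates
```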

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape, a traditional medium for primary data storage, combined with specialized hard drives (HDD) can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data based on data access patterns.

    Bahai stressed the significant energy saving gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running in full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” matrices in which most of the elements are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to a possible slowdown in the doubling of transistors on an integrated circuit — Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect.
Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
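    The two levers described here, skipping zero-valued work (sparsity) and cutting numeric precision, can be sketched in a few lines of NumPy. The matrix size, the 90 percent pruning fraction, and the use of operation counts and bytes as energy proxies are all illustrative assumptions.

```python
import numpy as np

# Sketch of two efficiency levers mentioned above: sparsity (skip zeros) and
# reduced precision (fp16 vs fp32). Sizes and the 90% pruning are invented.
rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
w[rng.random(w.shape) < 0.9] = 0.0           # prune ~90% of weights to zero

dense_macs = w.size                          # multiply-accumulates if done densely
sparse_macs = np.count_nonzero(w)            # only nonzero weights need computing
print(f"MACs: {dense_macs} dense vs {sparse_macs} if zeros are skipped")

fp32_bytes = w.nbytes
fp16_bytes = w.astype(np.float16).nbytes     # half the memory traffic per weight
print(f"Weight storage: {fp32_bytes} B fp32 vs {fp16_bytes} B fp16")
```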

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out compute at the edge as a prime energy hog. “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers, but it really starts at the device itself and the energy the devices use; then come home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Médard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products, just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize a search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
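    The loop described above (run a few expensive simulations, fit a cheap surrogate, optimize it, verify at the predicted optimum, repeat) is a generic surrogate-assisted pattern. Below is a minimal sketch assuming a one-dimensional parameter and a quadratic surrogate; the stand-in “expensive” function and every number are invented, and this is not the authors’ actual code.

```python
import numpy as np

# Generic surrogate-assisted optimization sketch: fit a cheap model to a few
# expensive runs, optimize the model, verify with one expensive run, repeat.
def expensive_flux_mismatch(x):              # stand-in for a turbulence-code run
    return (x - 1.7) ** 2 + 0.1 * np.sin(5 * x)

xs = list(np.linspace(0.0, 3.0, 4))          # a few initial expensive evaluations
ys = [expensive_flux_mismatch(x) for x in xs]

for _ in range(6):
    coeffs = np.polyfit(xs, ys, 2)           # cheap quadratic surrogate
    grid = np.linspace(0.0, 3.0, 500)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]   # optimize the surrogate
    xs.append(x_new)
    ys.append(expensive_flux_mismatch(x_new))  # verify with one expensive run

print(f"best x = {xs[np.argmin(ys)]:.3f}, mismatch = {min(ys):.4f}")
```

    The savings come from spending expensive evaluations only where the cheap model predicts an optimum, rather than sweeping the whole parameter space.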

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

    Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity and drive electric cars or drones or in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.
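    For reference, the overall balanced reactions for the two modes just described are:

```latex
% Overall balanced reactions for the two electrolysis modes described above.
\begin{align}
  2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
      && \text{(water splitting)} \\
  2\,\mathrm{CO_2} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{C_2H_4} + 3\,\mathrm{O_2}
      && \text{(CO$_2$ reduction to ethylene)}
\end{align}
```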

    There are three main types of electrolyzers. One type works at room temperature, but it has downsides: it’s inefficient and requires rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300 to 600 C and requiring mostly cheap materials like stainless steel. These cells have never been operated as efficiently as theory says they should be. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
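    Since each H2 molecule formed at the bottom electrode consumes two electrons, Faraday’s law ties hydrogen output directly to cell current. A minimal sketch, with an assumed current (the value is illustrative, not from the paper):

```python
# Faraday's-law sketch of the cell just described: every H2 molecule collected
# at the bottom electrode consumes two electrons. The current value is invented.
F = 96485.0                                  # Faraday constant, C/mol of electrons
current_a = 2.0                              # assumed cell current, amperes
h2_mol_per_s = current_a / (2 * F)           # 2 electrons per H2 molecule
h2_g_per_hour = h2_mol_per_s * 3600 * 2.016  # molar mass of H2 is ~2.016 g/mol
print(f"{h2_g_per_hour:.3f} g of H2 per hour at {current_a} A")  # ~0.075 g/h
```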

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.