More stories

  • Bridging careers in aerospace manufacturing and fusion energy, with a focus on intentional inclusion

    “A big theme of my life has been focusing on intentional inclusion and how I can create environments where people can really bring their whole authentic selves to work,” says Joy Dunn ’08. As the vice president of operations at Commonwealth Fusion Systems, an MIT spinout working to achieve commercial fusion energy, Dunn looks for solutions to the world’s greatest climate challenges — while creating an open and equitable work environment where everyone can succeed.

    This theme has been cultivated throughout her professional and personal life, including as a Young Global Leader at the World Economic Forum and as a board member at Out for Undergrad, an organization that works with LGBTQ+ college students to help them achieve their personal and professional goals. Through her careers both in aerospace and energy, Dunn has striven to instill a sense of equity and inclusion from the inside out.

    Developing a love for space

    Dunn’s childhood was shaped by space. “I was really inspired as a kid to be an astronaut,” she says, “and for me that never stopped.” Dunn’s parents — both of whom had careers in the aerospace industry — encouraged her from an early age to pursue her interests, from building model rockets to visiting the National Air and Space Museum to attending space camp. A large inspiration for this passion arose when she received a signed photo from Sally Ride — the first American woman in space — that read, “To Joy, reach for the stars.”

    As her interests continued to grow in middle school, she and her mom looked to see what it would take to become an astronaut, asking questions such as “what are the common career paths?” and “what schools did astronauts typically go to?” They quickly found that MIT was at the top of that list, and by seventh grade, Dunn had set her sights on the Institute. 

    After years of hard work, Dunn entered MIT in fall 2004 with a major in aeronautical and astronautical engineering (AeroAstro). At MIT, she remained fully committed to her passion while also expanding into other activities such as varsity softball, the MIT Undergraduate Association, and the Alpha Chi Omega sorority.

    One of the highlights of Dunn’s college career was Unified Engineering, a year-long course required for all AeroAstro majors that provides a foundational knowledge of aerospace engineering — culminating in a team competition where students design and build remote-controlled planes to be pitted against each other. “My team actually got first place, which was very exciting,” she recalls. “And I honestly give a lot of that credit to our pilot. He did a very good job of not crashing!” In fact, that pilot was Warren Hoburg ’08, a former assistant professor in AeroAstro and current NASA astronaut training for a mission on the International Space Station.

    Pursuing her passion at SpaceX

    Dunn’s undergraduate experience culminated with an internship at the aerospace manufacturing company SpaceX in summer 2008. “It was by far my favorite internship of the ones that I had in college. I got to work on really hands-on projects and had the same amount of responsibility as a full-time employee,” she says.

    By the end of the internship, she was hired as a propulsion development engineer for the Dragon spacecraft, where she helped to build the thrusters for the first Dragon mission. Eventually, she transferred to the role of manufacturing engineer. “A lot of what I’ve done in my life is building things and looking for process improvements,” she says, so it was a natural fit. From there, she rose through the ranks, becoming the senior manager of spacecraft manufacturing engineering, where she oversaw all the manufacturing, test, and integration engineers working on Dragon. “It was pretty incredible to go from building thrusters to building the whole vehicle,” she says.

    During her tenure, Dunn also co-founded SpaceX’s Women’s Network and its LGBT affinity group, Out and Allied. “It was about providing spaces for employees to get together and provide a sense of community,” she says. Through these groups, she helped start mentorship and community outreach programs, as well as helped grow the pipeline of women in leadership roles for the company.

    In spite of all her successes at SpaceX, she couldn’t help but think about what came next. “I had been at SpaceX for almost a decade and had these thoughts of, ‘do I want to do another tour of duty or look at doing something else?’ The main criteria I set for myself was to do something that is equally or more world-changing than SpaceX.”

    A pivot to fusion

    It was at this time in 2018 that Dunn received an email from a former mentor asking if she had heard about a fusion energy startup called Commonwealth Fusion Systems (CFS) that worked with the MIT Plasma Science and Fusion Center. “I didn’t know much about fusion at all,” she says. “I had heard about it as a science project that was still many, many years away as a viable energy source.”

    After learning more about the technology and company, “I was just like, ‘holy cow, this has the potential to be even more world-changing than what SpaceX is doing.’” She adds, “I decided that I wanted to spend my time and brainpower focusing on cleaning up the planet instead of getting off it.”

    After connecting with CFS CEO Bob Mumgaard SM ’15, PhD ’15, Dunn joined the company and returned to Cambridge as the head of manufacturing. While moving from the aerospace industry to fusion energy was a large shift, she said her first project — building a fusion-relevant, high-temperature superconducting magnet capable of achieving 20 tesla — tied back into her life of being a builder who likes to get her hands on things.

    Over the course of two years, she oversaw the production and scaling of the magnet manufacturing process. When she first came in, the magnets were being constructed in a time-consuming and manual way. “One of the things I’m most proud of from this project is teaching MIT research scientists how to think like manufacturing engineers,” she says. “It was a great symbiotic relationship. The MIT folks taught us the physics and science behind the magnets, and we came in to figure out how to make them into a more manufacturable product.”

    In September 2021, CFS tested this high-temperature superconducting magnet and achieved its goal of 20 tesla. This was a pivotal moment for the company that brought it one step closer to its goal of producing net-positive fusion power. Now, CFS has begun work on a new campus in Devens, Massachusetts, to house its manufacturing operations and the SPARC fusion device. Dunn plays a central role in this expansion as well. In March 2021, she was promoted to the head of operations, which expanded her responsibilities beyond managing manufacturing to include facilities, construction, safety, and quality. “It’s been incredible to watch the campus grow from a pile of dirt … into full buildings.”

    In addition to the groundbreaking work, Dunn highlights the culture of inclusiveness as something that sets CFS apart for her. “One of the main reasons that drew me to CFS was hearing from the company founders about their thoughts on diversity, equity, and inclusion, and how they wanted to make that a key focus for their company. That’s been so important in my career, and I’m really excited to see how much that’s valued at CFS.” The company has carried this out through programs such as Fusion Inclusion, an initiative that aims to build a strong and inclusive community from the inside out.

    Dunn stresses “the impact that fusion can have on our world and for addressing issues of environmental injustice through an equitable distribution of power and electricity.” She adds, “That’s a huge lever that we have. I’m excited to watch CFS grow and for us to make a really positive impact on the world in that way.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model which predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow-physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used the physics-based, data-assisted modeling of the flow within the wind farm and the resulting power production of each turbine, given different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its rotation about the vertical axis) to align as closely as possible with the wind. But in the new system, for example, the team has found that by turning one turbine just slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units will more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
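
    The intuition behind that trade-off can be captured with a toy wake-steering calculation. The sketch below is not the study’s validated flow model; it uses a simple Jensen-type wake deficit, a cosine yaw-loss term, and illustrative parameters of my own choosing, just to show how a modest upstream yaw offset can raise the combined output of two aligned turbines.

    ```python
    # Toy illustration of greedy vs. cooperative yaw control for two aligned
    # turbines. This is NOT the validated flow model from the study -- just a
    # Jensen-type wake deficit plus a cosine yaw-loss term with illustrative
    # parameters, to show why a small upstream yaw offset can raise farm output.
    import numpy as np

    def farm_power(yaw1_deg, spacing_diam=5.0, ct=0.8,
                   wake_decay=0.05, deflection_gain=0.5):
        """Relative power of a two-turbine row with the upstream unit yawed."""
        yaw1 = np.radians(yaw1_deg)
        p1 = np.cos(yaw1) ** 3                  # common approximation for yaw loss
        # Jensen wake: velocity deficit at the downstream rotor.
        deficit = (1 - np.sqrt(1 - ct)) / (1 + 2 * wake_decay * spacing_diam) ** 2
        # Yawing deflects the wake sideways, so less of it hits the downstream rotor.
        overlap = max(0.0, 1.0 - deflection_gain * abs(np.sin(yaw1)) * spacing_diam / 2)
        u2_frac = 1 - deficit * overlap         # downstream wind speed as a fraction
        p2 = u2_frac ** 3                       # power scales with wind speed cubed
        return p1 + p2

    greedy = farm_power(0.0)                    # each turbine faces the wind head-on
    offsets = np.arange(0.0, 31.0, 1.0)
    best = max(offsets, key=farm_power)         # coarse one-degree search
    print(f"greedy total:        {greedy:.3f}")
    print(f"yaw {best:.0f} deg, total: {farm_power(best):.3f}")
    ```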

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.
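
    As a rough consistency check, those headline numbers can be combined directly. Everything below is derived only from the figures quoted in this article; the implied totals are my back-of-envelope reading, not values reported by the researchers.

    ```python
    # Back-of-envelope check using only the figures quoted above.
    gain_frac   = 0.012     # 1.2 percent fleet-wide energy gain
    extra_twh   = 31.0      # additional terawatt-hours per year
    turbines_eq = 3600      # "equivalent to installing" this many turbines
    revenue_usd = 0.95e9    # roughly $950 million per year

    implied_fleet_twh = extra_twh / gain_frac              # implied global wind output
    gwh_per_turbine   = extra_twh * 1e3 / turbines_eq      # output per turbine-equivalent
    usd_per_mwh       = revenue_usd / (extra_twh * 1e6)    # 1 TWh = 1e6 MWh

    print(f"implied global wind generation: {implied_fleet_twh:,.0f} TWh/yr")
    print(f"per equivalent turbine:         {gwh_per_turbine:.1f} GWh/yr")
    print(f"implied electricity value:      ${usd_per_mwh:.0f}/MWh")
    ```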

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and working to improve the operational instructions they derive from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power Private Limited in Haryana, India, and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has bedeviled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or are designing nuclear reactors, one phenomenon — boiling — is vital for efficient execution of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a blanket of vapor that prevents further heat transfer? This phenomenon is known as the boiling crisis. In a nuclear reactor, it would lead to runaway heating and failure of the fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because while models abound, the measurement of related phenomena to prove or disprove these models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”
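
    One classic example of such a model is Zuber’s hydrodynamic correlation for the critical heat flux at which pool boiling breaks down. The short sketch below evaluates it for saturated water at atmospheric pressure using standard textbook property values (not data from Bucci’s experiments), just to show the order of magnitude involved.

    ```python
    # Zuber's classic critical-heat-flux (CHF) correlation for saturated pool
    # boiling -- one of the early models mentioned above. Property values are
    # textbook numbers for water at 1 atm, not data from the MIT experiments.
    g     = 9.81       # gravity, m/s^2
    h_fg  = 2.257e6    # latent heat of vaporization, J/kg
    rho_l = 958.0      # liquid density, kg/m^3
    rho_v = 0.598      # vapor density, kg/m^3
    sigma = 0.0589     # surface tension, N/m

    # q''_CHF = 0.131 * h_fg * rho_v^0.5 * [sigma * g * (rho_l - rho_v)]^0.25
    q_chf = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25

    print(f"Zuber CHF for water at 1 atm: {q_chf/1e6:.2f} MW/m^2")
    # ~1.1 MW/m^2: above this heat flux a vapor film forms and heat transfer collapses.
    ```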

    But, over the past few years, Bucci and his team have been developing diagnostics that can measure the phenomena related to boiling and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The grant award from the U.S. DoE for Nuclear Energy Projects will aid in this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher. His father used to have a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, a research subject he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested work on the boiling crisis. This time Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived just a couple of days before the Boston Marathon bombing in 2013 with a round-trip ticket. But Bucci has stayed ever since, moving on to become a research scientist and then associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”
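
    Bucci does not spell out the algorithm here, but a common way to build such a loop is uncertainty-driven active learning: fit a surrogate model to the data collected so far, then run the next experiment where the model is least certain. The sketch below is purely illustrative — the “experiment” is a stand-in analytic function and the surrogate choice is mine — and is not the lab’s actual software.

    ```python
    # Toy active-learning loop in the spirit described above: a surrogate model
    # picks the next "experiment" where its predictive uncertainty is largest.
    # The boiling "experiment" here is a stand-in analytic function, and the
    # whole setup is illustrative -- not the actual autonomous-lab software.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def run_experiment(heat_flux):
        """Stand-in for a boiling experiment: returns a noisy 'measurement'."""
        return np.sin(3 * heat_flux) + 0.5 * heat_flux + np.random.normal(0, 0.05)

    candidates = np.linspace(0, 2, 200).reshape(-1, 1)   # possible operating points
    X = [[0.1], [1.9]]                                   # two seed experiments
    y = [run_experiment(x[0]) for x in X]

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.05**2)
    for step in range(8):
        gp.fit(np.array(X), np.array(y))
        _, std = gp.predict(candidates, return_std=True)
        next_x = candidates[np.argmax(std)]              # most informative next point
        X.append(list(next_x))
        y.append(run_experiment(next_x[0]))
        print(f"step {step}: ran experiment at {next_x[0]:.2f}")
    ```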

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • Making hydropower plants more sustainable

    Growing up on a farm in Texas, siblings Gia Schneider ’99 and Abe Schneider ’02, SM ’03 always had something to do. But every Saturday at 2 p.m., no matter what, the family would go down to a local creek to fish, build rock dams and rope swings, and enjoy nature.

    Eventually the family began going to a remote river in Colorado each summer. The river forked in two; one side was managed by ranchers who destroyed natural features like beaver dams, while the other side remained untouched. The family noticed the fishing was better on the preserved side, which led Abe to try measuring the health of the two river ecosystems. In high school, he co-authored a study showing there were more beneficial insects in the bed of the river with the beaver dams.

    The experience taught both siblings a lesson that has stuck. Today they are the co-founders of Natel Energy, a company attempting to mimic natural river ecosystems with hydropower systems that are more sustainable than conventional hydro plants.

    “The big takeaway for us, and what we’ve been doing all this time, is thinking of ways that infrastructure can help increase the health of our environment — and beaver dams are a good example of infrastructure that wouldn’t otherwise be there that supports other populations of animals,” Abe says. “It’s a motivator for the idea that hydropower can help improve the environment rather than destroy the environment.”

    Through new, fish-safe turbines and other features designed to mimic natural river conditions, the founders say their plants can bridge the gap between power-plant efficiency and environmental sustainability. By retrofitting existing hydropower plants and developing new projects, the founders believe they can supercharge a hydropower industry that is by far the largest source of renewable electricity in the world but has not grown in energy generation as much as wind and solar in recent years.

    “Hydropower plants are built today with only power output in mind, as opposed to the idea that if we want to unlock growth, we have to solve for both efficiency and river sustainability,” Gia says.

    A life’s mission

    The origins of Natel came not from a single event but from a lifetime of events. Abe and Gia’s father was an inventor and renewable energy enthusiast who designed and built the log cabin they grew up in. With no television, the kids’ preferred entertainment was reading books or being outside. The water in their house was pumped by power generated using a mechanical windmill on the north side of the house.

    “We grew up hanging clothes on a line, and it wasn’t because we were too poor to own a dryer, but because everything about our existence and our use of energy was driven by the idea that we needed to make conscious decisions about sustainability,” Abe says.

    One of the things that fascinated both siblings was hydropower. In high school, Abe recalls bugging his friend who was good at math to help him with designs for new hydro turbines.

    Both siblings admit coming to MIT was a major culture shock, but they loved the atmosphere of problem solving and entrepreneurship that permeated the campus. Gia came to MIT in 1995 and majored in chemical engineering while Abe followed three years later and majored in mechanical engineering for both his bachelor’s and master’s degrees.

    All the while, they never lost sight of hydropower. In the 1998 MIT $100K Entrepreneurship Competition (which was the $50K at the time), they pitched an idea for hydropower plants based on a linear turbine design. They were named finalists in the competition, but still wanted more industry experience before starting a company. After graduation, Abe worked as a mechanical engineer and did some consulting work with the operators of small hydropower plants while Gia worked at the energy desks of a few large finance companies.

    In 2009, the siblings, along with their late father, Daniel, received a small business grant of $200,000 and formally launched Natel Energy.

    Between 2009 and 2019, the founders worked on a linear turbine design that Abe describes as turbines on a conveyor belt. They patented and deployed the system on a few sites, but the problem of ensuring safe fish passage remained.

    Then the founders were doing some modeling that suggested they could achieve high power plant efficiency using an extremely rounded edge on a turbine blade — as opposed to the sharp blades typically used for hydropower turbines. The insight made them realize if they didn’t need sharp blades, perhaps they didn’t need a complex new turbine.

    “It’s so counterintuitive, but we said maybe we can achieve the same results with a propeller turbine, which is the most common kind,” Abe says. “It started out as a joke — or a challenge — and I did some modeling and rapidly realized, ‘Holy cow, this actually could work!’ Instead of having a powertrain with a decade’s worth of complexity, you have a powertrain that has one moving part, and almost no change in loading, in a form factor that the whole industry is used to.”

    The turbine Natel developed features thick blades that allow more than 99 percent of fish to pass through safely, according to third-party tests. Natel’s turbines also allow for the passage of important river sediment and can be coupled with structures that mimic natural features of rivers like log jams, beaver dams, and rock arches.

    “We want the most efficient machine possible, but we also want the most fish-safe machine possible, and that intersection has led to our unique intellectual property,” Gia says.

    Supercharging hydropower

    Natel has already installed two versions of its latest turbine, what it calls the Restoration Hydro Turbine, at existing plants in Maine and Oregon. The company hopes that by the end of this year, two more will be deployed, including one in Europe, a key market for Natel because of its stronger environmental regulations for hydropower plants.

    Since their installation, the founders say the first two turbines have converted more than 90 percent of the energy available in the water into energy at the turbine, a comparable efficiency to conventional turbines.

    Looking forward, Natel believes its systems have a significant role to play in boosting the hydropower industry, which is facing increasing scrutiny and environmental regulation that could otherwise close down many existing plants. For example, the founders say that hydropower plants the company could potentially retrofit across the U.S. and Europe have a total capacity of about 30 gigawatts, enough to power millions of homes.

    Natel also has ambitions to build entirely new plants on the many nonpowered dams around the U.S. and Europe. (Currently only 3 percent of the United States’ 80,000 dams are powered.) The founders estimate their systems could generate about 48 gigawatts of new electricity across the U.S. and Europe — the equivalent of more than 100 million solar panels.
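
    As a rough check of how those capacity figures translate into the comparisons above, the arithmetic below uses the quoted 30 gigawatts and 48 gigawatts together with typical values I am assuming for panel wattage, hydro capacity factor, and household consumption; none of these assumptions come from Natel.

    ```python
    # Rough sanity check of the capacity figures quoted above. Panel wattage,
    # hydro capacity factor, and household consumption are typical values I am
    # assuming for illustration; they are not numbers from Natel.
    retrofit_gw       = 30      # retrofittable capacity quoted (US and Europe)
    new_build_gw      = 48      # potential at nonpowered dams, quoted
    panel_watts       = 450     # assumed nameplate rating of a modern solar module
    hydro_cap_factor  = 0.5     # assumed average hydropower capacity factor
    home_mwh_per_year = 10.6    # assumed annual electricity use of a typical US home

    panels_equiv = new_build_gw * 1e9 / panel_watts
    annual_gwh   = retrofit_gw * 8760 * hydro_cap_factor     # GW * hours = GWh per year
    homes_equiv  = annual_gwh * 1e3 / home_mwh_per_year      # GWh -> MWh, then per home

    print(f"{new_build_gw} GW is roughly {panels_equiv/1e6:.0f} million {panel_watts} W panels (nameplate)")
    print(f"{retrofit_gw} GW could supply about {homes_equiv/1e6:.0f} million homes "
          f"at a {hydro_cap_factor:.0%} capacity factor")
    ```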

    “We’re looking at numbers that are pretty meaningful,” Gia says. “We could substantially add to the existing installed base while also modernizing the existing base to continue to be productive while meeting modern environmental requirements.”

    Overall, the founders see hydropower as a key technology in our transition to sustainable energy, a sentiment echoed by recent MIT research.

    “Hydro today supplies the bulk of electricity reliability services in a lot of these areas — things like voltage regulation, frequency regulation, storage,” Gia says. “That’s key to understand: As we transition to a zero-carbon grid, we need a reliable grid, and hydro has a very important role in supporting that. Particularly as we think about making this transition as quickly as we can, we’re going to need every bit of zero-emission resources we can get.”

  • Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC, one lab area revealing on her left the remains of the Alcator C-Mod tokamak, which ran for decades, and on her right the testing area for new superconducting magnets crucial to SPARC, designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, she was able to work out what she was composing for her posters and her final project by trying it out on her friends in the blog.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”

  • Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    The term perovskite refers not to a specific material, like silicon or cadmium telluride, other leading contenders in the photovoltaic realm, but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.
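
    That geometric limit is often quantified with the Goldschmidt tolerance factor. The sketch below evaluates it for two well-known A-site choices in a lead iodide perovskite, using commonly cited effective ionic radii; the specific values are approximate and included only for illustration, not taken from the MIT group’s work.

    ```python
    # Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)):
    # a simple geometric screen for whether an A-B-X combination "fits" the
    # perovskite structure (roughly 0.8 < t < 1.0 for a stable 3D lattice).
    # Radii below are commonly cited effective values in angstroms, used here
    # only for illustration.
    from math import sqrt

    radii = {
        "MA": 2.17,   # methylammonium (A-site molecular cation)
        "Cs": 1.88,   # cesium (A-site)
        "Pb": 1.19,   # lead(II) (B-site)
        "I":  2.20,   # iodide (X-site)
    }

    def tolerance_factor(r_a, r_b, r_x):
        return (r_a + r_x) / (sqrt(2) * (r_b + r_x))

    for a in ("MA", "Cs"):
        t = tolerance_factor(radii[a], radii["Pb"], radii["I"])
        print(f"{a}PbI3: t = {t:.2f}")
    # MAPbI3 lands near t ~ 0.9, inside the perovskite window; a much larger
    # A-site molecule pushes t past ~1 and distorts the lattice, as described above.
    ```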

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, according to Buonassisi.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their great tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to a much faster development of new alternatives, says Buonassisi, who was a co-author of that research.

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that, thanks to their much lower initial cost, perovskites would be economically viable as a substitute for silicon in large, utility-scale solar farms once they reach a usable lifetime of at least a decade.

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, it has already achieved efficiencies comparable to levels that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material is almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1,000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

  • Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?
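
    Put roughly in numbers: applying the 15 to 20 percent reduction to the projected 2.6-fold baseline growth (my reading of the figures above, for illustration only) still leaves hard-to-abate emissions at about twice today’s level by 2050.

    ```python
    # Rough arithmetic on the projections quoted above. Applying the 15-20
    # percent policy-driven cut to the ~2.6x business-as-usual growth in
    # hard-to-abate emissions is my reading of the text, for illustration only.
    baseline_growth_2050 = 2.6           # projected 2050 emissions vs. 2020 levels
    policy_cuts          = (0.15, 0.20)  # electrification, gas, and efficiency scenarios

    for cut in policy_cuts:
        net = baseline_growth_2050 * (1 - cut)
        print(f"with a {cut:.0%} cut, 2050 emissions are still about {net:.1f}x 2020 levels")
    # Even the deeper cut leaves emissions roughly double today's, which is why
    # the study points to carbon pricing high enough to make CCS viable
    # (enabling reductions of ~80 percent below current levels by 2050).
    ```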

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”

  • Tapping into the million-year energy source below our feet

    There’s an abandoned coal power plant in upstate New York that most people regard as a useless relic. But MIT’s Paul Woskov sees things differently.

    Woskov, a research engineer in MIT’s Plasma Science and Fusion Center, notes the plant’s power turbine is still intact and the transmission lines still run to the grid. Using an approach he’s been working on for the last 14 years, he’s hoping it will be back online, completely carbon-free, within the decade.

    In fact, Quaise Energy, the company commercializing Woskov’s work, believes if it can retrofit one power plant, the same process will work on virtually every coal and gas power plant in the world.

    Quaise is hoping to accomplish those lofty goals by tapping into the energy source below our feet. The company plans to vaporize enough rock to create the world’s deepest holes and harvest geothermal energy at a scale that could satisfy human energy consumption for millions of years. They haven’t yet solved all the related engineering challenges, but Quaise’s founders have set an ambitious timeline to begin harvesting energy from a pilot well by 2026.

    The plan would be easier to dismiss as unrealistic if it were based on a new and unproven technology. But Quaise’s drilling systems center around a microwave-emitting device called a gyrotron that has been used in research and manufacturing for decades.

    “This will happen quickly once we solve the immediate engineering problems of transmitting a clean beam and having it operate at a high energy density without breakdown,” explains Woskov, who is not formally affiliated with Quaise but serves as an advisor. “It’ll go fast because the underlying technology, gyrotrons, are commercially available. You could place an order with a company and have a system delivered right now — granted, these beam sources have never been used 24/7, but they are engineered to be operational for long time periods. In five or six years, I think we’ll have a plant running if we solve these engineering problems. I’m very optimistic.”

    Woskov and many other researchers have been using gyrotrons to heat material in nuclear fusion experiments for decades. It wasn’t until 2008, however, after the MIT Energy Initiative (MITEI) published a request for proposals on new geothermal drilling technologies, that Woskov thought of using gyrotrons for a new application.

    “[Gyrotrons] haven’t been well-publicized in the general science community, but those of us in fusion research understood they were very powerful beam sources — like lasers, but in a different frequency range,” Woskov says. “I thought, why not direct these high-power beams, instead of into fusion plasma, down into rock and vaporize the hole?”
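
    The scale of the idea can be bounded with simple energy accounting: beam power divided by the energy needed to vaporize a unit volume of rock gives a volumetric removal rate, and dividing by the borehole cross-section gives an advance rate. The power level, bore diameter, and specific vaporization energy below are illustrative assumptions of mine, not Quaise or Woskov specifications, and the estimate ignores losses.

    ```python
    # Simple energy accounting for millimeter-wave drilling: beam power divided
    # by the energy needed to vaporize rock gives a volumetric removal rate.
    # The power, bore size, and specific energy below are illustrative
    # assumptions (and ignore losses); they are not Quaise's actual specifications.
    import math

    beam_power_w      = 1.0e6    # assumed 1 MW gyrotron
    bore_diameter_m   = 0.20     # assumed borehole diameter
    vaporize_j_per_m3 = 25e9     # assumed ~25 kJ/cm^3 to fully vaporize rock

    area_m2  = math.pi * (bore_diameter_m / 2) ** 2
    vol_rate = beam_power_w / vaporize_j_per_m3          # m^3 of rock per second
    advance  = vol_rate / area_m2                        # borehole advance, m/s

    print(f"advance rate: {advance*3600:.1f} m/hour")
    print(f"time to 20 km: {20_000 / (advance*3600*24):.0f} days of continuous drilling")
    ```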

    As power from other renewable energy sources has exploded in recent decades, geothermal energy has plateaued, mainly because geothermal plants only exist in places where natural conditions allow for energy extraction at relatively shallow depths of up to 400 feet beneath the Earth’s surface. At a certain point, conventional drilling becomes impractical because deeper crust is both hotter and harder, which wears down mechanical drill bits.

    Woskov’s idea to use gyrotron beams to vaporize rock sent him on a research journey that has never really stopped. With some funding from MITEI, he began running tests, quickly filling his office with small rock formations he’d blasted with millimeter waves from a small gyrotron in MIT’s Plasma Science and Fusion Center.

    Woskov displaying samples in his lab in 2016. (Photo: Paul Rivenberg)

    Around 2018, Woskov’s rocks got the attention of Carlos Araque ’01, SM ’02, who had spent his career in the oil and gas industry and was the technical director of MIT’s investment fund The Engine at the time.

    That year, Araque and Matt Houde, who’d been working with geothermal company AltaRock Energy, founded Quaise. Quaise was soon given a grant by the Department of Energy to scale up Woskov’s experiments using a larger gyrotron.

    With the larger machine, the team hopes to vaporize a hole 10 times the depth of Woskov’s lab experiments. That is expected to be accomplished by the end of this year. After that, the team will vaporize a hole 10 times the depth of the previous one — what Houde calls a 100-to-1 hole.

    “That’s something [the DOE] is particularly interested in, because they want to address the challenges posed by material removal over those greater lengths — in other words, can we show we’re fully flushing out the rock vapors?” Houde explains. “We believe the 100-to-1 test also gives us the confidence to go out and mobilize a prototype gyrotron drilling rig in the field for the first field demonstrations.”

    Tests on the 100-to-1 hole are expected to be completed sometime next year. Quaise is also hoping to begin vaporizing rock in field tests late next year. The short timeline reflects the progress Woskov has already made in his lab.

    Although more engineering research is needed, ultimately, the team expects to be able to drill and operate these geothermal wells safely. “We believe, because of Paul’s work at MIT over the past decade, that most if not all of the core physics questions have been answered and addressed,” Houde says. “It’s really engineering challenges we have to answer, which doesn’t mean they’re easy to solve, but we’re not working against the laws of physics, to which there is no answer. It’s more a matter of overcoming some of the more technical and cost considerations to making this work at a large scale.”

    The company plans to begin harvesting energy from pilot geothermal wells that reach rock temperatures of up to 500 degrees Celsius by 2026. From there, the team hopes to begin repurposing coal and natural gas plants using its system.

    “We believe, if we can drill down to 20 kilometers, we can access these super-hot temperatures in greater than 90 percent of locations across the globe,” Houde says.
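
    The 20-kilometer figure is consistent with typical continental geothermal gradients of roughly 25 to 30 degrees Celsius per kilometer; these are textbook averages rather than site-specific data, as the quick check below notes.

    ```python
    # Why ~20 km: with a typical continental geothermal gradient of roughly
    # 25-30 C per km (textbook averages, not site-specific data), super-hot
    # rock temperatures around 500 C are reached at depths near 20 km.
    surface_temp_c = 15
    for gradient_c_per_km in (25, 30):
        depth_km = (500 - surface_temp_c) / gradient_c_per_km
        print(f"at {gradient_c_per_km} C/km, 500 C is reached near {depth_km:.0f} km")
    ```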

    Quaise’s work with the DOE is addressing what it sees as the biggest remaining questions about drilling holes of unprecedented depth and pressure, such as material removal and determining the best casing to keep the hole stable and open. For the latter problem of well stability, Houde believes additional computer modeling is needed and expects to complete that modeling by the end of 2024.

    By drilling the holes at existing power plants, Quaise will be able to move faster than if it had to get permits to build new plants and transmission lines. And by making its millimeter-wave drilling equipment compatible with the existing global fleet of drilling rigs, the company will also be able to tap into the oil and gas industry’s global workforce.

    “At these high temperatures [we’re accessing], we’re producing steam very close to, if not exceeding, the temperature that today’s coal and gas-fired power plants operate at,” Houde says. “So, we can go to existing power plants and say, ‘We can replace 95 to 100 percent of your coal use by developing a geothermal field and producing steam from the Earth, at the same temperature you’re burning coal to run your turbine, directly replacing carbon emissions.’”

    Transforming the world’s energy systems in such a short timeframe is something the founders see as critical to help avoid the most catastrophic global warming scenarios.

    “There have been tremendous gains in renewables over the last decade, but the big picture today is we’re not going nearly fast enough to hit the milestones we need for limiting the worst impacts of climate change,” Houde says. “[Deep geothermal] is a power resource that can scale anywhere and has the ability to tap into a large workforce in the energy industry to readily repackage their skills for a totally carbon free energy source.”