More stories

  • Commercializing next-generation nuclear energy technology

    All of the nuclear power plants operating in the U.S. today were built using the same general formula. For one thing, companies made their reactors big, with power capacities measured in the hundreds of megawatts. They also relied heavily on funding from the federal government, which through large grants and lengthy application processes has dictated many aspects of nuclear plant design and development.
    That landscape has had varying degrees of success over the years, but it’s never been particularly inviting for new companies interested in deploying unique technologies.
    Now the startup Oklo is forging a new path to building innovative nuclear power plants that meet federal safety regulations. Earlier this year, the company became the first to get its application for an advanced nuclear reactor accepted by the U.S. Nuclear Regulatory Commission (NRC). The acceptance was the culmination of a novel application process that set a number of milestones in the industry, and it has positioned Oklo to build an advanced reactor that differs in several important ways from the nuclear power plants currently operating in the country.
    Conventional reactors use moderators like water to slow neutrons down before they split, or fission, uranium and plutonium atoms. Oklo’s reactors won’t use moderators, allowing neutrons to move faster and enabling the construction of much smaller plants.
    Faster-moving neutrons can sustain nuclear fission with a different type of fuel. Compared to the fuel in traditional reactors, Oklo’s fuel will be enriched with a much higher concentration of the uranium-235 isotope, which fissions more easily than the more common uranium-238. The added proportion of uranium-235 allows Oklo’s reactor to run for longer periods without having to refuel.
    As a result of these differences, Oklo’s powerhouses will bear little resemblance to conventional nuclear plants. The company’s first reactor, dubbed the Aurora, will be housed in an unassuming A-frame building hundreds of times smaller than traditional nuclear plants, and it will run on used fuel recovered from an experimental reactor at the Idaho National Laboratory that was shut down in 1994. Oklo says the plant will run for 20 years without having to refuel.
    But perhaps the most distinctive aspect of Oklo is its approach to commercialization. In many ways, the Silicon Valley-based company has cultivated a startup mindset, eschewing government grants in favor of smaller, venture capital-backed funding rounds and iterating on its designs as it moves through the application process much more quickly than its predecessors.
    “Newness was favorable because it shed some of the legacy inertia around how things have been done in the past, and I thought that was an important way of modernizing the commercial approach,” says Oklo CEO Jacob DeWitte SM ’11, PhD ’14, who co-founded the company with Caroline Cochran SM ’10.
    Now Oklo is hoping its progress will encourage others to pursue new approaches in the nuclear power industry.
    “If we can modernize the way we meet these regulations and take advantage of the benefits and characteristics of these next-gen designs, we can start to paint a whole new picture here,” DeWitte says.
    Charting a new path
    DeWitte came to MIT in 2008 and studied advanced reactors during work for his master’s degree. For his PhD, he considered ways to extend the lifetime and power output of the large reactors already in use around the world.
    But while DeWitte studied the big reactors of today, he was increasingly drawn to the idea of commercializing the small reactors of tomorrow.
    “At MIT, through the projects and extracurriculars, I learned more about how the energy ecosystem works, how the startup model works, how the venture finance model works, and with all these different pieces I started to formulate the idea that became the seed for Oklo,” DeWitte says.
    What DeWitte learned about the nuclear power landscape was not particularly encouraging for startups. The industry is plagued with stories of plant construction taking a decade or more, with cost overruns in the billions.
    In the U.S., the Nuclear Regulatory Commission sets design standards for reactors and issues guidance for meeting those standards. But the guidance was created for the large reactors that have been the norm in the industry for more than 50 years, making it poorly suited to help companies interested in building smaller reactors based on different technology.
    DeWitte began thinking about starting an advanced nuclear company while he was still a PhD student. In 2013 he partnered with Cochran and others from MIT, and the team participated in the MIT $100K Entrepreneurship Competition and the MIT Clean Energy Prize, where Oklo got early feedback and validation, including winning the energy track of the $100K.
    Oklo’s reactor design changed considerably over the years as DeWitte and Cochran — the only co-founders to stick with the company — worked first with advisors at MIT, then with industry experts, and eventually with officials at the NRC.
    “The idea was if we take this technology, we start small and use an iterative approach to tech development and a product focused approach, kind of like what Tesla did with the Roadster [electric car model] before moving to others,” DeWitte says. “That seemed to yield an interesting way of getting some initial validation points and could be done at a higher cost efficiency, so less cash needed, and that could incrementally fit with the venture capital financing model.”
    Oklo raised small funding rounds in 2013 and 2014 as the company went through the MassChallenge and Y Combinator startup accelerators.
    In 2016, the Department of Energy (DOE) did some innovating of its own, beginning an industry-led effort to build new approval processes for advanced nuclear reactor applications. Two years later, Oklo piloted the new structure. The process resulted in Oklo developing a novel application and becoming the first company to get a combined license application to build a power plant accepted by the NRC since 2009.
    “We had to look at regulations with a fresh eye and not through the distortion of everything that had been done in the past,” DeWitte says. “In other words, we had to find more efficient ways to meet the regulations.”
    Leading by example
    Oklo’s first reactor will generate 1.5 megawatts of electric power, although later versions of the company’s reactor could generate much more.
    The company’s first reactor will also use a unique uranium fuel source provided by the Idaho National Laboratory. Natural uranium consists of more than 99 percent uranium-238 and about 0.7 percent uranium-235. In conventional nuclear reactors, uranium is enriched to include up to 5 percent uranium-235. The uranium fuel in Oklo’s reactors will be enriched to include between 5 and 20 percent uranium-235.
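    The enrichment figures above can be restated as fissile content per unit of fuel. A minimal sketch using only the percentages reported in the article (the per-kilogram framing is illustrative, not a statement about Oklo’s actual fuel loading):

    ```python
    # Compare U-235 content per kilogram of uranium at the enrichment
    # levels mentioned above. Percentages are from the article; the
    # per-kilogram framing is for illustration only.
    enrichment_levels = {
        "natural uranium": 0.007,           # ~0.7 percent U-235
        "conventional reactor fuel": 0.05,  # up to ~5 percent U-235
        "HALEU-range fuel": 0.20,           # up to ~20 percent U-235
    }

    for name, fraction in enrichment_levels.items():
        grams_u235 = fraction * 1000  # grams of U-235 per kg of uranium
        print(f"{name}: {grams_u235:.0f} g U-235 per kg")
    ```

    At the top of the HALEU range, each kilogram of fuel carries roughly 40 times the fissile content of natural uranium, which is what lets a compact core run for years between refuelings.
    
    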
    Because Oklo’s reactors will be able to operate for years without refueling, DeWitte says they’re particularly well-suited for remote areas that often rely on environmentally harmful diesel fuel.
    Oklo isn’t committing to an exact timeline for construction, but the co-founders have said they expect the reactor to be operational in the early 2020s. DeWitte says it will serve as a proof of concept. Oklo is already talking with potential customers about additional plants.
    DeWitte has said later versions of its plants could run for 40 years or more without needing to refuel.
    For now, though, DeWitte is hoping Oklo’s progress can inspire the industry to rethink the way it brings new technologies to market.
    “[Oklo’s progress] opens the door up to say nuclear innovation is alive and well,” DeWitte says. “And it’s not just the technology, it’s the full stack: It’s technology, regulations, manufacturing, business models, financing models, etc. So being able to get these milestones and do it in an unprecedented manner is really significant because it shows there are more pathways for nuclear to get to market.”

  • Power-free system harnesses evaporation to keep items cool

    Camels have evolved a seemingly counterintuitive approach to keeping cool while conserving water in a scorching desert environment: They have a thick coat of insulating fur. Applying essentially the same approach, researchers at MIT have now developed a system that could help keep things like pharmaceuticals or fresh produce cool in hot environments, without the need for a power supply.
    Most people wouldn’t think of wearing a camel-hair coat on a hot summer’s day, but in fact many desert-dwelling people do tend to wear heavy outer garments, for essentially the same reason. It turns out that a camel’s coat, or a person’s clothing, can help to reduce loss of moisture while at the same time allowing enough sweat evaporation to provide a cooling effect. Tests have shown that a shaved camel loses 50 percent more moisture than an unshaved one, under identical conditions, the researchers say.
    The new system developed by MIT engineers uses a two-layer material to achieve a similar effect. The material’s bottom layer, substituting for sweat glands, consists of hydrogel, a gelatin-like substance that consists mostly of water, contained in a sponge-like matrix from which the water can easily evaporate. This is then covered with an upper layer of aerogel, playing the part of fur by keeping out the external heat while allowing the vapor to pass through.
    Hydrogels are already used for some cooling applications, but field tests and detailed analysis have shown that this new two-layer material, less than a half-inch thick, can provide cooling of more than 7 degrees Celsius for five times longer than the hydrogel alone — more than eight days versus less than two.
    The findings are being reported today in a paper in the journal Joule, by MIT postdoc Zhengmao Lu, graduate students Elise Strobach and Ningxin Chen, Research Scientist Nicola Ferralis and Professor Jeffrey Grossman, head of the Department of Materials Science and Engineering.
    The system, the researchers say, could be used for food packaging to preserve freshness and open up greater distribution options for farmers to sell their perishable crops. It could also allow medicines such as vaccines to be kept safely as they are delivered to remote locations. In addition to providing cooling, the passive system, powered purely by heat, can reduce the variations in temperature that the goods experience, eliminating spikes that can accelerate spoilage.
    Ferralis explains that such packaging materials could provide constant protection of perishable foods or drugs all the way from the farm or factory, through the distribution chain, and all the way to the consumer’s home. In contrast, existing systems that rely on refrigerated trucks or storage facilities may leave gaps where temperature spikes can happen during loading and unloading. “What happens in just a couple of hours can be very detrimental to some perishable foods,” he says.
    The basic raw materials involved in the two-layer system are inexpensive — the aerogel is made of silica, which is essentially beach sand, cheap and abundant. But the processing equipment for making the aerogel is large and expensive, so that aspect will require further development in order to scale up the system for useful applications. But at least one startup company is already working on developing such large-scale processing to use the material to make thermally insulating windows.
    The basic principle of using the evaporation of water to provide a cooling effect has been used for centuries in one form or another, including the use of double-pot systems for food preservation. These use two clay pots, one inside the other, with a layer of wet sand in between. Water evaporates from the sand out through the outer pot, leaving the inner pot cooler. But the idea of combining such evaporative cooling with an insulating layer, as camels and some other desert animals do, has not really been applied to human-designed cooling systems before.
    For applications such as food packaging, the transparency of the hydrogel and aerogel materials is important, allowing the condition of the food to be clearly seen through the package. But for other applications such as pharmaceuticals or space cooling, an opaque insulating layer could be used instead, providing even more options for the design of materials for specific uses, says Lu, who was the paper’s lead author.
    The hydrogel material is composed of 97 percent water, which gradually evaporates away. In the experimental setup, it took 200 hours for a 5-millimeter layer of hydrogel, covered with 5 millimeters of aerogel, to lose all its moisture, compared to 40 hours for the bare hydrogel. The two-layered material’s cooling level was slightly less — a reduction of 7 degrees Celsius (about 12.6 degrees Fahrenheit) versus 8 C (14.4 F) — but the effect was much longer-lasting. Once the moisture is gone from the hydrogel, the material can then be recharged with water so the cycle can begin again.
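    The reported figures imply a simple rate comparison. A back-of-the-envelope sketch using only the numbers above:

    ```python
    # Back-of-the-envelope check of the evaporation figures reported above.
    bare_drying_hours = 40       # bare 5 mm hydrogel loses its moisture in ~40 h
    covered_drying_hours = 200   # with 5 mm of aerogel on top: ~200 h

    # The aerogel layer slows moisture loss by this factor:
    slowdown = covered_drying_hours / bare_drying_hours
    print(f"Evaporation slowed by a factor of {slowdown:.0f}x")

    # Cooling trade-off: slightly less cooling, much longer duration.
    cooling_bare_C = 8.0     # bare hydrogel: ~8 C of cooling
    cooling_covered_C = 7.0  # two-layer material: ~7 C of cooling
    print(f"Cooling penalty: {cooling_bare_C - cooling_covered_C:.0f} C "
          f"in exchange for {slowdown:.0f}x the operating lifetime")
    ```

    The factor-of-five lifetime gain for a one-degree cooling penalty is the core result of the two-layer design.
    
    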
    Especially in developing countries where access to electricity is often limited, Lu says, such materials could be of great benefit. “Because this passive cooling approach does not rely on electricity at all, this gives you a good pathway for storage and distribution of those perishable products in general,” he says.

  • Pushing the envelope with fusion magnets

    “At the age of between 12 and 15 I was drawing; I was making plans of fusion devices.”
    David Fischer remembers growing up in Vienna, Austria, imagining how best to cool the magnets used to confine the hot soup of ions known as plasma in a fusion device called a tokamak. With plasma hotter than the core of the sun being generated in a donut-shaped vacuum chamber just a meter away from these magnets, what temperature ranges might be possible with different coolants, he wondered.
    “I was drawing these plans and showing them to my father,” he recalls. “Then somehow I forgot about this fusion idea.”
    Now starting his second year at the MIT Plasma Science and Fusion Center (PSFC) as a postdoc and a new Eni-sponsored MIT Energy Fellow, Fischer has clearly reconnected with the “fusion idea.” And his research revolves around the concepts that so engaged him as a youth.
    Fischer’s early designs explored a popular approach to generating carbon-free, sustainable fusion energy known as “magnetic confinement.” Since plasma responds to magnetic fields, the tokamak is designed with magnets to keep the fusing atoms inside the vessel and away from the metal walls, where they would cause damage. The more effective the magnetic confinement, the more stable the plasma can become, and the longer it can be sustained within the device.
    Fischer is working on ARC, a fusion pilot plant concept that employs thin high-temperature superconductor (HTS) tapes in the fusion magnets. HTS allows much higher magnetic fields than would be possible from conventional superconductors, enabling a more compact tokamak design. HTS also allows the fusion magnets to operate at higher temperatures, greatly reducing the required cooling.
    Fischer is particularly interested in how to keep the HTS tapes from degrading. Fusion reactions create neutrons, which can damage many parts of a fusion device, with the strongest effect on components closest to the plasma. Although the superconducting tapes may be as much as a meter away from the first wall of the tokamak, neutrons can still reach them. Even in reduced numbers and after losing most of their energy, the neutrons damage the microstructure of the HTS tape and over time change the properties of the superconducting magnets.
    Much of Fischer’s focus is devoted to the effect of irradiation damage on the critical currents, the maximum electrical current that can pass through a superconductor without dissipating energy. If irradiation causes the critical currents to degrade too much, the fusion magnets can no longer produce the high magnetic fields necessary to confine and compress the plasma.
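    The tradeoff Fischer describes can be made concrete with a toy model. A minimal sketch, assuming a simple exponential dependence of critical current on neutron fluence (both the functional form and the damage constant here are hypothetical illustrations; measuring the real dependence is precisely what this research aims to do):

    ```python
    import math

    def critical_current_fraction(fluence, damage_constant=1e-22):
        """Toy model: fraction of the original critical current remaining
        after a given fast-neutron fluence (neutrons per square meter).

        The exponential form and the damage constant are illustrative
        assumptions, not measured HTS tape properties.
        """
        return math.exp(-damage_constant * fluence)

    # More accumulated fluence -> lower critical current -> weaker
    # achievable magnetic field. Shielding reduces the fluence the
    # tapes see, at the cost of space inside the device.
    for fluence in (0.0, 1e21, 1e22):
        fraction = critical_current_fraction(fluence)
        print(f"fluence {fluence:.0e} n/m^2: {fraction:.2f} of original Ic")
    ```

    In a design exercise, one would invert a measured version of this curve: pick an acceptable end-of-life critical current, then size the shielding so the lifetime fluence stays below the corresponding threshold.
    
    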
    Fischer notes that it is possible to reduce damage to the magnets almost completely by adding more shielding between the magnets and the fusion plasma. However, this would require more space, which comes at a premium in a compact fusion power plant.
    “You can’t just put infinite shielding in between. You have to learn first how much damage can this superconductor tolerate, and then determine how long do you want the fusion magnets to last. And then design around these parameters.”
    Fischer’s expertise with HTS tapes stems from studies at Technische Universität Wien (Vienna University of Technology), Austria. Working on his master’s degree in the low temperature physics group, he was told that a PhD position was available researching radiation damage on coated conductors, materials that could be used for fusion magnets.
    Recalling the drawings he shared with his father, he thought, “Oh, that’s interesting. I was attracted to fusion more than 10 years ago. Yeah, let’s do that.”
    The resulting research on the effects of neutron irradiation on high-temperature superconductors for fusion magnets, presented at a workshop in Japan, got the attention of PSFC nuclear science and engineering Professor Zach Hartwig and Commonwealth Fusion Systems Chief Science Officer Brandon Sorbom.
    “They lured me in,” he laughs.
    Like Fischer, Sorbom had explored in his own dissertation the effect of radiation damage on the critical current of HTS tapes. What neither researcher had the opportunity to examine was how the tapes behave when irradiated at 20 kelvins, the temperature at which the HTS fusion magnets will operate.
    Fischer now finds himself overseeing a proton irradiation laboratory for PSFC Director Dennis Whyte. He is building a device that will not only allow him to irradiate the superconductors at 20 K, but also immediately measure changes in the critical currents.
    He is glad to be back in the NW13 lab, fondly known as “The Vault,” working safely with graduate and Undergraduate Research Opportunities Program student assistants. During his Covid-19 lockdown, he was able to work from home on programming measurement software, but he missed the daily connection with his peers.
    “The atmosphere is very inspiring,” he says, noting some of the questions his work has recently stimulated. “What is the effect of the irradiation temperature? What are the mechanisms for the degradation of the critical currents? Could we design HTS tapes that are more radiation resistant? Is there a way to heal radiation damage?”
    Fischer may have the chance to explore some of his questions as he prepares to coordinate the planning and design of a new neutron irradiation facility at MIT.
    “It’s a great opportunity for me,” he says. “It’s great to be responsible for a project now, and see that people trust that you can make it work.”

  • 3 Questions: Fatih Birol on post-Covid trajectories in energy and climate

    As part of the MIT Energy Initiative’s (MITEI) distinguished colloquium series, Fatih Birol, the executive director of the International Energy Agency (IEA), recently shared his perspective on trajectories in global energy markets and climate trends post-Covid-19 and discussed emerging developments that make him optimistic about how quickly the world may shift to cleaner energy and achieve international decarbonization goals. Here, Birol talks to MITEI about key takeaways from his talk.
    Q: How has the Covid-19 pandemic impacted global energy markets?
    A: Covid-19 has already delivered the biggest shock to global energy markets since the Great Depression. Global energy demand is set to decline by 6 percent, which is many times greater than the fall during the 2009 financial crisis. Oil has been hardest hit, with demand set to fall by 8.4 million barrels per day, year-on-year, amid a resurgence of Covid-19 cases, local lockdown measures, and weak aviation activity. Natural gas and coal have also seen strong declines, and, while renewables have been more resilient, they, too, are under pressure.
    The crisis is still with us, so it’s too early to draw any definitive conclusions about the long-term implications for energy and climate trends. The extent to which governments prioritize clean energy in their economic recovery plans will make a huge difference. The IEA’s Sustainable Recovery Plan, which we released in June, shows how smart policies and targeted investments can boost economic growth, create jobs, and put global greenhouse gas emissions into decline.
    Q: What trends in technology, policy, and economics have the most potential to curb climate change and ensure universal energy access?
    A: Five recent emerging developments are making me increasingly optimistic about how quickly the world may shift to cleaner energy and achieve the kind of structural declines in greenhouse gas emissions that are needed to achieve international climate and sustainable energy goals.
    The first is the way solar is leading renewables to new heights — it has now become the least-expensive option in many economies, and new projects are springing up fast all over the world. Solar also has huge potential to help increase access to energy, especially in Africa, where hundreds of millions of people still lack basic access to electricity.
    The massive easing of monetary policy by central banks in response to the pandemic means that wind, solar, and electric vehicles should benefit from ultra-low interest rates for an extended period in some regions of the world. We need to find ways for all countries to access this cheaper capital.
    At the same time, more governments are throwing their weight behind clean energy technologies, which was made clear by the number of energy ministers (40!) from nations around the world who took part in the IEA Clean Energy Transitions Summit in July.
    More companies are stepping up their ambitions, from major oil firms committing to transform themselves into lower-carbon businesses to leading tech companies putting increasing resources into renewables and energy storage.
    Lastly, I see encouraging momentum in innovation, which will be essential for scaling up the clean energy technologies we need — like hydrogen and carbon capture — quickly enough to make a difference.
    Q: What are the greatest challenges to the clean energy transition, and how can we overcome them?
    A: Getting more countries and companies on board with the promising trends I just mentioned will be vital. Greater efforts need to be devoted to supporting fair, inclusive clean energy futures for all parts of the world.
    One figure highlights the scale of the challenge in the energy industry: the oil companies that have pledged to achieve net-zero carbon emissions produce less than 10 percent of the global oil output. There’s a lot of work to be done there.
    We also have to make sure clean energy transitions don’t leave anyone behind. As I mentioned, energy poverty is still a huge issue in Africa — we need innovative solutions to address this problem, especially since many African economies are now struggling financially, with some even facing full-blown debt crises, as a result of the global recession.
    Perhaps the biggest technological challenge we face is tackling emissions from existing infrastructure — the vast fleets of inefficient coal plants, steel mills, and cement factories. These are mostly young assets in emerging Asia and could continue operating for decades more. Without addressing their emissions, we will have no chance of meeting our climate and energy goals. Our recent report, “Energy Technology Perspectives 2020,” takes a deep dive into this challenge and maps out the clean energy technologies that can overcome it. Innovation will be vital, and governments will need to play a decisive role.

  • A controllable membrane to pull carbon dioxide out of exhaust streams

    A new system developed by chemical engineers at MIT could provide a way of continuously removing carbon dioxide from a stream of waste gases, or even from the air. The key component is an electrochemically assisted membrane whose permeability to gas can be switched on and off at will, using no moving parts and relatively little energy.
    The membranes themselves, made of anodized aluminum oxide, have a honeycomb-like structure made up of hexagonal openings that allow gas molecules to flow in and out when in the open state. However, gas passage can be blocked when a thin layer of metal is electrically deposited to cover the pores of the membrane. The work is described today in the journal Science Advances, in a paper by Professor T. Alan Hatton, postdoc Yayuan Liu, and four others.
    This new “gas gating” mechanism could be applied to the continuous removal of carbon dioxide from a range of industrial exhaust streams and from ambient air, the team says. They have built a proof-of-concept device to show this process in action.
    The device uses a redox-active carbon-absorbing material, sandwiched between two switchable gas gating membranes. The sorbent and the gating membranes are in close contact with each other and are immersed in an organic electrolyte to provide a medium for zinc ions to shuttle back and forth. These two gating membranes can be opened or closed electrically by switching the polarity of a voltage between them, causing ions of zinc to shuttle from one side to the other. The ions simultaneously block one side, by forming a metallic film over it, while opening the other, by dissolving its film away.
    When the sorbent layer is open to the side where the waste gases are flowing by, the material readily soaks up carbon dioxide until it reaches its capacity. The voltage can then be switched to block off the feed side and open up the other side, where a concentrated stream of nearly pure carbon dioxide is released.
    A system built with alternating sections of membrane operating in opposite phases would allow for continuous operation in a setting such as an industrial scrubber: at any one time, half of the sections would be absorbing the gas while the other half would be releasing it.
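    The alternating operation can be sketched as a simple toggle between two section states. This is a hypothetical illustration of the scheme, not code from the paper; the class and function names are invented:

    ```python
    # Minimal sketch of the alternating gating scheme: at any time, half
    # the sections absorb CO2 from the feed while the other half release
    # it, and reversing the applied voltage swaps the roles.

    class GatedSection:
        def __init__(self, absorbing: bool):
            # True: feed-side membrane open (sorbent soaking up CO2)
            # False: release-side membrane open (concentrated CO2 out)
            self.absorbing = absorbing

        def switch_polarity(self):
            # Reversing the voltage moves the zinc film to the other
            # membrane, closing the open side and opening the closed one.
            self.absorbing = not self.absorbing

    # Four sections, arranged so adjacent sections run in opposite phases.
    sections = [GatedSection(absorbing=(i % 2 == 0)) for i in range(4)]

    def counts(secs):
        absorbing = sum(s.absorbing for s in secs)
        return absorbing, len(secs) - absorbing

    print("before switch (absorbing, releasing):", counts(sections))
    for s in sections:
        s.switch_polarity()
    print("after switch (absorbing, releasing):", counts(sections))
    ```

    Because half the sections are always facing the feed, the feed stream never has to stop; the polarity switch is the only "moving part," which is the process advantage Hatton describes below.
    
    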
    “That means that you have a feed stream coming into the system at one end and the product stream leaving from the other in an ostensibly continuous operation,” Hatton says. “This approach avoids many process issues” that would be involved in a traditional multicolumn system, in which adsorption beds alternately need to be shut down, purged, and then regenerated, before being exposed again to the feed gas to begin the next adsorption cycle. In the new system, the purging steps are not required, and the steps all occur cleanly within the unit itself.
    The researchers’ key innovation was using electroplating as a way to open and close the pores in a material. Along the way the team had tried a variety of other approaches to reversibly close pores in a membrane material, such as using tiny magnetic spheres that could be positioned to block funnel-shaped openings, but these other methods didn’t prove to be efficient enough. Metal thin films can be particularly effective as gas barriers, and the ultrathin layer used in the new system requires a minimal amount of the zinc material, which is abundant and inexpensive.
    “It makes a very uniform coating layer with a minimum amount of materials,” Liu says. One significant advantage of the electroplating method is that once the condition is changed, whether in the open or closed position, it requires no energy input to maintain that state. Energy is only required to switch back again.
    Potentially, such a system could make an important contribution toward limiting emissions of greenhouse gases into the atmosphere, and even toward direct-air capture of carbon dioxide that has already been emitted.
    While the team’s initial focus was on the challenge of separating carbon dioxide from a stream of gases, the system could actually be adapted to a wide variety of chemical separation and purification processes, Hatton says.
    “We’re pretty excited about the gating mechanism. I think we can use it in a variety of applications, in different configurations,” he says. “Maybe in microfluidic devices, or maybe we could use it to control the gas composition for a chemical reaction. There are many different possibilities.”
    The research team included graduate student Chun-Man Chow, postdoc Katherine Phillips, and recent graduates Miao Wang PhD ’20 and Sahag Voskian PhD ’19. This work was supported by ExxonMobil through the MIT Energy Initiative.

  • MIT.nano receives LEED Platinum certification

    MIT.nano, the Institute’s central, shared-access research facility for nanoscience and nanotechnology, has received the U.S. Green Building Council’s LEED Platinum certification for sustainable practices in new construction.
    The Leadership in Energy and Environmental Design (LEED) designation is a performance-based rating system of a building’s environmental attributes associated with its design, construction, operations, and management.
    For a leading-edge research center like MIT.nano — which consumes significantly more energy per square foot than a typical office building or traditional laboratory — earning the council’s highest designation of platinum is a remarkable achievement. “MIT.nano’s LEED Platinum certification demonstrates that even the most technically sophisticated buildings can mitigate their environmental impact if sustainability is a priority in the design and construction process,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology. “A shared commitment to sustainable principles from the outset made this recognition possible.”
    Starting in 2016, MIT made a commitment that all new campus construction and major renovation projects must earn at least LEED Gold certification. MIT.nano joins the Morris and Sophie Chang Building (Building E52) as the second LEED Platinum-certified building on campus. There are 18 total LEED-certified spaces and buildings at MIT.
    Recognition is nothing new for the facility, as MIT.nano also received the International Institute for Sustainable Laboratories (I2SL) 2019 “Go Beyond” Award for excellence in sustainability in laboratory and other high-technology facility projects, as well as the R&D World 2019 Lab of the Year Award for excellence in research lab design, planning, and construction, and the AIA New England Honor Award for Design Excellence.
    Opportunity beyond the nanoscale
    Referred to as the “ship in the bottle” during construction, MIT.nano faced unique challenges due to its location. The building had to rise in the center of a dense urban campus, surrounded on all sides by existing buildings, with very limited access for construction activity and materials. Though constructing the facility was a challenge, the location provides considerable opportunities to connect nanotechnology research to other disciplines and spur new ideas through proximity.
    This same mix of challenge and opportunity fueled MIT’s pursuit of its LEED Platinum designation for MIT.nano. Facilities like MIT.nano are resource-intensive: Specialized environments like clean rooms require continuous air exchange, powerful air filtration, precise control and monitoring of temperature and humidity, and other high-energy infrastructure systems to support the diversity of pioneering tools and equipment used.
    But the heavy energy requirements of such systems provided a unique opportunity for gains in efficiency. “The energy consumption per square foot of a semiconductor clean room is about an order of magnitude higher than a typical office building. As a result, there is incredible opportunity for innovation during the design process and optimization post occupancy,” says MIT.nano Assistant Director of Infrastructure Nicholas Menounos.
    Menounos credits an effort by MIT — including the Department of Facilities and Campus Construction — and the design engineers that went well beyond the typical LEED process. “There was no precedent for a research and development facility of this size, so the team toured around the country, benchmarking against more than 12 peer institutions, to ensure we right-sized the process utilities and HVAC systems,” he says. “Oversizing leads to inefficiencies and undersizing reduces the useful life of the space. This was not a trivial task, and a major reason for the awards.”
    Going for platinum
    At all levels (certified, silver, gold, and platinum), the LEED certification process awards points that correspond to specific sustainability measures. MIT.nano earned points across all eight sections on the LEED scorecard: location and transportation, sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, innovation, and regional priority. The building notched 84 points total, with 80 points or more needed to earn platinum certification.
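    As a rough illustration, a score like MIT.nano’s maps to a tier through fixed point bands. The thresholds below are the published LEED v4 bands (certified 40–49, silver 50–59, gold 60–79, platinum 80+), not details drawn from this article:

```python
# Minimal sketch: mapping a LEED point total to a certification tier.
# Thresholds are the standard published LEED v4 bands.

def leed_level(points: int) -> str:
    """Return the LEED certification tier for a given point total."""
    if points >= 80:
        return "platinum"
    if points >= 60:
        return "gold"
    if points >= 50:
        return "silver"
    if points >= 40:
        return "certified"
    return "not certified"

print(leed_level(84))  # MIT.nano's reported score -> "platinum"
```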
    MIT.nano rated highly in several categories, including optimizing energy performance, water use reduction, indoor environmental quality, and innovation in design. The building’s overall efficiency is supported by extensive indoor environmental controls and monitoring systems. The clean room, for instance, senses user occupancy with motion and particle detectors and adjusts air recirculation rates accordingly.
    “MIT.nano is the most technically complex building on campus with thousands of monitoring points spread throughout the facility,” explains Dennis Grimard, managing director at MIT.nano, in a recent MIT News article. “These points help maintain MIT.nano’s sustainability goals by constantly monitoring the building’s health and operation.”
    Those controls account for energy efficiency and stability of research as well as the comfort of occupants. With LEED certification also focused on the health, safety, and well-being of people, additional points were earned through the building’s maximization of open space, use of low-emitting materials, and design efforts to increase natural light throughout the building. “One thing you notice as an occupant is this deep natural light, which isn’t common in labs. While this saves building wattage, it also improves comfort and makes the building a pleasure to be in,” says Menounos.
    MIT.nano’s LEED strategy was amplified by MIT’s Central Utilities Plant (CUP), which has a symbiotic relationship with the new facility. The CUP has the opportunity to reuse MIT.nano’s reverse osmosis water in its cooling systems, while MIT.nano relies on the CUP’s distributed energy resource for both thermal and electric energy.
    Although the LEED Platinum certification may mark the culmination of a scoring procedure for MIT.nano’s design and construction, Menounos says it does not mark the end of sustainability work and optimization within the building. “Sustainability isn’t a moment in time, it’s a process. Now that MIT.nano is operational, we will continually try to find ways we can change and optimize how the building operates,” he says.


    3 Questions: The price of privacy in ride-sharing app performance

    Ride-sharing applications such as Uber and Lyft collect information about a user’s location to improve service and efficiency, but as data breaches and misuse become more frequent, the exposure of user data is of increasing concern. M. Elena Renda, a visiting research scientist in MIT’s JTL Urban Mobility Lab; Francesca Martelli, a researcher at the National Research Council in Pisa, Italy; and Jinhua Zhao, the director of the JTL Urban Mobility Lab, discuss findings from their recent article in the Journal of Urban Technology about the impacts of different degrees of locational privacy protection on the quality of ride-sharing, or “mobility-sharing,” services. Zhao is also director of the MIT Mobility Initiative, co-director of the MIT Energy Initiative’s (MITEI) Mobility Systems Center, and an associate professor of urban studies and planning. This research was supported by the Mobility Systems Center, one of MITEI’s Low-Carbon Energy Centers.
    Q: What does your research tell us about the trade-offs in protecting a user’s locational privacy and the performance of ride-sharing applications?
    A: By providing mobility-sharing applications with both spatial and temporal data on their activities, users could reveal personal habits, preferences, and behaviors. Masking location data to prevent the identification of users in the event of data leakage, misuse, or security breaches increases user privacy. However, the loss of information can decrease data utility and lead to poorer quality of service, or lower efficiency, in a location-based system.
    Our research focuses on mobility-sharing applications that hold promise for improving the efficiency of transportation and reducing vehicle miles traveled (VMT). In our study, we ask: How would location privacy-preserving techniques affect the performance of such applications, and more importantly, the aspects that most impact passengers, such as waiting time, VMT, and so on? The study compares different methods for masking data and different levels of location data anonymization, and provides useful insights into the trade-off between user privacy and the performance of mobility-sharing applications.
    We specifically analyzed the case of carpooling between home and work, which is the largest contributor to traffic congestion and air pollution. The analyses allow a careful quantification of the effects of different privacy-preservation techniques on total saved mileage, showing that better savings can be obtained if users agree to trade convenience for privacy — more in terms of travel time than waiting time. For instance, by masking locations within a 200-meter radius, the total saved mileage decreases on average by 15 percent over the optimal solution with exact location information, while travel time for users increases by five minutes on average. Thus, by compromising on convenience, it is possible to preserve privacy while only minimally impacting total traveled mileage. This observation might be especially useful for city authorities and policy makers seeking a good compromise between their citizens’ individual right to privacy and the societal need to reduce VMT and energy consumption. For instance, introducing more flexibility in working hours could facilitate the above compromise in urban contexts.
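    One common way to mask a location within a fixed radius, as in the 200-meter example above, is to replace the exact coordinate with a point drawn uniformly at random from a disk around it. The sketch below illustrates this generic geo-obfuscation idea; it is not the specific algorithm evaluated in the study, and the coordinates are illustrative:

```python
# Minimal sketch of radius-based location masking: return a point sampled
# uniformly from a disk of the given radius around the true coordinate.
import math
import random

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, meters

def mask_location(lat: float, lon: float, radius_m: float = 200.0,
                  rng: random.Random = random) -> tuple[float, float]:
    """Return a point drawn uniformly from a disk of radius_m around (lat, lon)."""
    # Uniform sampling over a disk: radial distance proportional to sqrt(u).
    r = radius_m * math.sqrt(rng.random())
    theta = 2 * math.pi * rng.random()
    # Convert the small metric offset to degrees of latitude and longitude.
    dlat = (r * math.cos(theta)) / EARTH_RADIUS_M
    dlon = (r * math.sin(theta)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# Example: mask a coordinate near MIT's campus within a 200 m radius.
masked = mask_location(42.3601, -71.0942, radius_m=200.0)
```

Matching masked pickup points rather than exact ones is what introduces the detour and waiting-time costs quantified above.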
    Q: How does the cost of privacy affect a mobility-sharing system’s carbon footprint?
    A: In our study, we compared the number of shared miles obtained by optimally matching trips using exact location information with those obtained through increasingly anonymized data. We found that the higher the level of privacy granted to users, the fewer the shared miles: The reduction in shared miles ranges from 10 percent with minimal privacy preservation up to 60 percent with the strictest privacy-preservation policies. The values in between depend not only on the level of location data anonymization, but also on how much inconvenience users are asked to accept (for example, longer riding and waiting times). In a nutshell, the cost of privacy in terms of increased carbon footprint can be very high, and it should be carefully balanced against city-level and societal-level sustainability targets.
    Q: What next steps are you considering for your research, and how does your research support the decarbonization of the transportation sector?
    A: Currently, users grant whole-data ownership and rights to these application companies, since otherwise they would not be able to use their services. If this scenario changes (for example, in response to new regulations), companies might start offering users benefits and rewards (for example, lower cost, higher priority, or higher score) to nudge them to fully or partially opt out from a “privacy option.” This would allow the system to fully access their location data or reduce the level of privacy users were initially granted. If the user could set a desired level of privacy or decide not to require any privacy at all, this would lead to different levels of data privacy within the same privacy-preserving system. Performing tests on the sensitivity of the system efficiency and quality of service with respect to the percentage of riders requesting privacy controls and the geographical distribution of those riders could be an interesting research direction to investigate.
    Furthermore, the extent to which data privacy is perceived as a concern by shared mobility users is still largely unknown. Would users accept rewards and benefits from the companies to totally or partially relinquish their privacy rights?
    Recently, another major factor potentially disrupting the shared mobility market has appeared and spread worldwide: the Covid-19 pandemic. How could this impact shared mobility? What if people keep social distancing in the long term and drastically change their mobility patterns? What if citizens worldwide adopt the view that owning a car and driving alone (or at most, with family members) is the safest way for their health to move within and among cities, to the detriment of shared mobility modes, such as carpooling, ride-hailing, ride-sharing, or car-sharing? Failing to anticipate and address these worst-case scenarios could lead to rising traffic and congestion, which in turn will harm the environment and public health. Our plan is to investigate to what extent people are willing to use smart mobility systems post-Covid-19, and to what extent health concerns and location data privacy could be an issue.


    Superconductor technology for smaller, sooner fusion

    Scientists have long sought to harness fusion as an inexhaustible and carbon-free energy source. Within the past few years, groundbreaking high-temperature superconductor technology (HTS) sparked a new vision for achieving practical fusion energy. This approach, known as the high-field pathway to fusion, aims to generate fusion in compact devices on a shorter timescale and lower cost than alternative approaches.
    A key technical challenge to realizing this vision, though, has been getting HTS superconductors to work in an integrated way in the development of new, high-performance superconducting magnets, which will enable higher magnetic fields than previous generations of magnets, and are central to confining and controlling plasma reactions.
    Now a team led by MIT’s Plasma Science and Fusion Center (PSFC) and MIT spinout company Commonwealth Fusion Systems (CFS) has developed and extensively tested an HTS cable technology that can be scaled and engineered into the high-performance magnets. The team’s research was published on Oct. 7 in Superconductor Science and Technology. Researchers included MIT assistant professor and principal investigator Zachary Hartwig; PSFC Deputy Head of Engineering Rui F. Vieira and other key PSFC technical and engineering staff; CFS Chief Science Officer Brandon Sorbom PhD ’17 and other CFS engineers; and scientists at CERN in Geneva, Switzerland, and at the Robinson Research Institute at Victoria University of Wellington, New Zealand.
    This development follows a recent boost to the high-field pathway, when 47 researchers from 12 institutions published seven papers in the Journal of Plasma Physics, showing that a high-field fusion device, called SPARC, built with such magnets would produce net energy — more energy than it consumes — something never previously demonstrated.
    “The cable technology for SPARC is an important piece of the puzzle as we work to accelerate the timeline of achieving fusion energy,” says Hartwig, assistant professor of nuclear science and engineering, and leader of the research team at the PSFC. “If we’re successful in what we’re doing and in other technologies, fusion energy will start to make a difference in mitigating climate change — not in 100 years, but in 10 years.”
    A super cable
    The innovative technology described in the paper is a superconducting cable that conducts electricity with no resistance or heat generation and that will not degrade under extreme mechanical, electrical, and thermal conditions. Branded VIPER (an acronymic feat that stands for Vacuum Pressure Impregnated, Insulated, Partially transposed, Extruded, and Roll-formed), it consists of commercially produced thin steel tapes coated with HTS compound — yttrium-barium-copper-oxide — that are packaged into an assembly of copper and steel components to form the cable. Cryogenic coolant, such as supercritical helium, can flow easily through the cable to remove heat and keep the cable cold even under challenging conditions.
    “One of our advances was figuring out a way to solder the HTS tape inside the cable, effectively making it a monolithic structure where everything is thermally connected,” says Sorbom. Yet VIPER can also be fashioned into twists and turns, using joints to create “almost any type of geometry,” he adds. This makes the cable an ideal building material for winding into coils capable of generating and containing magnetic fields of enormous strength, such as those required to make fusion devices substantially smaller than presently envisioned net-energy fusion devices.
    Resilient and robust
    “The key thing we can do with VIPER cable is make a magnetic field two to three times stronger at the size required than the present generation of superconducting magnet technology,” Hartwig says. The magnitude of the magnetic field in tokamaks plays a strong nonlinear role in determining plasma performance. For example, fusion power density scales as magnetic field to the fourth power: Doubling the field increases fusion power by 16 times or, conversely, the same fusion output power can be achieved in a device 16 times smaller by volume.
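    The quoted scaling is easy to check numerically: if fusion power density goes as the fourth power of the magnetic field, scaling the field by a factor k scales the power density by k⁴. A minimal sketch:

```python
# Numeric check of the scaling quoted above: fusion power density ~ B^4.

def power_density_ratio(field_ratio: float) -> float:
    """Relative fusion power density when the magnetic field is scaled by field_ratio."""
    return field_ratio ** 4

print(power_density_ratio(2.0))  # doubling the field -> 16.0x power density
print(power_density_ratio(3.0))  # tripling the field -> 81.0x power density
```

The same factor works in reverse: holding fusion output fixed, a doubled field permits a device 16 times smaller by volume, which is the basis of the compact high-field approach.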
    “In the development of high field magnets for fusion, HTS cables are an essential ingredient, and they’ve been missing,” says Soren Prestemon, director of the U.S. Magnet Development Program at the Lawrence Berkeley National Laboratory, who was not involved with this research. “VIPER is a breakthrough in the area of cable architecture — arguably the first candidate to be proven viable for fusion — and will enable the critical step forward to demonstration in a fusion reactor.” 
    VIPER technology also presents a powerful approach to a particular problem in the superconducting magnet field, called a quench, “that has terrified engineers since they started building superconducting magnets,” says Hartwig. A quench is a drastic temperature increase that occurs when the cold cables can no longer conduct electrical current without any resistance. When quench occurs, instead of generating almost zero heat in the superconducting state, the electrical current generates substantial resistive heating in the cable.
    “The rapid temperature rise can cause the magnet to damage or destroy itself if the electrical current is not shut off,” says Hartwig. “We want to avoid this situation or, if not, at least know about it as quickly and certainly as possible.”
    The team incorporated two types of temperature-sensing fiber optic technology developed by collaborators at CERN and Robinson Research Institute. The fibers exhibited — for the first time on full-scale HTS cables and in representative conditions of high-magnetic field fusion magnets — sensitive and high-speed detection of temperature changes along the cable to monitor for the onset of quench.
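    Conceptually, quench detection amounts to watching the temperature along the cable and flagging any abnormally fast rise before resistive heating can damage the magnet. The sketch below is a deliberately simplified rate-threshold detector on a stream of readings; the threshold and sample values are illustrative assumptions, and the actual fiber-optic system is far more sophisticated:

```python
# Simplified sketch of quench-onset detection: flag the first sample whose
# warming rate exceeds a threshold. Threshold and data are illustrative only.

def detect_quench_onset(temps_k: list[float], dt_s: float,
                        max_rate_k_per_s: float = 5.0):
    """Return the index of the first reading whose rate of temperature rise
    exceeds the threshold, or None if no quench-like rise is seen."""
    for i in range(1, len(temps_k)):
        rate = (temps_k[i] - temps_k[i - 1]) / dt_s
        if rate > max_rate_k_per_s:
            return i
    return None

# Steady cryogenic operation, then a sudden local temperature spike:
readings = [20.0, 20.1, 20.1, 20.2, 27.5, 40.0]
print(detect_quench_onset(readings, dt_s=1.0))  # -> 4 (onset of the spike)
```

In a real magnet the detection must be both fast and reliable, since the response to a confirmed quench is to dump the stored current before resistive heating spreads.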
    Another key result was the successful incorporation of easily fabricated, low-electrical-resistance, and mechanically robust joints between VIPER cables. Superconducting joints are often complex, challenging to make, and more likely to fail than other parts of a magnet; VIPER was designed to eliminate these issues. The VIPER joints have the additional advantage of being demountable, meaning they can be taken apart and reused with no impact on performance.
    Prestemon notes that the cable’s innovative architecture directly impacts real-world challenges in operating fusion reactors of the future. “In an actual commercial fusion-energy-producing facility, intense heat and radiation deep inside the reactor will require routine component replacements,” he says. “Being able to take these joints apart and put them back together is a significant step towards making fusion a cost-effective proposition.”
    The 12 VIPER cables that Hartwig’s team built, running between one and 12 meters in length, were evaluated with bending tests, thousands of sudden “on-off” mechanical cycles, multiple cryogenic thermal cycles, and dozens of quench-like events to simulate the kind of punishing conditions encountered in the magnets of a fusion device. The group successfully completed four multi-week test campaigns in four months at the SULTAN facility, a leading center for superconducting cable evaluation operated by the Swiss Plasma Center, affiliated with Ecole Polytechnique Fédérale de Lausanne in Switzerland.
    “This unprecedented rate of HTS cable testing at SULTAN shows the speed that technology can be advanced by an outstanding team with the mindset to go fast, the willingness to take risks, and the resources to execute,” says Hartwig. It is a sentiment that serves as the foundation of the SPARC project.
    The SPARC team continues to improve VIPER cable and is moving on to the next project milestone in mid-2021: “We’ll be building a multi-ton model coil that will be similar to the size of a full-scale magnet for SPARC,” says Sorbom. These research activities will continue to advance the foundational magnet technologies for SPARC and enable the demonstration of net energy from fusion, a key achievement that signals fusion is a viable energy technology. “That will be a watershed moment for fusion energy,” says Hartwig.
    Funding for this research was provided by CFS.