More stories

  • Engineering for social impact

    A desire to make meaningful contributions to society has influenced Runako Gentles’ path in life. Gentles grew up in Jamaica with a supportive extended family that instilled in him his connection to his faith and his aspiration to aim for greatness.

    “While growing up, I was encouraged to live a life that could potentially bring about major positive changes in my family and many other people’s lives,” says the MIT junior.

    One of the pathways his parents encouraged was the pursuit of academic excellence.

    Gentles attended Campion College, a Jesuit high school in Jamaica for academically high-achieving students. He was valedictorian and won an award “for the member of the valedictory class who most closely resembles the ideal of intellectual competence, openness to growth, and commitment to social justice.”

    Although he did well in all subjects, he naturally gravitated toward biology and chemistry. “There are certain subjects people just make sense of material much faster, and high school biology and chemistry were those subjects for me,” he says. His love of learning often surprised friends and classmates when he could recall science concepts and definitions years later.  

    For several years, Gentles wanted to pursue a career in medicine. He remembers becoming more excited about a career as a surgeon after reading a book about retired neurosurgeon Ben Carson. During his advanced studies at Campion, he attended a career event and met a neurosurgeon who invited him and other classmates to watch a surgical procedure. Gentles had the rare learning experience of observing a spinal operation. Around that same time, another learning opportunity presented itself. His biology teacher recommended he apply to a Caribbean Science Foundation initiative called the Student Program for Innovation in Science and Engineering (SPISE) to explore careers in science, technology, engineering, and math. The intensive residential summer program for Caribbean students is modeled after the Minority Introduction to Engineering and Science (MITES) program at MIT. Cardinal Warde, a professor of electrical engineering at MIT who is also from the Caribbean, serves as the faculty director for both MITES and SPISE. The program was Gentles’ first major exposure to engineering.

    “I felt like I was in my first year of college at SPISE. It was an amazing experience and it helped me realize the opportunities that an engineering career path offers,” Gentles says. He excelled in the SPISE program, even winning one of the program’s highest honors for demonstrating overall excellence and leadership.

    SPISE had a profound impact on Gentles, and he decided to pursue engineering at MIT. While further exploring his engineering interests before his first year at MIT, he remembers reading an article that piqued his interest in industry sectors that meet basic human and societal needs.

    “I started thinking more about engineering and ethics,” says Gentles. He wanted to spend his time learning how to use science and engineering to make meaningful change in society.  “I think back to wanting to be a doctor for many years to help sick people, but I took it a step further. I wanted to get closer to addressing some of the root causes of deaths, illnesses, and the poor quality of life for billions of people,” he says of his decision to pursue a degree in civil and environmental engineering.

    Gentles spent his first semester at MIT as a remote student when the Covid-19 pandemic shut down in-person learning. He participated in 1.097 (Introduction to Civil and Environmental Engineering Research) during the January Independent Activities Period, in which undergraduates work one-on-one with graduate student or postdoc mentors on research projects that align with their interests. Gentles worked in the lab of Ruben Juanes, exploring the use of machine learning to analyze earthquake data and determine whether different geologic faults in Puerto Rico produced distinguishable earthquake clusters. He joined the lab of Desiree Plata in the summer of his sophomore year for another Undergraduate Research Opportunities Program (UROP) project, analyzing diesel-range organic compounds in water samples collected from shallow groundwater sources near hydraulic fracturing sites in West Virginia. The experience led Gentles to become a co-author on his graduate student mentor’s abstract proposal for the American Geophysical Union Fall Meeting 2022 conference.

    Gentles says he found in the Department of Civil and Environmental Engineering a place to develop a big-picture mindset: thinking about how technology will affect the environment, which ultimately affects society. “Choosing this department was not just about gaining the technical knowledge that most interested me. I wanted to be in a space where I would significantly develop my mindset of using innovation to bring more harmony between society and the environment,” says Gentles.

    Outside of the classroom, learning acoustic guitar is a passion for Gentles. He plays at social events for Cru, a Christian community at MIT, where he serves as a team leader. He credits Cru with helping him feel connected to a lot of different people, even outside of MIT.

    He’s also a member of the Bernard M. Gordon-MIT Engineering Leadership Program, which helps undergraduates gain and hone leadership skills to prepare them for careers in engineering. After exploring more UROPs and classes in civil and environmental engineering, he aspires to hold a leadership position where he can use his environmental knowledge to improve human lives.

    “Mitigating environmental issues can sometimes be a very complicated endeavor involving many stakeholders,” Gentles says. “We need more bright minds to be thinking of creative ways to address these pressing problems. We need more leaders helping to make society more harmonious with our planet.”

  • Flow batteries for grid-scale energy storage

    In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.

    A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.

    Flow batteries: Design and operation

    A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)

    At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.

    When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.

    Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”

    Benefits and challenges

    A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.
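
    To make the decoupling concrete, here is a minimal sizing sketch (not from the article; all values are assumed and purely illustrative) showing how tank volume sets energy capacity while electrode area sets power:

```python
# Minimal sketch with assumed, illustrative numbers: in a flow battery,
# energy capacity scales with the electrolyte tanks, while power scales
# with the reactor (electrode area x current density).

F = 96485          # Faraday constant, C per mol of electrons
cell_voltage = 1.3 # V, assumed average cell voltage
conc = 1500        # mol/m^3 of active species (1.5 M), assumed
n_electrons = 1    # electrons transferred per molecule, assumed

def energy_capacity_kwh(tank_volume_m3):
    """Stored energy set by the limiting electrolyte tank volume."""
    charge = n_electrons * F * conc * tank_volume_m3   # coulombs
    return charge * cell_voltage / 3.6e6               # J -> kWh

def power_rating_kw(electrode_area_m2, current_density=1000):
    """Power set by the reactor: electrode area (m^2) x current density (A/m^2)."""
    return electrode_area_m2 * current_density * cell_voltage / 1000  # kW

# Doubling the tanks doubles capacity without touching the reactor, and vice versa.
print(energy_capacity_kwh(10), energy_capacity_kwh(20))   # kWh
print(power_rating_kw(50), power_rating_kw(100))          # kW
```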

    However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.
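
    As a rough illustration of why crossover matters, the sketch below treats unremediated crossover as a first-order loss of accessible active species; the rate constant is an assumption, not a measured value:

```python
# Minimal sketch (illustrative assumption, not a measured degradation model):
# capacity fade from crossover treated as a first-order loss of active species.

import math

crossover_rate = 0.02   # fraction of active species lost to crossover per year, assumed

def remaining_capacity(years, rate=crossover_rate):
    """Fraction of original capacity left if crossover is never remediated."""
    return math.exp(-rate * years)

for yr in (1, 5, 10, 20):
    print(yr, "years:", round(remaining_capacity(yr), 3))
```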

    Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.

    The state of the art: Vanadium

    A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.

    First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.

    And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.

    However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.

    Beyond vanadium

    The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.

    Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”

    So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.

    Techno-economic modeling as a guide

    A good way to understand and assess the economic viability of new and emerging energy technologies is using techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.

    Using that approach, Rodby developed a framework for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.
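
    The sketch below is a simplified, hypothetical version of such a levelized-cost calculation, not Rodby’s actual framework: it discounts capital, operating, and periodic remediation costs over the battery’s lifetime and divides by the discounted energy delivered.

```python
# Minimal sketch (assumed numbers, not Rodby's framework): a levelized cost of
# storage (LCOS) calculation folding capital cost, operating cost, and periodic
# crossover remediation into one discounted figure.

capital_cost = 400_000         # $, assumed installed cost
annual_om = 8_000              # $/yr, assumed operations and maintenance
remediation_cost = 20_000      # $, assumed electrolyte rebalancing every 5 years
energy_per_year_kwh = 500_000  # kWh discharged per year, assumed
lifetime_years = 20
discount_rate = 0.07

def lcos():
    """Total discounted cost divided by total discounted energy delivered."""
    cost = capital_cost
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        df = 1 / (1 + discount_rate) ** year
        yearly_cost = annual_om + (remediation_cost if year % 5 == 0 else 0)
        cost += yearly_cost * df
        energy += energy_per_year_kwh * df
    return cost / energy   # $/kWh delivered

print(round(lcos(), 3), "$/kWh delivered")
```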

    Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that using the same “parent” material in both tanks is rarely feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.

    Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.

    Finite-lifetime materials

    While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.

    Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”

    Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.

    The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until way down the “innovation pipeline” that researchers start to address practical questions concerning the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms and how they might be cost-effectively reversed or remediated should be an upfront design criterion.

    Infinite-lifetime species

    The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.

    Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.

    In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.

    The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.

    The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”

    Time is of the essence

    The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to hone in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”

    The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.

    The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important role of grid-scale storage in the future.

    This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • An interdisciplinary approach to fighting climate change through clean energy solutions

    In early 2021, the U.S. government set an ambitious goal: to decarbonize its power grid, the system that generates and transmits electricity throughout the country, by 2035. It’s an important goal in the fight against climate change, and it will require a switch from current greenhouse-gas-producing energy sources (such as coal and natural gas) to predominantly renewable ones (such as wind and solar).

    Getting the power grid to zero carbon will be a challenging undertaking, as Audun Botterud, a principal research scientist at the MIT Laboratory for Information and Decision Systems (LIDS) who has long been interested in the problem, knows well. It will require building lots of renewable energy generators and new infrastructure; designing better technology to capture, store, and carry electricity; creating the right regulatory and economic incentives; and more. Decarbonizing the grid also presents many computational challenges, which is where Botterud’s focus lies. Botterud has modeled different aspects of the grid — the mechanics of energy supply, demand, and storage, and electricity markets — where economic factors can have a huge effect on how quickly renewable solutions get adopted.

    On again, off again

    A major challenge of decarbonization is that the grid must be designed and operated to reliably meet demand. Using renewable energy sources complicates this, as wind and solar power depend on an infamously volatile system: the weather. A sunny day becomes gray and blustery, and wind turbines get a boost but solar farms go idle. This will make the grid’s energy supply variable and hard to predict. Additional resources, including batteries and backup power generators, will need to be incorporated to regulate supply. Extreme weather events, which are becoming more common with climate change, can further strain both supply and demand. Managing a renewables-driven grid will require algorithms that can minimize uncertainty in the face of constant, sometimes random fluctuations to make better predictions of supply and demand, guide how resources are added to the grid, and inform how those resources are committed and dispatched across the entire United States.

    “The problem of managing supply and demand in the grid has to happen every second throughout the year, and given how much we rely on electricity in society, we need to get this right,” Botterud says. “You cannot let the reliability drop as you increase the amount of renewables, especially because I think that will lead to resistance towards adopting renewables.”

    That is why Botterud feels fortunate to be working on the decarbonization problem at LIDS — even though a career here is not something he had originally planned. Botterud’s first experience with MIT came during his time as a graduate student in his home country of Norway, when he spent a year as a visiting student with what is now called the MIT Energy Initiative. He might never have returned, except that while at MIT, Botterud met his future wife, Bilge Yildiz. The pair both ended up working at the Argonne National Laboratory outside of Chicago, with Botterud focusing on challenges related to power systems and electricity markets. Then Yildiz got a faculty position at MIT, where she is a professor of nuclear and materials science and engineering. Botterud moved back to the Cambridge area with her and continued to work for Argonne remotely, but he also kept an eye on local opportunities. Eventually, a position at LIDS became available, and Botterud took it, while maintaining his connections to Argonne.

    “At first glance, it may not be an obvious fit,” Botterud says. “My work is very focused on a specific application, power system challenges, and LIDS tends to be more focused on fundamental methods to use across many different application areas. However, being at LIDS, my lab [the Energy Analytics Group] has access to the most recent advances in these fundamental methods, and we can apply them to power and energy problems. Other people at LIDS are working on energy too, so there is growing momentum to address these important problems.”

    Weather, space, and time

    Much of Botterud’s research involves optimization, using mathematical programming to compare alternatives and find the best solution. Common computational challenges include dealing with large geographical areas that contain regions with different weather, different types and quantities of renewable energy available, and different infrastructure and consumer needs — such as the entire United States. Another challenge is the need for granular time resolution, sometimes even down to the sub-second level, to account for changes in energy supply and demand.

    Often, Botterud’s group will use decomposition to solve such large problems piecemeal and then stitch together solutions. However, it’s also important to consider systems as a whole. For example, in a recent paper, Botterud’s lab looked at the effect of building new transmission lines as part of national decarbonization. They modeled solutions assuming coordination at the state, regional, or national level, and found that the more regions coordinate to build transmission infrastructure and distribute electricity, the less they will need to spend to reach zero carbon.

    In other projects, Botterud uses game theory approaches to study strategic interactions in electricity markets. For example, he has designed agent-based models to analyze electricity markets. These assume each actor will make strategic decisions in their own best interest and then simulate interactions between them. Interested parties can use the models to see what would happen under different conditions and market rules, which may lead companies to make different investment decisions, or governing bodies to issue different regulations and incentives. These choices can shape how quickly the grid gets decarbonized.
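
    The toy model below is only an illustration of the agent-based idea, not Botterud’s models: each hypothetical generator bids its marginal cost plus a strategic markup, and the market clears at the cheapest set of bids that covers demand.

```python
# Minimal sketch (hypothetical fleet and bids, not Botterud's models): a tiny
# agent-based electricity market cleared by stacking bids from cheapest to
# most expensive until demand is met.

# (name, capacity in MW, marginal cost in $/MWh, strategic markup) -- all assumed
generators = [
    ("wind",   300, 0,  0.0),
    ("solar",  200, 0,  0.0),
    ("gas_1",  400, 40, 0.10),
    ("gas_2",  400, 55, 0.20),
    ("peaker", 150, 90, 0.30),
]

def clear_market(demand_mw):
    """Return the clearing price and each generator's dispatched output."""
    bids = sorted((cost * (1 + markup), name, cap)
                  for name, cap, cost, markup in generators)
    dispatch, remaining, price = {}, demand_mw, 0.0
    for bid_price, name, cap in bids:
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatch[name] = take
        remaining -= take
        price = bid_price   # the marginal (last accepted) bid sets the price
    return price, dispatch

price, dispatch = clear_market(demand_mw=1000)
print(f"clearing price: ${price:.2f}/MWh", dispatch)
```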

    Botterud is also collaborating with researchers in MIT’s chemical engineering department who are working on improving battery storage technologies. Batteries will help manage variable renewable energy supply by capturing surplus energy during periods of high generation to release during periods of insufficient generation. Botterud’s group models the sort of charge cycles that batteries are likely to experience in the power grid, so that chemical engineers in the lab can test their batteries’ abilities in more realistic scenarios. In turn, this also leads to a more realistic representation of batteries in power system optimization models.

    These are only some of the problems that Botterud works on. He enjoys the challenge of tackling a spectrum of different projects, collaborating with everyone from engineers to architects to economists. He also believes that such collaboration leads to better solutions. The problems created by climate change are myriad and complex, and solving them will require researchers to cooperate and explore.

    “In order to have a real impact on interdisciplinary problems like energy and climate,” Botterud says, “you need to get outside of your research sweet spot and broaden your approach.”

  • 3 Questions: Leveraging carbon uptake to lower concrete’s carbon footprint

    To secure a more sustainable and resilient future, we must take a careful look at the life cycle impacts of humanity’s most-produced building material: concrete. Carbon uptake, the process by which cement-based products sequester carbon dioxide, is key to this understanding.

    Hessam AzariJafari, the MIT Concrete Sustainability Hub’s deputy director, is deeply invested in the study of this process and its acceleration, where prudent. Here, he describes how carbon uptake is a key lever to reach a carbon-neutral concrete industry.

    Q: What is carbon uptake in cement-based products and how can it influence their properties?

    A: Carbon uptake, or carbonation, is a natural process of permanently sequestering CO2 from the atmosphere by hardened cement-based products like concretes and mortars. Through this reaction, these products form different kinds of limes or calcium carbonates. This uptake occurs slowly but significantly during two phases of the life cycle of cement-based products: the use phase and the end-of-life phase.

    In general, carbon uptake increases the compressive strength of cement-based products as it can densify the paste. At the same time, carbon uptake can impact the corrosion resistance of concrete. In concrete that is reinforced with steel, the corrosion process can be initiated if the carbonation happens extensively (e.g., the whole of the concrete cover is carbonated) and intensively (e.g., a significant proportion of the hardened cement product is carbonated). [Concrete cover is the layer distance between the surface of reinforcement and the outer surface of the concrete.]

    Q: What are the factors that influence carbon uptake?

    A: The intensity of carbon uptake depends on four major factors: the climate, the types and properties of cement-based products used, the composition of binders (cement type) used, and the geometry and exposure condition of the structure.

    In regard to climate, the humidity and temperature affect the carbon uptake rate. In very low or very high humidity conditions, the carbon uptake process is slowed. High temperatures speed the process. The local atmosphere’s carbon dioxide concentration can affect the carbon uptake rate. For example, in urban areas, carbon uptake is an order of magnitude faster than in suburban areas.

    The types and properties of cement-based products have a large influence on the rate of carbon uptake. For example, mortar (consisting of water, cement, and fine aggregates) carbonates two to four times faster than concrete (consisting of water, cement, and coarse and fine aggregates) because of its more porous structure. The carbon uptake rate of dry-cast concrete masonry units is higher than that of wet-cast units for the same reason. In structural concrete, the process slows as mechanical properties improve and the density of the hardened product’s structure increases.

    Lastly, a structure’s surface area-to-volume ratio and exposure to air and water can have ramifications for its rate of carbonation. When cement-based products are covered, carbonation may be slowed or stopped. Concrete that is exposed to fresh air while being sheltered from rain can have a larger carbon uptake compared to cement-based products that are painted or carpeted. Additionally, cement-based elements with large surface areas, like thin concrete structures or mortar layers, allow uptake to progress more extensively.
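
    One common approximation for the time dimension of this process (an assumption used here for illustration, not the Hub’s calculator) models carbonation depth as growing with the square root of time, with a coefficient that bundles the climate, mix, and exposure factors described above:

```python
# Minimal sketch (a common square-root-of-time approximation, not the MIT Concrete
# Sustainability Hub's calculator): carbonation depth d = k * sqrt(t), where k
# depends on climate, exposure, and the concrete mix. Coefficients are assumed.

import math

def carbonation_depth_mm(years, k_mm_per_sqrt_year):
    """Depth of the carbonated layer after a given number of years."""
    return k_mm_per_sqrt_year * math.sqrt(years)

# Assumed coefficients: sheltered outdoor concrete carbonates faster than
# rain-exposed concrete; both values are illustrative only.
for label, k in (("sheltered", 4.0), ("rain-exposed", 2.0)):
    print(label, [round(carbonation_depth_mm(t, k), 1) for t in (1, 10, 50)], "mm")
```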

    Q: What is the role of carbon uptake in the carbon neutrality of concrete, and how should architects and engineers account for it when designing for specific applications?

    A: Carbon uptake is a part of the life cycle of any cement-based products that should be accounted for in carbon footprint calculations. Our evaluation shows the U.S. pavement network can sequester 5.8 million metric tons of CO2, of which 52 percent will be sequestered when the demolished concrete is stockpiled at its end of life.

    From one concrete structure to another, the percentage of emissions sequestered may vary. For instance, concrete bridges tend to have a lower percentage versus buildings constructed with concrete masonry. In any case, carbon uptake can influence the life cycle environmental performance of concrete.

    At the MIT Concrete Sustainability Hub, we have developed a calculator to enable construction stakeholders to estimate the carbon uptake of concrete structures during their use and end-of-life phases.

    Looking toward the future, carbon uptake’s role in the carbon neutralization of cement-based products could grow in importance. While caution should be taken in regards to uptake when reinforcing steel is embedded in concrete, there are opportunities for different stakeholders to augment carbon uptake in different cement-based products.

    Architects can influence the shape of concrete elements to increase the surface area-to-volume ratio (e.g., making “waffle” patterns on slabs and walls, or having several thin towers instead of fewer large ones on an apartment complex). Concrete manufacturers can adjust the binder type and quantity while delivering concrete that meets performance requirements. Finally, industrial ecologists and life-cycle assessment practitioners need to work on the tools and add-ons to make sure the impact of carbon is well captured when assessing the potential impacts of cement-based products in buildings and infrastructure systems.

    Currently, the cement and concrete industry is working with tech companies as well as local, state, and federal governments to lower and subsidize the cost of carbon capture, sequestration, and neutralization. Accelerating carbon uptake where reasonable could be an additional lever to neutralize the carbon emissions of the concrete value chain.

    Carbon uptake is one more piece of the puzzle that makes concrete a sustainable choice for building in many applications. The sustainability and resilience of the future built environment lean on the use of concrete. There is still much work to be done to truly build sustainably, and understanding carbon uptake is an important place to begin.

  • Fieldwork class examines signs of climate change in Hawaii

    When Joy Domingo-Kameenui spent two weeks in her native Hawaii as part of MIT class 1.091 (Traveling Research Environmental eXperiences), she was surprised to learn about the number of invasive and endangered species. “I knew about Hawaiian ecology from middle and high school but wasn’t fully aware to the extent of how invasive species and diseases have resulted in many of Hawaii’s endemic species becoming threatened,” says Domingo-Kameenui.  

    Domingo-Kameenui was part of a group of MIT students who conducted field research on the Big Island of Hawaii in the Traveling Research Environmental eXperiences (TREX) class offered by the Department of Civil and Environmental Engineering. The class provides undergraduates an opportunity to gain hands-on environmental fieldwork experience using Hawaii’s geology, chemistry, and biology to address two main topics of climate change concern: sulfur dioxide (SO2) emissions and forest health.

    “Hawaii is this great system for studying the effects of climate change,” says David Des Marais, the Cecil and Ida Green Career Development Professor of Civil and Environmental Engineering and lead instructor of TREX. “Historically, Hawaii has had occasional mild droughts that are related to El Niño, but the droughts are getting stronger and more frequent. And we know these types of extreme weather events are going to happen worldwide.”

    Climate change impacts on forests

    The frequency and intensity of extreme events are also becoming more of a problem for forests and plant life. Forests have a characteristic distribution of vegetation: as you get higher in elevation, the trees gradually give way to shrubs, and then to rock. Trees don’t grow above the timberline, where temperature and precipitation change dramatically at high elevations. “But unlike the Sierra Nevada or the Rockies, where the trees gradually change as you go up the mountains, in Hawaii, they gradually change, and then they just stop,” says Des Marais.

    “Why this is an interesting model for climate change,” explains Des Marais, “is that line where trees stop [growing] is going to move, and it’s going to become more unstable as the trade winds are affected by global patterns of air circulation, which are changing because of climate change.”

    The research question that Des Marais asks students to explore — How is the Hawaiian forest going to be affected by climate change? — uses Hawaii as a model for broader patterns in climate change for forests.

    To dive deeper into this question, students trekked up the mountain taking ground-level measurements of canopy cover with a camera app on their cellphones, estimating how much tree coverage blankets the sky overhead and observing how the canopy thins until no tree coverage remains as they go farther up the mountain. Drones also flew above the forest to measure chlorophyll and how much plant matter remains. Satellite data products from NASA and the European Space Agency were then used to map the distribution of chlorophyll, climate, and precipitation from space.

    They also worked directly with community stakeholders at three locations around the island to access the forests and use technology to assess the ecology and biodiversity challenges. One of those stakeholders was the Kamehameha Schools Natural and Cultural Ecosystems Division, whose mission is to preserve the land and manage it in a sustainable way. Students worked with their plant biologists to help address and think about what management decisions will support the future health of their forests.

    “Across the island, rising temperatures and abnormal precipitation patterns are the main drivers of drought, which really has significant impacts on biodiversity, and overall human health,” says Ava Gillikin, a senior in civil and environmental engineering.

    Gillikin adds that “a good proportion of the island’s water system relies on rainwater catchment, exposing vulnerabilities to fluctuations in rain patterns that impact many people’s lives.”

    Deadly threats to native plants

    The other threats to Hawaii’s forests are invasive species causing ecological harm, from non-indigenous mosquitoes, which are driving increases in avian malaria and native bird deaths that threaten the native ecosystem, to a plant called strawberry guava.

    Strawberry guava is taking over Hawaii’s native ʻōhiʻa trees, which Domingo-Kameenui says is also affecting Hawaii’s water supply. “The plants absorb water quickly so there’s less water runoff for groundwater systems.”

    A fungal pathogen is also infecting native ʻōhiʻa trees. The disease, called rapid ʻōhiʻa death (ROD), kills a tree within a few days to weeks. The pathogen, from the fungal genus Ceratocystis, was identified by researchers on the island in 2014. It was likely carried into the forests by humans on their shoes or on contaminated tools, gear, and vehicles traveling from one location to another. The fungal disease is also transmitted by beetles that bore into trees and create a fine, powder-like dust. This dust from an infected tree mixes with the fungal spores and can easily spread to other trees by wind or contaminated soil.

    For Gillikin, seeing the effects of ROD in the field highlighted the impact improper care and preparation can have on native forests. “The ʻōhiʻa tree is one of the most prominent native trees, and ROD can kill the trees very rapidly by putting a strain on its vascular system and preventing water from reaching all parts of the tree,” says Gillikin.

    Before entering the forests, students sprayed their shoes and gear with ethanol frequently to prevent the spread.

    Uncovering chemical and particle formation

    A second research project in TREX studied volcanic smog (vog) that plagues the air, making visibility problematic at times and causing a lot of health problems for people in Hawaii. The active Kilauea volcano releases SO2 into the atmosphere. When the SO2 mixes with other gasses emitted from the volcano and interacts with sunlight and the atmosphere, particulate matter forms.

    Students in the Kroll Group, led by Jesse Kroll, professor of civil and environmental engineering and chemical engineering, have been studying SO2 and particulate matter over the years, but not the chemistry directly in how those chemical transformations occur.

    “There’s a hypothesis that there is a functional connection between the SO2 and particulate matter, but that’s never been directly demonstrated,” says Des Marais.

    Testing that hypothesis, the students were able to measure two different sizes of particulate matter formed from the SO2 and develop a model to show how much vog is generated downstream of the volcano.

    They spent five days at two sites from sunrise to late morning measuring particulate matter formation as the sun comes up and starts creating new particles. Using a combination of data sources for meteorology, such as UV index, wind speed, and humidity, the students built a model that demonstrates all the pieces of an equation that can calculate when new particles are formed.
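
    A minimal sketch of that kind of data-driven model, using entirely made-up numbers rather than the students’ measurements, fits a simple linear relationship between particle formation and the meteorological drivers:

```python
# Minimal sketch (illustrative, not the students' actual model): fit a simple
# linear model relating new-particle formation to meteorological drivers such
# as UV index, wind speed, and humidity. All numbers are hypothetical.

import numpy as np

# Hypothetical hourly observations: [UV index, wind speed (m/s), relative humidity (%)]
X = np.array([
    [0.5, 2.0, 80],
    [2.0, 3.0, 70],
    [4.0, 2.5, 60],
    [6.0, 4.0, 55],
    [7.0, 5.0, 50],
])
# Hypothetical measured particle formation rates (particles/cm^3/s)
y = np.array([0.2, 1.1, 2.5, 3.8, 4.6])

# Add an intercept column and solve ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_formation_rate(uv, wind, rh):
    """Predicted formation rate for given meteorological conditions."""
    return coeffs @ np.array([1.0, uv, wind, rh])

print(coeffs)
print(predicted_formation_rate(uv=5.0, wind=3.0, rh=60))
```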

    “You can build what you think that equation is based on a first-principles understanding of the chemical composition, but what they did was measure it in real time with measurements of the chemical reagents,” says Des Marais.

    The students measured what was going to catalyze the chemical reaction forming particulate matter — for instance, things like sunlight and ozone — and then calculated the resulting outputs.

    “What they found, and what seems to be happening, is that the chemical reagents are accumulating overnight,” says Des Marais. “Then as soon as the sun rises in the morning all the transformation happens in the atmosphere. A lot of the reagents are used up and the wind blows everything away, leaving the other side of the island with polluted air,” adds Des Marais.

    “I found the vog particle formation fieldwork a surprising research learning,” adds Domingo-Kameenui, who did some atmospheric chemistry research in the Kroll Group. “I just thought particle formation happened in the air, but we found wind direction and wind speed at a certain time of the day was extremely important to particle formation. It’s not just chemistry you need to look at, but meteorology and sunlight,” she adds.

    Both Domingo-Kameenui and Gillikin found the fieldwork class an important and memorable experience with new insight that they will carry with them beyond MIT.  

    How Gillikin approaches fieldwork or any type of community engagement in another culture is what she will remember most. “When entering another country or culture, you are getting the privilege to be on their land, to learn about their history and experiences, and to connect with so many brilliant people,” says Gillikin. “Everyone we met in Hawaii had so much passion for their work, and approaching those environments with respect and openness to learn is what I experienced firsthand and will take with me throughout my career.”

  • Michael Howland gives wind energy a lift

    Michael Howland was in his office at MIT, watching real-time data from a wind farm 7,000 miles away in northwest India, when he noticed something odd: Some of the turbines weren’t producing the expected amount of electricity.

    Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering, studies the physics of the Earth’s atmosphere and how that information can optimize renewable energy systems. To accomplish this, he and his team develop and use predictive models, supercomputer simulations, and real-life data from wind farms, such as the one in India.

    The global wind power market is one of the most cost-competitive and resilient power sources across the world, the Global Wind Energy Council reported last year. The year 2020 saw record growth in wind power capacity, thanks to a surge of installations in China and the United States. Yet wind power needs to grow three times faster in the coming decade to address the worst impacts of climate change and achieve federal and state climate goals, the report says.

    “Optimal wind farm design and the resulting cost of energy are dependent on the wind,” Howland says. “But wind farms are often sited and designed based on short-term historical climate records.”

    In October 2021, Howland received a Seed Fund grant from the MIT Energy Initiative (MITEI) to account for how climate change might affect the wind of the future. “Our initial results suggest that considering the uncertainty in the winds in the design and operation of wind farms can lead to more reliable energy production,” he says.

    Most recently, Howland and his team came up with a model that predicts the power produced by each individual turbine based on the physics of the wind farm as a whole. The model can inform decisions that may boost a farm’s overall output.

    The state of the planet

    The son of neuroscientists, Howland grew up in a suburb of Philadelphia, and his childhood wasn’t especially outdoorsy. Later, he’d become an avid hiker with a deep appreciation for nature, but a ninth-grade class assignment made him think about the state of the planet, perhaps for the first time.

    A history teacher had asked the class to write a report on climate change. “I remember arguing with my high school classmates about whether humans were the leading cause of climate change, but the teacher didn’t want to get into that debate,” Howland recalls. “He said climate change was happening, whether or not you accept that it’s anthropogenic, and he wanted us to think about the impacts of global warming, and solutions. I was one of his vigorous defenders.”

    As part of a research internship after his first year of college, Howland visited a wind farm in Iowa, where wind produces more than half of the state’s electricity. “The turbines look tall from the highway, but when you’re underneath them, you’re really struck by their scale,” he says. “That’s where you get a sense of how colossal they really are.” (Not a fan of heights, Howland opted not to climb the turbine’s internal ladder to snap a photo from the top.)

    After receiving an undergraduate degree from Johns Hopkins University and master’s and PhD degrees in mechanical engineering from Stanford University, he joined MIT’s Department of Civil and Environmental Engineering to focus on the intersection of fluid mechanics, weather, climate, and energy modeling. His goal is to enhance renewable energy systems.

    An added bonus to being at MIT is the opportunity to inspire the next generation, much like his ninth-grade history teacher did for him. Howland’s graduate-level introduction to the atmospheric boundary layer is geared primarily to engineers and physicists, but as he sees it, climate change is such a multidisciplinary and complex challenge that “every skill set that exists in human society can be relevant to mitigating it.”

    “There are the physics and engineering questions that our lab primarily works on, but there are also questions related to social sciences, public acceptance, policymaking, and implementation,” he says. “Careers in renewable energy are rapidly growing. There are far more job openings than we can hire for right now. In many areas, we don’t yet have enough people to address the challenges in renewable energy and climate change mitigation that need to be solved.

    “I encourage my students — really, everyone I interact with — to find a way to impact the climate change problem,” he says.

    Unusual conditions

    In fall 2021, Howland was trying to explain the odd data coming in from India.

    Based on sensor feedback, wind turbines’ software-driven control systems constantly tweak the speed and the angle of the blades, and what’s known as yaw — the orientation of the giant blades in relation to the wind direction.

    Existing utility-scale turbines are controlled “greedily,” which means that every turbine in the farm automatically turns into the wind to maximize its own power production.

    Though the turbines in the front row of the Indian wind farm were reacting appropriately to the wind direction, their power output was all over the place. “Not what we would expect based on the existing models,” Howland says.

    These massive turbine towers stood at 100 meters, about the length of a football field, with blades the length of an Olympic swimming pool. At their highest point, the blade tips lunged almost 200 meters into the sky.

    Then there’s the speed of the blades themselves: The tips move many times faster than the wind, around 80 to 100 meters per second — up to a quarter or a third of the speed of sound.

    Using a state-of-the-art sensor that measures the speed of incoming wind before it interacts with the massive rotors, Howland’s team saw an unexpectedly complex airflow effect. He covers the phenomenon in his class. The data coming in from India, he says, displayed “quite remarkable wind conditions stemming from the effects of Earth’s rotation and the physics of buoyancy that you don’t always see.”

    Traditionally, wind turbines operate in the lowest 10 percent of the atmospheric boundary layer — the so-called surface layer — which is affected primarily by ground conditions. The Indian turbines, Howland realized, were operating in regions of the atmosphere that turbines haven’t historically accessed.

    Trending taller

    Howland knew that airflow interactions can persist for kilometers. The interaction of high winds with the front-row turbines was generating wakes in the air similar to the way boats generate wakes in the water.

    To address this, Howland’s model trades off the efficiency of upwind turbines to benefit downwind ones. By misaligning some of the upwind turbines in certain conditions, the downwind units experience less wake turbulence, increasing the overall energy output of the wind farm by as much as 1 percent to 3 percent, without requiring additional costs. If a 1.2 percent energy increase was applied to the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines — enough to power about 3 million homes.
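
    A back-of-envelope version of that equivalence, using assumed global fleet numbers rather than figures from the study, looks like this:

```python
# Minimal sketch (back-of-envelope, with assumed global numbers): what a 1.2 percent
# energy gain from coordinated wake steering is worth in "equivalent turbines."

global_wind_capacity_gw = 830   # assumed installed global wind capacity
capacity_factor = 0.35          # assumed fleet-average capacity factor
avg_turbine_mw = 2.8            # assumed average turbine rating
home_use_mwh_per_year = 10      # assumed annual electricity use of one home, MWh

gain = 0.012
extra_energy_twh = global_wind_capacity_gw * 1000 * capacity_factor * 8760 * gain / 1e6
equivalent_turbines = extra_energy_twh * 1e6 / (avg_turbine_mw * capacity_factor * 8760)
homes_powered = extra_energy_twh * 1e6 / home_use_mwh_per_year

print(round(extra_energy_twh, 1), "TWh/yr of extra generation")
print(round(equivalent_turbines), "equivalent new turbines")
print(round(homes_powered / 1e6, 1), "million homes powered")
```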

    Even a modest boost could mean fewer turbines generating the same output, or the ability to place more units into a smaller space, because negative interactions between the turbines can be diminished.

    Howland says the model can predict potential benefits in a variety of scenarios at different types of wind farms. “The part that’s important and exciting is that it’s not just particular to this wind farm. We can apply the collective control method across the wind farm fleet,” he says, which is growing taller and wider.

    By 2035, the average hub height for offshore turbines in the United States is projected to grow from 100 meters to around 150 meters — the height of the Washington Monument.

    “As we continue to build larger wind turbines and larger wind farms, we need to revisit the existing practice for their design and control,” Howland says. “We can use our predictive models to ensure that we build and operate the most efficient renewable generators possible.”

    Looking to the future

    Howland and other climate watchers have reason for optimism with the passage in August 2022 of the Inflation Reduction Act, which calls for a significant investment in domestic energy production and for reducing carbon emissions by roughly 40 percent by 2030.

    But Howland says the act itself isn’t sufficient. “We need to continue pushing the envelope in research and development as well as deployment,” he says. The model he created with his team can help, especially for offshore wind farms experiencing low wind turbulence and larger wake interactions.

    Offshore wind can face challenges of public acceptance. Howland believes that researchers, policymakers, and the energy industry need to do more to get the public on board by addressing concerns through open public dialogue, outreach, and education.

    Howland once wrote and illustrated a children’s book, inspired by Dr. Seuss’s “The Lorax,” that focused on renewable energy. Howland recalls his “really terrible illustrations,” but he believes he was onto something. “I was having some fun helping people interact with alternative energy in a more natural way at an earlier age,” he says, “and recognize that these are not nefarious technologies, but remarkable feats of human ingenuity.”

  • Helping the cause of environmental resilience

    Haruko Wainwright, the Norman C. Rasmussen Career Development Professor in Nuclear Science and Engineering (NSE) and assistant professor in civil and environmental engineering at MIT, grew up in rural Japan, where many nuclear facilities are located. She remembers worrying about the facilities as a child. Wainwright was only 6 at the time of the Chernobyl accident in 1986, but still recollects it vividly.

    Those early memories have contributed to Wainwright’s determination to research how technologies can mold environmental resilience — the capability of mitigating the consequences of accidents and recovering from contamination.

    Wainwright believes that environmental monitoring can help improve resilience. She co-leads the U.S. Department of Energy (DOE)’s Advanced Long-term Environmental Monitoring Systems (ALTEMIS) project, which integrates technologies such as in situ sensors, geophysics, remote sensing, simulations, and artificial intelligence to establish new paradigms for monitoring. The project focuses on soil and groundwater contamination at more than 100 U.S. sites that were used for nuclear weapons production.

    As part of this research, which was featured last year in Environmental Science & Technology Journal, Wainwright is working on a machine learning framework for improving environmental monitoring strategies. She hopes the ALTEMIS project will enable the rapid detection of anomalies while ensuring the stability of residual contamination and waste disposal facilities.
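
    As a hedged illustration of the kind of anomaly flagging such a framework automates (this is not the ALTEMIS implementation; the data, window, and threshold are made-up assumptions), a simple rolling-baseline check over sensor readings might look like this:

    ```python
    # Minimal sketch of sensor-stream anomaly flagging for long-term monitoring.
    # Illustrative only: not the ALTEMIS framework; data and thresholds are synthetic.
    import numpy as np

    def flag_anomalies(readings, window=30, z_threshold=4.0):
        """Flag readings that deviate strongly from the recent baseline."""
        readings = np.asarray(readings, dtype=float)
        flags = np.zeros(len(readings), dtype=bool)
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
                flags[i] = True
        return flags

    # Example: a synthetic groundwater conductivity series with one injected excursion.
    rng = np.random.default_rng(0)
    series = rng.normal(500.0, 5.0, 200)   # hypothetical baseline readings
    series[150] += 60.0                    # simulated contaminant-driven spike
    print(np.where(flag_anomalies(series))[0])   # expected: the spike at index 150
    ```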

    Childhood in rural Japan

    Even as a child, Wainwright was interested in physics, history, and a variety of other subjects.

    But growing up in a rural area was not ideal for someone interested in STEM. There were no engineers or scientists in the community and no science museums, either. “It was not so cool to be interested in science, and I never talked about my interest with anyone,” Wainwright recalls.

    Television and books were the only door to the world of science. “I did not study English until middle school and I had never been on a plane until college. I sometimes find it miraculous that I am now working in the U.S. and teaching at MIT,” she says.

    As she grew a little older, Wainwright heard a lot of discussions about nuclear facilities in the region and many stories about Hiroshima and Nagasaki.

    At the same time, giants like Marie Curie inspired her to pursue science. Nuclear physics was particularly fascinating. “At some point during high school, I started wondering ‘what are radiations, what is radioactivity, what is light,’” she recalls. Reading Richard Feynman’s books and trying to understand quantum mechanics made her want to study physics in college.

    Pursuing research in the United States

    Wainwright pursued an undergraduate degree in engineering physics at Kyoto University. After two research internships in the United States, Wainwright was impressed by the dynamic and fast-paced research environment in the country.

    And compared to Japan, there were “more women in science and engineering,” Wainwright says. She enrolled at the University of California at Berkeley in 2005, where she completed her doctorate in nuclear engineering with minors in statistics and civil and environmental engineering.

    Before moving to MIT NSE in 2022, Wainwright was a staff scientist in the Earth and Environmental Area at Lawrence Berkeley National Laboratory (LBNL). She worked on a variety of topics, including radioactive contamination, climate science, CO2 sequestration, precision agriculture, and watershed science. Her time at LBNL helped Wainwright build a solid foundation about a variety of environmental sensors and monitoring and simulation methods across different earth science disciplines.   

    Empowering communities through monitoring

    One of the most compelling takeaways from Wainwright’s early research: People trust actual measurements and data as facts, even though they are skeptical about models and predictions. “I talked with many people living in Fukushima prefecture. Many of them have dosimeters and measure radiation levels on their own. They might not trust the government, but they trust their own data and are then convinced that it is safe to live there and to eat local food,” Wainwright says.

    She has been impressed that area citizens have gained significant knowledge about radiation and radioactivity through these efforts. “But they are often frustrated that people living far away, in cities like Tokyo, still avoid agricultural products from Fukushima,” Wainwright says.

    Wainwright thinks that data derived from environmental monitoring — through proper visualization and communication — can address misconceptions and fake news that often hurt people near contaminated sites.

    Wainwright is now interested in how these technologies — tested with real data at contaminated sites — can be proactively used for existing and future nuclear facilities “before contamination happens,” as she explored for Nuclear News. “I don’t think it is a good idea to simply dismiss someone’s concern as irrational. Showing credible data has been much more effective to provide assurance. Or a proper monitoring network would enable us to minimize contamination or support emergency responses when accidents happen,” she says.

    Educating communities and students

    Part of empowering communities involves improving their ability to process science-based information. “Potentially hazardous facilities always end up in rural regions; minorities’ concerns are often ignored. The problem is that these regions don’t produce so many scientists or policymakers; they don’t have a voice,” Wainwright says. “I am determined to dedicate my time to improve STEM education in rural regions and to increase the voice in these regions.”

    In a project funded by the DOE, she collaborates with a team of researchers at the University of Alaska (the Alaska Center for Energy and Power and the Teaching Through Technology program) to improve STEM education for rural and Indigenous communities. “Alaska is an important place for energy transition and environmental justice,” Wainwright says. Micro-nuclear reactors can potentially improve the lives of rural communities that bear the brunt of high fuel and transportation costs. However, there is distrust of nuclear technologies, stemming from past nuclear weapons testing. At the same time, Alaska has vast metal mining resources for renewable energy and batteries, and there are concerns about environmental contamination from mining and other sources. The team’s vision is much broader, she points out. “The focus is on broader environmental monitoring technologies and relevant STEM education, addressing general water and air qualities,” Wainwright says.

    The issues also weave into the courses Wainwright teaches at MIT. “I think it is important for engineering students to be aware of environmental justice related to energy waste and mining as well as past contamination events and their recovery,” she says. “It is not OK just to send waste to, or develop mines in, rural regions, which could be a special place for some people. We need to make sure that these developments will not harm the environment and health of local communities.” Wainwright also hopes that this knowledge will ultimately encourage students to think creatively about engineering designs that minimize waste or recycle material.

    The last question on the final quiz of one of her recent courses was: Assume that you store high-level radioactive waste in your “backyard.” What technical strategies would make you and your family feel safe? “All students thought about this question seriously, and many suggested excellent points, including those addressing environmental monitoring,” Wainwright says. “That made me hopeful about the future.”

  • in

    Tackling counterfeit seeds with “unclonable” labels

    Average crop yields in Africa are consistently far below those expected, and one significant reason is the prevalence of counterfeit seeds whose germination rates are far lower than those of the genuine ones. The World Bank estimates that as much as half of all seeds sold in some African countries are fake, which could help to account for crop production that is far below potential.

    There have been many attempts to prevent this counterfeiting through tracking labels, but none have proved effective; among other issues, such labels have been vulnerable to hacking because of the deterministic nature of their encoding systems. But now, a team of MIT researchers has come up with a kind of tiny, biodegradable tag that can be applied directly to the seeds themselves, and that provides a unique randomly created code that cannot be duplicated.

    The new system, which uses minuscule dots of silk-based material, each containing a unique combination of different chemical signatures, is described today in the journal Science Advances in a paper by MIT’s dean of engineering Anantha Chandrakasan, professor of civil and environmental engineering Benedetto Marelli, postdoc Hui Sun, and graduate student Saurav Maji.

    The problem of counterfeiting is an enormous one globally, the researchers point out, affecting everything from drugs to luxury goods, and many different systems have been developed to try to combat this. But there has been less attention to the problem in the area of agriculture, even though the consequences can be severe. In sub-Saharan Africa, for example, the World Bank estimates that counterfeit seeds are a significant factor in crop yields that average less than one-fifth of the potential for maize, and less than one-third for rice.

    Marelli explains that a key to the new system is creating a randomly produced physical object whose exact composition is virtually impossible to duplicate. The labels they create “leverage randomness and uncertainty in the process of application, to generate unique signature features that can be read, and that cannot be replicated,” he says.

    What they’re dealing with, Sun adds, “is the very old job of trying, basically, not to get your stuff stolen. And you can try as much as you can, but eventually somebody is always smart enough to figure out how to do it, so nothing is really unbreakable. But the idea is, it’s almost impossible, if not impossible, to replicate it, or it takes so much effort that it’s not worth it anymore.”

    The idea of an “unclonable” code was originally developed as a way of protecting the authenticity of computer chips, explains Chandrakasan, who is the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In integrated circuits, individual transistors have slightly different properties coined device variations,” he explains, “and you could then use that variability and combine that variability with higher-level circuits to create a unique ID for the device. And once you have that, then you can use that unique ID as a part of a security protocol. Something like transistor variability is hard to replicate from device to device, so that’s what gives it its uniqueness, versus storing a particular fixed ID.” The concept is based on what are known as physically unclonable functions, or PUFs.
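
    As a generic sketch of that PUF principle (not the chip or silk-tag implementation; the bit length and noise rate are assumptions), enrollment and verification of an unclonable response can be outlined as follows:

    ```python
    # Generic sketch of a physically unclonable function (PUF) workflow: a device's
    # uncontrollable physical variation yields a bit string that is enrolled once and
    # later verified with some tolerance for readout noise. Illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(42)

    def read_puf(true_bits, noise_rate=0.02):
        """Simulate one readout of a device's PUF response; a few bits flip each time."""
        flips = rng.random(true_bits.size) < noise_rate
        return np.logical_xor(true_bits, flips).astype(np.uint8)

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    # Enrollment: record one readout per device when it is manufactured.
    device_bits = rng.integers(0, 2, size=128, dtype=np.uint8)  # the device's "true" response
    enrolled = read_puf(device_bits)

    # Verification: a later readout of the same device stays close in Hamming distance,
    # while a clone's response disagrees on roughly half of the bits.
    later_readout = read_puf(device_bits)
    clone_readout = rng.integers(0, 2, size=128, dtype=np.uint8)

    print(hamming(enrolled, later_readout))  # small: readout noise only
    print(hamming(enrolled, clone_readout))  # around 64 of the 128 bits differ
    ```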

    The team decided to try to apply that PUF principle to the problem of fake seeds, and the use of silk proteins was a natural choice because the material is not only harmless to the environment but also classified by the Food and Drug Administration in the “generally recognized as safe” category, so it requires no special approval for use on food products.

    “You could coat it on top of seeds,” Maji says, “and if you synthesize silk in a certain way, it will also have natural random variations. So that’s the idea, that every seed or every bag could have a unique signature.”

    Developing effective secure system solutions has long been one of Chandrakasan’s specialties, while Marelli has spent many years developing systems for applying silk coatings to a variety of fruits, vegetables, and seeds, so their collaboration was a natural fit for developing such a silk-based coding system to enhance security.

    “The challenge was what type of form factor to give to silk,” Sun says, “so that it can be fabricated very easily.” They developed a simple drop-casting approach that produces tags that are less than one-tenth of an inch in diameter. The second challenge was to develop “a way where we can read the uniqueness, in also a very high throughput and easy way.”

    For the unique silk-based codes, Marelli says, “eventually we found a way to add a color to these microparticles so that they assemble in random structures.” The resulting unique patterns can be read out not only by a spectrograph or a portable microscope, but even by an ordinary cellphone camera with a macro lens. This image can be processed locally to generate the PUF code and then sent to the cloud and compared with a secure database to ensure the authenticity of the product. “It’s random so that people cannot easily replicate it,” says Sun. “People cannot predict it without measuring it.”
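
    The readout-and-comparison flow described here can be sketched roughly as below; the lot IDs, code length, and acceptance threshold are illustrative assumptions rather than the team’s actual protocol:

    ```python
    # Rough sketch of phone-side verification against a cloud database of enrolled codes.
    # Hypothetical names and threshold; not the protocol from the Science Advances paper.
    import numpy as np

    rng = np.random.default_rng(7)
    CODE_BITS, THRESHOLD = 128, 15   # accept if at most 15 of 128 bits differ

    # Cloud side: codes recorded when each seed lot was tagged.
    database = {f"lot-{i:03d}": rng.integers(0, 2, CODE_BITS, dtype=np.uint8)
                for i in range(1000)}

    def verify(lot_id, readout_bits):
        """Accept the product only if the readout matches the enrolled code for lot_id."""
        enrolled = database.get(lot_id)
        if enrolled is None:
            return False
        distance = int(np.count_nonzero(enrolled != readout_bits))
        return distance <= THRESHOLD

    # Phone side: a genuine tag's readout differs by a few noisy bits; a counterfeit
    # tag produces an essentially random code that lands nowhere near the enrolled one.
    genuine = database["lot-042"].copy()
    genuine[:5] ^= 1                                     # five noisy bit flips
    fake = rng.integers(0, 2, CODE_BITS, dtype=np.uint8)

    print(verify("lot-042", genuine))   # True
    print(verify("lot-042", fake))      # almost certainly False
    ```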

    And the number of possible permutations that could result from the way they mix four basic types of colored silk nanoparticles is astronomical. “We were able to show that with a minimal amount of silk, we were able to generate 128 random bits of security,” Maji says. “So this gives rise to 2 to the power 128 possible combinations, which is extremely difficult to crack given the computational capabilities of the state-of-the-art computing systems.”
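
    For a sense of that scale, a quick calculation (with an assumed, generous attacker guess rate) shows why a 128-bit code is treated as practically uncrackable:

    ```python
    # Rough scale of a 128-bit keyspace. The guess rate is an assumed figure.
    keyspace = 2 ** 128
    guesses_per_second = 1e12          # assumed attacker capability: a trillion guesses/s
    seconds_per_year = 3.15e7
    years_to_exhaust = keyspace / guesses_per_second / seconds_per_year
    print(f"{keyspace:.3e} combinations, ~{years_to_exhaust:.1e} years to exhaust")
    ```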

    Marelli says that “for us, it’s a good test bed in order to think out-of-the-box, and how we can have a path that somehow is more democratic.” In this case, that means “something that you can literally read with your phone, and you can fabricate by simply drop casting a solution, without using any advanced manufacturing technique, without going in a clean room.”

    Some additional work will be needed to make this a practical commercial product, Chandrakasan says. “There will have to be a development for at-scale reading” via smartphones. “So, that’s clearly a future opportunity.” But the principle now shows a clear path to the day when “a farmer could at least, maybe not every seed, but could maybe take some random seeds in a particular batch and verify them,” he says.

    The research was partially supported by the U.S. Office of Naval Research, the National Science Foundation, Analog Devices Inc., an EECS MathWorks fellowship, and a Paul M. Cook Career Development Professorship.