Advanced science, Astrophysics, Life as it is, Religious, Technical

Everything from Nothing

Can everything we see on earth and the planets, the stars, galaxies, supernovae and so forth come from nothing, from absolute vacuum, from empty space, and can even empty space itself spring up from nowhere? Some might dismiss this query as absurd and baseless, while others might call it a profound scientific inquiry, beyond the pigeon-holed mode of thinking.

Philosophers and theologians of all persuasions have tried to convince us that everything we see in the universe is divine creation. But we must set off with certain fundamental assumptions: we have to accept the existence of an all-powerful, omnipresent, omniscient entity called God or Yahweh or Allah, and we cannot question his origin, his present whereabouts or his mode of creation. Based on these premises, the revelations, directives etc as stated in the 'Book' should be followed as ordered by the creator!

But science is unwilling to accept this premise without evidence or verification. That is why there is a conflict between science and religion. As Richard Dawkins, Emeritus Fellow of New College, Oxford and evolutionary biologist, said, "I am against religion because it teaches us to be satisfied with not understanding the world".

Science has moved away from accepting the divine proclamations that human beings are at the centre of the creator's creation, that Earth is at the centre of the universe and that the Sun goes around the Earth! Scientific discoveries have proved that many of these proclamations, if not all, are blatantly wrong.

Science explored material objects on Earth, from day-to-day objects down to their physical and chemical composition: physical objects to molecules, to atoms, to sub-atomic particles. On the smallest scale, quantum mechanics explored the origin of matter and anti-matter, and on the mind-bogglingly expansive scale of the universe, the general theory of relativity explored stars, galaxies, black holes, wormholes, the universe and even the multiverse.

The theologians would burst out in fury if someone, be it a scientist or a science writer, tried to give a scientific explanation of something, or everything, coming from nothing. They would vent their anger: what, then, is the omnipresent, omniscient divine power called God or Yahweh or Allah doing? Is He not the undisputed Creator of everything in this universe? For centuries the religions have been proclaiming and propagating this message relentlessly, and any attempt to explain things otherwise, on the basis of scientific ideas and theories, is branded as heresy and atheism.

Nonetheless, science has progressed enough to give a rational explanation of the creation of everything from nothing. But, first, we must understand the scientific meaning of the term 'nothing'. In everyday language, nothing means the absence of anything. If we consider a volume of space, say 20cm by 20cm by 20cm, in front of our eyes, we may say there is nothing in there, as there is no book, no pencil, no string, no fruit or anything else in that small volume. But then we must recognise that there are innumerable air molecules of various types in that volume, which we cannot see but breathe all the time. So there are things where we perceive there to be nothing.

Let us take an air-tight glass case where obviously there are air particles along with air pollutants, allergens etc. Now, if we pump out these particles very carefully and create an ultra-high vacuum, can we say that there is nothing in the glass case? No, we cannot, because modern physics shows us otherwise.

Quantum fluctuations in an absolute vacuum

The two branches of modern physics – the general theory of relativity and quantum mechanics – give us descriptions of physical processes which are mind-boggling, counter-intuitive and occasionally plainly weird. Even Einstein, who single-handedly produced the general theory of relativity and pioneered quantum physics, had extreme difficulty in absorbing the full implications and interplay of these two theories.

Einstein produced the mass-energy equivalence, E = mc²; a very elegant and at the same time extremely important equation. What it means is that the mass of an object, such as an atom or a molecule or the large number of molecules in a ball, an apple, a pencil and so forth, has an equivalent energy, and conversely an amount of energy has an equivalent mass. This is not a theoretical physicist's crazy idea; it has been observed in practice in particle physics experiments, in radioactive decay and in nuclear reactors. A certain amount of energy suddenly disappears, and a very small particle called the electron and its anti-particle, the positron, appear. The electron is what we use to generate electricity and to run a television, radio, mobile phone etc; in our everyday parlance, it is matter. The positron, on the other hand, is anti-matter. When this matter (electron) and anti-matter (positron) come in contact, they annihilate each other and an amount of energy is produced which is exactly equal to what disappeared in the first place to produce the electron-positron pair.
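As a quick back-of-the-envelope check, one can compute the rest energy of an electron directly from E = mc². The sketch below uses standard textbook constants, not figures from this article:

```python
# Back-of-the-envelope check of E = m*c^2 for electron-positron pair production.
# Constants are standard textbook values (rounded).

m_e = 9.109e-31        # electron (or positron) rest mass, kg
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electronvolt

rest_energy_J = m_e * c**2                  # rest energy of one electron
rest_energy_MeV = rest_energy_J / eV / 1e6  # convert joules -> MeV

print(f"One electron: {rest_energy_MeV:.3f} MeV")        # ~0.511 MeV
print(f"Pair threshold: {2 * rest_energy_MeV:.3f} MeV")  # ~1.022 MeV
```

So any parcel of energy above roughly 1.022 MeV is, in principle, enough to materialise an electron-positron pair.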

Alongside this mass-energy equivalence, one may consider quantum physics' uncertainty principle, formulated by Werner Heisenberg. We must remember that quantum mechanics deals with very small particles such as electrons, positrons, atoms and sub-atomic particles. The basic tenet of this principle is that we cannot simultaneously measure certain pairs of observables, such as energy and time or position and momentum of a particle, with absolute accuracy. The product of the uncertainties of such a pair (ΔE·Δt or Δp·Δx) is always at least of the order of the reduced Planck constant, ħ = h/2π (more precisely, ΔE·Δt ≥ ħ/2). In other words, if we measure the energy of a quantum particle very precisely, then there is an inherent uncertainty in the time at which the energy measurement was made, such that the product of the two uncertainties cannot fall below this limit. The uncertainty principle is the bedrock of quantum mechanics; it has been proven time and time again to be inviolable, holding true in all quantum events. Heisenberg received the Nobel Prize in Physics in 1932 for his contribution to quantum mechanics.

In the subatomic world of quantum mechanics, there may arise a situation known as a quantum fluctuation. In an otherwise complete vacuum (containing nothing), a quantum fluctuation can produce an amount of energy, and that energy can generate a virtual electron-positron pair in the system. That energy comes from nature, as if nature were lending it to the system. When the electron and positron come in contact with each other, which they do in a flash, both of them disappear instantly, an amount of energy is produced (equal to the energy that produced the pair in the first place), that energy is returned to nature, and everything is squared up.
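The energy-time uncertainty relation even puts a rough upper bound on how long such a virtual pair can exist. The sketch below assumes the "borrowed" energy ΔE equals the pair's rest energy of about 1.022 MeV; the constants are standard values, not taken from the article:

```python
# Rough estimate of how long a virtual electron-positron pair can exist,
# using the energy-time uncertainty relation dE * dt >= hbar / 2.

hbar = 1.055e-34       # reduced Planck constant, J*s
eV = 1.602e-19         # joules per electronvolt

dE = 1.022e6 * eV      # energy "borrowed" to make the pair: 2 x 0.511 MeV, in J
dt = hbar / (2 * dE)   # longest lifetime allowed by the uncertainty relation

print(f"Borrowed energy: {dE:.3e} J")
print(f"Lifetime of the virtual pair: {dt:.1e} s")  # ~3.2e-22 s
```

The pair must repay its energy loan within about 10⁻²² seconds, which is why these fluctuations are invisible in everyday life.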

This borrowing of energy from nature, the formation of an electron-positron pair (or, more generally, matter-antimatter pairs), their annihilation and the return of the energy to nature are taking place all the time, everywhere, even in a vacuum where we consider there is absolutely nothing. These are quantum fluctuations. They are not a mad professor's or mad scientist's utter gibberish; they are actual physical phenomena which have been demonstrated in high-energy physics laboratories. If one measures the charge of an electron with high precision, one can find a sudden fluctuation in the charge or a slight wobble in the electron's trajectory. This is due to the interaction of the real electron with momentarily appearing virtual electron-positron pairs.

Billions and trillions of matter-antimatter particle pairs are being generated and annihilated all the time in space. Now, a situation may arise when a small fraction of these particles is not annihilated instantaneously and the matter and anti-matter particles move away from each other. In fact, it has been estimated that approximately one in a billion such pairs escaped annihilation and moved away to lead separate lives at the time of the Big Bang. Electrons and the other matter particles (which form atoms) of our everyday world came out and formed the present universe, and the positrons and other anti-matter particles formed an anti-matter world somewhere far away from the matter world, or they may have formed a separate anti-matter universe.

Our matter universe and the anti-matter universe are mortal enemies. Should they come in contact, they will annihilate each other instantly and an unimaginable release of energy will take place. This energy is what the matter universe and anti-matter universe owe to nature, because it was borrowed when matter and anti-matter particles were formed on a gigantic scale. Whereas all the other particles returned their energies to nature, these particles, statistically one in a billion, escaped repayment and formed the universe.

The Big Bang from quantum fluctuations

This is how the universe, as perceived now, came into existence: the formation of the universe out of nothing, and its likely eventual disappearance back to nothing. There is no need to invent a divine power and then lay everything at the feet of that invention. In fact, such an invention, all within the confines of our minds, would create more insurmountable problems in explaining things as they stand – such as where is the divine power now, how did he create these things, did he create the universe on a whim or did he have an ultimate purpose, and so on?

Albert Einstein was deeply sceptical about a divine power. He expressed his thoughts quite bluntly: "I want to know how God created this world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts; the rest are details".

It must be stated that the present perception of the creation of the universe is not a done deal. Debates about the universe, its progression, its ultimate fate etc are all raging in the scientific community. This is to science's credit – science never claims to have achieved the ultimate truth; anything held to be true now can be changed in the light of new evidence and new facts. This is in stark contrast with religion, where everything is claimed to have come from God or Allah and hence is not subject to any alteration or modification. This is what science rejects.

  • Dr A Rahman is an author and a columnist.

Advanced science, Environmental, Human Rights, International, Life as it is, Political, Religious, Technical

Are we heading towards genetic disaster?

Life on earth, in its various forms and shapes, has come about through very complex and convoluted processes. From single-cell organisms like the amoeba to multi-cellular organisms like plants and animals, life has progressed through millions of years of slow and painstaking trial and error, alteration, modification and so forth, collectively called the evolutionary process. Eventually, when an organism emerges in some viable form, that is not the end of the process; it is only the beginning. It will go on to further refinement, to a better, fitter form of life. It may, nonetheless, take a wrong evolutionary step, suffer the wrath of nature and go extinct. For every surviving form of life, there are hundreds of similar lives that either failed to develop properly or went extinct.

Life, particularly human life, comes into existence in a tortuous way. When a male sperm cell fertilises a female egg cell, a combined single cell, called the zygote, is formed. The sperm cell and the egg cell are reproductive cells, each containing 23 chromosomes; when they combine, they make up a fully developed cell containing 46 chromosomes. It may be noted that not all sperm cells fertilise egg cells. What triggers fertilisation is still a mystery; it may just be the luck of the draw. The single-cell zygote keeps dividing, by a process called cell division, as it moves along the Fallopian tube towards the uterus. The zygote contains all the genetic instructions inherited from the father and the mother. When it reaches the uterus, in three to five days, it becomes what is called a blastocyst, a ball of cells. The blastocyst consists of two parts: an outer cell mass that becomes part of the placenta and an inner cell mass that becomes the human body.

A cell is the basic functional unit of life. It is surrounded by a cell membrane; within the membrane lies a blob of transparent dilute fluid, the cytoplasm, and within the cytoplasm lies the cell nucleus. The nucleus of a human cell contains 46 chromosomes in 23 pairs. A chromosome consists of a very long DNA helix on which thousands of genes are embedded. It was anticipated that the secrets of life are all hidden within these DNA molecules. The discovery of this secret is a fascinating story.

In the early 1940s, the Austrian physicist Erwin Schrödinger, one of the pioneers of quantum physics, wrote a very thoughtful science classic called 'What is Life?'. He maintained that no divine power or mysterious spirit was needed to animate life. He speculated that the life force must come from within the body, probably embedded within the molecules of the body. Inspired by Schrödinger's book, the physicist Francis Crick teamed up with the geneticist James Watson, and, together with Maurice Wilkins, they studied molecular biology and discovered the structure of the DNA molecule within the cell. Crick and Watson proposed that the DNA molecule has a double helix structure and that the base pairs linking the two strands of the helix carry the codes necessary for life. For their discovery, the trio won the Nobel Prize in Physiology or Medicine in 1962. The segments of the DNA molecule carrying specific instructions for particular actions are called genes.

Thus, cells, with all their internal complexities and functions, constitute the smallest units of life. There are multitudes of cell types, but the basic structure is the same. The blastocyst formed out of the zygote contains embryonic stem cells. It is estimated that a fully developed human body contains about 40 trillion (40,000 billion) cells.
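As an aside, the two numbers quoted here invite a small sanity check: how many doublings would take a single zygote to roughly 40 trillion cells? The sketch below assumes idealised, synchronised doubling with no cell death or differentiation, which real development certainly is not:

```python
# If development were nothing but synchronised doublings (it is not - cells
# differentiate and die), how many divisions would take one zygote to the
# ~40 trillion cells of an adult body quoted above?
import math

target_cells = 40e12                  # ~40 trillion cells
doublings = math.log2(target_cells)   # n such that 2**n = target_cells

print(f"About {doublings:.0f} doublings")  # ~45
```

Only about 45 rounds of doubling separate one cell from a whole body, which gives a feel for how powerful exponential cell division is.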

The embryonic stem cells are extremely important as they contain all the genetic information of an individual, unmodified and unaltered. These embryonic stem cells are pluripotent stem cells, meaning they can divide into more stem cells or can change into any type of tissue cell like the blood cell, liver cell, skin cell, brain cell etc. Because of this ability and its unaltered state, embryonic stem cells are highly prized for medical research. But there are downsides too; the embryo has to be sacrificed to extract these cells and that raises serious ethical objections. 

When embryonic stem cells mature, they become tissue-specific somatic cells tasked with producing body tissues; there are more than 200 types of tissue cell in the body. Each of these cells contains the full genetic code, no matter where it finds itself, although all instructions to divide and grow are suppressed except those for its particular tissue. For example, blood cells are only responsible for generating blood, liver cells for the liver, skin cells for skin and so on, although each one carries the full blueprint for life. There are also non-embryonic stem cells, namely adult stem cells and induced pluripotent stem cells.

Medical research is going ahead using stem cells to treat ailments such as strokes, to repair heart muscle following a heart attack, and to tackle neurological conditions like Alzheimer's disease and Parkinson's disease. Stem cells can also be used to produce insulin for people with diabetes.

Stem cells can also be used to regenerate or repair organs such as the nose, ears and lungs, or limbs such as arms and legs. This promises tremendous benefits for soldiers who have lost organs or limbs on battlefields. They could have their limbs repaired genetically or even grown anew in the laboratory. These are not pie-in-the-sky aspirations: some organs, such as hearts and lungs, have already been developed in laboratories, though not in situ in primates or humans.

With such wide-ranging medical benefits against incurable and debilitating diseases and ailments, why then are Western countries putting restrictions on the use of stem cells, particularly embryonic stem cells, in medical research? It is because, from the cure of these diseases, it is a small step to modifying the human genome in such a way that superhumans can be produced artificially. In other words, superhuman Frankensteins could be produced with all the attributes one desires. Thus, uncontrolled medical research can lead to eugenics, or at least make it a distinct possibility.

Before and during the Second World War, Hitler and his Nazi party seriously considered developing a super Euro-Aryan race whose people would be not only physically strong and intellectually superior, but also free from all genetic diseases. It may, however, be noted that this idea of eugenics was not an original Nazi invention; it was imitated from a Californian organisation that had been working on it for quite a few years before the 1930s.

When cloned humans with edited and vetted genes are produced, what would be the fate of normal human beings born traditionally, through male-female fertilisation, with a normal genetic make-up? Eugenics proposed that all those people deemed by the State to be racially inferior, such as Jews and gypsies, as well as handicapped and genetically abnormal people, were to be exterminated to make way for the superior human race! That eugenics died with Hitler was a great blessing for the human race.

Stem cell research with the specific purpose of curing diseases like diabetes, cancer, genetic disorders and neurological diseases like Parkinson's and Alzheimer's is the beneficial aspect. But it can go a little further and pave the way to dehumanising humans, or even destroying humanity. It is a double-edged sword: use it carefully, or risk being destroyed by it.

One thing that this genetic manipulation has done, or is on the brink of doing, is to make human 'immortality' a possibility. Although sheep, cattle etc have been successfully cloned, human beings have not, primarily because research on and in-situ testing of human cloning are banned almost everywhere in the world. But if that ban were removed, the technology could be developed in a short period of time. In outline: a cell is taken from an adult, its nucleus containing the DNA is extracted and inserted into an egg cell, which is then allowed to develop in the normal way, and a clone of the donor comes out! Of course, it is not as easy as it sounds, but the technology is almost there.

The implications of human cloning are enormous. A very rich man (or woman) nearing the end of his (or her) life may decide to live on forever. Of course, he himself cannot live forever; as he ages, his body functions will deteriorate and his body will gradually decay. But what he can do is donate his cells, particularly stem cells, for future use. His stem cells may be deep-frozen and, as per his instructions, used at the desired time to produce a human being through the cloning process. That particular (rich) man is thus reborn; one could say he is reincarnated. He can also, in his Will, transfer his wealth to the yet-to-be-born child, so that when the cloned child is born, he is as wealthy as his predecessor. The boy will have all the body functions and characteristics of the donor, but not his memory, nor the traits derived from that memory. In other words, he will start with a blank slate of a brain. He will have to learn everything afresh, go to school, play games and develop his individuality, but in an exact replica of the donor's body. Thus, this man can replicate himself over and over again and 'live' forever.

We are now at the threshold of genetic revivalism, for good or for bad. Gone are the days when we had to believe blindly in a fictitious divine power creating life on earth (through Adam and Eve) and submit to religious edicts without question! In reality, life evolved from single-cell organisms to multi-cellular ones. Now science and technology have progressed sufficiently to create and recreate lives with any genetic make-up. But if we allow artificial genetic creation to take over the natural evolutionary process, it would be a disaster of unparalleled proportions. We must resist that temptation at all costs.

  • Dr A Rahman is an author and a columnist

Advanced science, Astrophysics, Cultural, Environmental, Life as it is, Religious, Technical

Entropy and the arrow of time

Greek philosophers some millennia ago, and many philosophers around the world over the centuries since, have been raising deep-rooted perennial questions: what is life, where was its beginning and where is its end, what makes life continue, and many more intractable questions like these. These are questions of profound significance, which have so far been answered in many divergent ways – in pure, incomprehensible philosophical terms, in supernatural religious terms and so forth.

However, scientifically inclined people, who centuries ago would have been called natural philosophers, would pose the same questions in somewhat different terms: how did life begin, when did it begin, how did it evolve, what is the nature of time and what is the flow of time? Again, these questions are not easy to answer, but at least scientists have structured and sequenced them so that answers become easier.

Natural philosophy evolved from pure philosophical inquiry and inquisitiveness. Scientific disciplines were considered, in effect, an extension of wider philosophical queries. That is why even today the highest academic degrees, both scientific and non-scientific, are titled Doctor of Philosophy (PhD). The physical sciences are those that describe the physical processes of nature in numerical and quantitative terms.

Heat, temperature, enthalpy, entropy, energy etc are quantities within the subject matter of thermodynamics and statistical mechanics. These subjects, along with Newtonian physics, electricity and magnetism, optics etc, were bundled together as 'classical physics'. The name does not mean that these subjects have become 'classical' – outdated and outmoded – with nothing more to learn from them; far from it. It only means that these traditional subjects were set apart from the newer disciplines that emerged roughly from the beginning of the 20th century – the general theory of relativity, quantum mechanics, particle physics, cosmology etc – which are called 'modern physics'.

This traditional segregation of the branches of physics into classical and modern is purely arbitrary. There is no boundary line, no demarcation in terms of either time or discipline, between classical and modern physics. Entropy, a parameter invented in the 19th century as a thermodynamic quantity, has profound implications for the space-time continuum and the Big Bang theory of modern physics!

Entropy measuring disorder and the arrow of time

First of all, we need to understand what heat is before we can understand entropy. In olden days – the 17th century and earlier – people used to visualise heat as some sort of fluid called 'caloric'. This caloric was imagined to have two parts, hot and cold: a body was hot because it had more hot fluid and less cold fluid, and cold because it had more cold fluid than hot fluid. When hot and cold bodies came in contact, hot fluid moved from the hot to the cold body, thereby rendering the cold body somewhat hotter! Nonetheless, those scientists did manage to identify a very important parameter called 'temperature' that measures a body's 'hotness' or 'coldness'.

In reality, heat is thermal energy, which arises from the vibration, oscillation or physical motion of the atoms and molecules that make up a body. When a body at a higher temperature comes in contact with a body at a lower temperature, the excess vibrational energy of its atoms and molecules is transferred to the body at the lower temperature. It is the temperature that dictates the direction of the flow of heat.

Let us now consider what entropy is. The change in entropy is a thermodynamic quantity: the amount of heat energy transferred divided by the (absolute) temperature at which the transfer takes place, ΔS = Q/T. As the probability of energy flowing from the higher-energy body to the lower-energy body is vastly higher than the other way around, heat is always found to flow from a hotter body to a colder body, and the total entropy change is positive in that situation. Should heat flow of its own accord from a colder body to a hotter one – the probability of which is very low indeed – the total entropy change would be negative. But in nature heat never spontaneously flows from a colder to a hotter body, and the total entropy change is never negative. Since heat (arising from the motions of atoms and molecules) spreads from hot to cold bodies, entropy is a measure of disorder in the composite system. As disorder increases, so does entropy.

It may be pointed out that when heat is shared between bodies, their relative sizes do not matter. For example, a hot teaspoon dipped in a bucket of water will transfer some heat from the spoon to the water, even though the total energy of the bucket of water may be much higher than that of the spoon. As stated above, it is the temperature that dictates the flow of heat, and thereby the increase in entropy.
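A small worked example, with illustrative numbers of my own choosing (the article gives none), shows why the total disorder still rises even though the spoon itself loses entropy:

```python
# Net entropy change when a small amount of heat Q flows from a hot teaspoon
# to cooler water, using dS = Q/T. Numbers are illustrative; temperatures are
# treated as approximately constant because Q is small.

Q = 100.0        # heat transferred, J (assumed)
T_spoon = 353.0  # hot spoon, K (80 C)
T_water = 293.0  # bucket of water, K (20 C)

dS_spoon = -Q / T_spoon   # the spoon loses heat, so its entropy falls
dS_water = +Q / T_water   # the water gains heat, so its entropy rises
dS_total = dS_spoon + dS_water

print(f"Spoon: {dS_spoon:+.3f} J/K, water: {dS_water:+.3f} J/K")
print(f"Total: {dS_total:+.3f} J/K (positive, as the second law demands)")
```

Because the same Q is divided by a lower temperature for the water than for the spoon, the gain always outweighs the loss, and the total entropy goes up.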

This increase in entropy, or in the degree of disorder, is intricately linked to the flow of time – in physics terminology, the arrow of time. As neither time nor entropy flows in reverse, both always move in the forward direction. In our example, the heat from the spoon is transferred to the bucket of water as time passes, and that is the arrow of time. The reverse situation can hardly be visualised (although it is theoretically possible, with infinitesimally low probability): the dipped spoon recovering heat from the bucket and becoming hot again!

From the time of the Big Bang, entropy has been going up, i.e. the degree of disorder has been spreading. That is quite natural: heat flows from hotter parts of the universe to colder parts, which means entropy is always increasing.

With the advancement of the biological sciences, it has been speculated that a time will come when human beings will live for a very long time and may even become immortal. Living longer with better medical care is already happening: on average, people now live almost twice as long as they did a couple of centuries ago. But being immortal means humans would not age in time, which implies that past, present and future would all merge into one – no change in age, no change in body functions, no flow of nutrients from one part of the body to another! It would be a continuation of the same thing over and over again. In other words, human beings would live in suspended animation – neither alive nor dead – as energy flow stagnated, entropy change fell to zero and the arrow of time vanished. If that is what is meant by immortality, then perhaps it can be achieved. But in reality human beings, or for that matter any form of life, can never be immortal in the true sense of the term. A body can live for a long period of time and gradually decay, but can never last forever.

– Dr A Rahman is an author and a columnist

Advanced science, Economic, Environmental, International, Life as it is

Blue energy: Can it power a sustainable future?

Statkraft's osmotic power prototype was the world's first osmotic power plant

Ever since global warming became a hot button issue, our leaders have told us umpteen times that “climate change is the greatest environmental threat and the biggest challenge humanity has ever faced.” Yet, they are not “bold enough to do enough” to pull us out of the climate change conundrum soon enough.

In the meantime, impacts of climate change are being felt in communities across the world. Average global temperatures have risen every decade since the 1970s, and the 10 warmest years on record have all occurred since 1997. If the trend continues unchecked, very soon we will be living on a planet with unbearable heat, unbreathable air, inundated coastal areas, widespread drought and wilder weather. Indeed, an Australian think tank warns that climate change could bring about the end of civilisation, as we know it, within three decades.

So, what should we do to tackle the disastrous effects of climate change? Since human activity is responsible for climate change, human activity can also mitigate it. To that end, we have to force our national governments to stop using the suicidal fossil fuels without any further delay. In other words, we need a carbon negative economy, or at the least, a zero-carbon economy.

We already have the potential to produce everything we need with no or very little greenhouse gas emissions. It is "green" energy (solar, wind, hydropower, geothermal, nuclear) that provides an alternative, sustainable and cleaner source of energy. Promising new green technologies, such as tidal, wave and ocean thermal energy, are also on the horizon.

There is a third type of energy many of us are not familiar with—another alternative, sustainable source of energy that could be the next frontier in clean-energy technology. It is energy released during controlled mixing of a stream of saltwater and a stream of less saline water and can, therefore, be found in abundance anywhere a river meets the sea. Since energy at the river-sea nexus is produced in naturally occurring waterbodies, which are blue, it is called “blue” energy.

Blue energy exploits the phenomenon of osmosis, which is the spontaneous movement of molecules of a solvent through a semi-permeable membrane from the side of lower solute concentration into the side of higher solute concentration, until the concentration becomes equal on both sides. In the process, energy is released which can be used to generate electricity. That is why it is also called "osmotic power," or "salinity gradient power".
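To get a feel for the energy on offer, one can estimate the osmotic pressure between seawater and fresh water with the idealised van 't Hoff relation, π = icRT. The concentration, temperature and ideality assumptions below are mine, not the article's; real seawater behaves slightly non-ideally, so published figures are a little lower (around 26-27 bar):

```python
# Idealised van 't Hoff estimate of the osmotic pressure between seawater
# and fresh water: pi = i * c * R * T. Seawater is approximated as 0.6 mol/L
# of NaCl that dissociates into two ions (i = 2).

R = 8.314        # gas constant, J/(mol*K)
T = 293.0        # temperature, K (assumed ~20 C)
c = 600.0        # salt concentration, mol/m^3 (0.6 mol/L, assumed)
i = 2            # ions per dissolved NaCl unit

pi_Pa = i * c * R * T             # osmotic pressure, Pa
head_m = pi_Pa / (1000.0 * 9.81)  # equivalent column of fresh water, m

print(f"Osmotic pressure: {pi_Pa/1e5:.0f} bar")   # ~29 bar
print(f"Equivalent water head: {head_m:.0f} m")   # ~298 m
```

In other words, every river mouth is, energetically speaking, a hidden waterfall roughly 300 metres tall.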

The energy output depends on the salinity and temperature difference between the river water and the seawater, and on the properties of the specific membrane. The greater the salinity difference, the more energy is produced. In fact, based on average ocean salinity and global river discharges, it has been estimated that if blue energy plants were built at all river estuaries, they could produce about 1,370 terawatt-hours of energy each year, according to Norway's Centre for Renewable Energy (a tera is a trillion).
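Taking that figure as terawatt-hours per year (the unit such global estimates are usually quoted in), a one-line conversion gives the equivalent continuous power:

```python
# Converting the quoted global potential (~1,370 TWh per year, assuming the
# figure is terawatt-hours) into an average continuous power output.

energy_TWh_per_year = 1370.0
hours_per_year = 365.25 * 24   # ~8766 hours

avg_power_TW = energy_TWh_per_year / hours_per_year
print(f"Average power: {avg_power_TW * 1000:.0f} GW")  # ~156 GW
```

That is roughly the output of a hundred and fifty large power stations running around the clock.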

The concept of blue energy is not new. It was first proposed in 1954 by a British engineer named RE Pattle, although it was not possible to implement his idea for power generation until the 1970s, when a practical method of harnessing it was outlined.

The first osmotic power plant was built in 2009 in Tofte, Norway. It produced only four kilowatts of power, which was not enough to offset the cost of construction, operation and maintenance. Consequently, it was shut down in 2013.

Since then, improved technologies to tap blue energy have been developed at various laboratories, primarily in the Netherlands and Norway. Using these technologies and the difference in salt concentration in the surface water on either side of the Afsluitdijk dam, the Dutch built a pilot plant in 2014; fully scaled up, the site could generate enough electricity to meet the energy requirements of about 500,000 homes.

Blue energy is not limited to the mixing of river water and seawater, because osmosis works with any concentration difference of dissolved substances. It may thus be possible to generate electricity from dissolved carbon dioxide captured from fossil-fuel power plants. Researchers believe that, worldwide, the flue gases of fossil-fuel power plants contain enough carbon dioxide to make around 850 terawatt-hours of blue power annually. Hard to believe that the villain of climate change could be part of the solution after all.

In a paper published in July 2019 in ACS Omega, one of the journals of the American Chemical Society, researchers of Stanford University claim to have made a battery that runs on electricity generated by harvesting blue energy from wastewater effluent from the Palo Alto Regional Water Quality Control Plant and seawater collected from Half Moon Bay. Their work clearly demonstrates that blue energy could make coastal wastewater treatment plants energy-independent and carbon neutral.

An advantage of blue energy technology is that it does not depend on external factors like wind or sun. Another advantage is that a commercial plant would be modest in size, yet still produce a significant amount of energy. Moreover, compared with wind and solar energy, a blue energy power plant would have a smaller impact on the landscape and would require less land. Besides, once fully developed and deployed, the technology would generate energy continuously without emitting greenhouse gases. Hence, it would ensure access to affordable, reliable, sustainable and clean energy for all.

There are some drawbacks to blue energy, though. Power plants exploiting it may affect marine life, hydrological systems and the water management rules of the region. The main drawback, however, is cost. Compared with a conventional fossil-fuel power plant, the construction cost of a blue energy plant would be several times higher, because artificial membranes are very difficult and expensive to make. Nevertheless, once built, the expectation is that blue energy would generate power at a much cheaper rate than solar and wind.

Finally, blue energy is potentially one of the best sustainable energy resources we have at our disposal. The raw material is free and inexhaustible. “Blue” could be the “green” of the future. And the blue-green combination can match the urgency of the climate change crisis.

Quamrul Haider is a professor of physics at Fordham University, New York.

Advanced science, Environmental, International, Technical

Solar radiation management can help combat climate change

In the Environmental Physics course that I teach from time to time, a student once remarked that we really do not have to worry about the deleterious effects of climate change because technology would be able to solve all the problems we are facing. At the time, I thought this viewpoint was an extreme case of technological optimism. But today, as the likelihood of an international consensus to stabilise the atmospheric concentration of greenhouse gases seems remote while the consequences of climate change become more apparent and more dire, many in the scientific community believe that the potential last-ditch effort to stave off the disastrous impacts of climate change is to appeal to technology, geoengineering in particular. Even the United Nations' Intergovernmental Panel on Climate Change considers geoengineering a necessary Plan B if global warming does not show any signs of slowing.

Geoengineering is deliberate, large-scale manipulation of the Earth’s environment to counteract anthropogenic climate change. It encompasses two different approaches using a variety of cutting-edge technologies to undo the effects of greenhouse gas emissions. They are removal and sequestration of carbon dioxide to lower its concentration in the atmosphere and offsetting global warming by targeting the overall amount of solar energy reaching the Earth. The removal technologies were discussed in an op-ed piece published in this newspaper on November 29, 2018.

Some of the offsetting options scientists are exploring are reflecting part of the sunlight back into space before it reaches the Earth’s surface, allowing more of the heat trapped by the Earth’s surface to escape into space, and increasing the reflectivity of roofs, Arctic ice, glaciers, pavements, croplands and deserts. Known as Solar Radiation Management (SRM), these options would slow down the rise in Earth’s temperature until carbon dioxide emissions can be reduced enough to prevent catastrophic repercussions of human-driven climate change.

The fraction of incoming sunlight that is reflected back to space could readily be changed by increasing the reflectivity of low-level clouds. This could be achieved by spraying seawater into the air, where the droplets would evaporate to leave sea salt particles that seed the clouds above the oceans, making them thicker and more reflective. Several simulations have confirmed that the seeding mechanism, also known as Marine Cloud Brightening, would work and would likely lower temperatures at a regional level.

Another proposed cloud-based approach involves thinning high-altitude cirrus clouds in the upper troposphere by injecting ice nuclei into the regions where the clouds form. These wispy clouds do not reflect much solar radiation back into space; instead, they trap heat in the atmosphere by absorbing thermal radiation emitted by the Earth. While this method is not technically an example of SRM, thinning cirrus clouds would provide more pathways for the trapped heat to escape into space and thus potentially cool the Earth. Currently, work in this field is limited to theoretical studies at research institutions. However, research shows that a cooling of about one degree Celsius is possible by thinning the clouds globally.

Scientists have known for a long time that volcanic eruptions could alter a planet’s climate for months on end, as millions of sunlight-reflecting minute particles (aerosols) are spread throughout the atmosphere. Indeed, the “cold and miserable” summer of 1816 in China, Europe and North America is attributed to the enormous eruption of the Indonesian volcano Tambora in 1815. Though the aerosol haze produced by the Tambora eruption reflected less than one percent of sunlight, it was enough to drop global temperatures by as much as two degrees by the summer of 1816.

The 1991 explosion of Mount Pinatubo in the Philippines cooled the Earth by about 0.5 degrees, while the average global temperatures were as much as one degree cooler for the next five years after the 1883 eruption of Krakatoa in Indonesia. Furthermore, the volcanic-induced cooling of the oceans caused by Krakatoa’s eruption was enough to offset rise in the ocean temperature and sea level for a few decades.

Inspired by these eruptions and the subsequent cooling effect of their sunlight-blocking plumes of sulphate particles, scientists are suggesting injecting sulphate aerosols or hydrogen sulphide into the stratosphere. The geoengineering research programme at Harvard University is currently trying to model how clouds of such particles would behave.

One of the more practical SRM techniques that can be implemented easily is whitening surfaces like roofs, croplands and pavements to reflect more sunlight back into space. By absorbing less sunlight, they would negate some of the warming effect from greenhouse gas emissions. This is what greenhouse owners do with whitewash and blinds.

The small island of Bermuda in the North Atlantic is leading the way with white-roofed houses that not only reflect sunlight but also keep the homes cooler during the hotter months. A study at the Lawrence Berkeley National Laboratory in California indicates that 1,000 square feet of white rooftop has about the same one-time impact on global warming as reducing carbon dioxide emissions by ten tonnes.
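Putting that quoted figure on a per-square-metre basis is straightforward arithmetic; the conversion below is mine, not the study's:

```python
# Re-expressing the Berkeley Lab figure quoted above on a per-square-metre
# basis: 1,000 sq ft of white roof ~ a one-time offset of 10 tonnes of CO2.

roof_area_m2 = 1000 * 0.0929   # 1,000 sq ft in square metres (~92.9 m^2)
offset_kg = 10 * 1000.0        # 10 tonnes of CO2, in kg

per_m2 = offset_kg / roof_area_m2
print(f"One-time offset: ~{per_m2:.0f} kg CO2 per m^2 of whitened roof")  # ~108
```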

Ice sheets are responsible for reflecting a lot of sunlight back into space, so less ice in the Arctic due to melting means less heat leaving the planet. Hence, scientists want to spread tiny glass beads around the Arctic in the hope of making the polar ice more reflective and less prone to melting. Another idea is to cover deserts and glaciers with reflective sheets.

Perhaps the most challenging concept for controlling solar radiation entails deploying an array of reflecting mirrors at strategic points between the Sun and the Earth—just as we all do with sunscreens and sunblocks. Calculations by space scientists at the Lawrence Livermore National Laboratory in California indicate that a mirror roughly the size of Greenland would be able to block one to two percent of solar radiation from reaching the Earth. The idea of such a sunscreen is still on the drawing board.
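A rough geometric sanity check of that claim: sunlight is intercepted over the Earth's cross-sectional disc, so a mirror with Greenland's area would block roughly the ratio of the two areas. The area and radius values below are standard reference figures, not from the article:

```python
# Sanity check on the "mirror the size of Greenland" claim: what fraction of
# incoming sunlight would a disc of Greenland's area intercept? Sunlight is
# intercepted over the Earth's cross-sectional disc, pi * R^2.
import math

area_greenland_km2 = 2.166e6               # Greenland's area, km^2
R_earth_km = 6371.0                        # mean Earth radius, km
cross_section_km2 = math.pi * R_earth_km**2

fraction = area_greenland_km2 / cross_section_km2
print(f"Blocked fraction: {fraction * 100:.1f}%")  # ~1.7%, consistent with 1-2%
```

The simple geometry lands at about 1.7 percent, squarely inside the one-to-two-percent range the Livermore calculation suggests.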

Finally, as we transition into a new era in which human activity is shaping the Earth more than the natural forces, technology could be seen as a way of humans reshaping the planet by limiting the adverse effects of climate change. Also, because international political efforts to curtail greenhouse gas emissions have been slow in coming, solar radiation management is a possible measure to be used if climate change trends become disruptive enough to warrant extreme and risky measures.

Quamrul Haider is a professor of physics at Fordham University, New York.