Rating:  Summary: Important, but less focused than the title implies Review: "The theme of this book," Martin Rees writes, "is that humanity is more at risk than at any earlier phase in its history." Natural risks such as colliding with an asteroid have not changed; they are the baseline. What is new is the power that science has given small numbers of people - possibly as few as one - to endanger the entire species. Our destiny depends increasingly on choices that we make ourselves. These are important themes that should have been developed in more detail. Unfortunately, some of this relatively short book is taken up with futurist padding separated from the main point. Rees begins with familiar threats from nuclear and biological weapons, noting Fred Ikle's view that only an oppressive police state could assure total government control over novel tools of mass destruction. Rees then turns to the implications of genetic engineering, including the creation of new forms of life that could feed off other materials in our environment. Thanks to genetic engineering, the nature of humans could begin to change within this century; human character and physique will soon be malleable. The potential threats may remind some readers of Frank Herbert's novel The White Plague, in which a lone scientist creates a spectacular method of revenge. Rees is most effective when he describes the potential implications of scientific experiments, particularly in particle physics. He notes that some experiments are designed to generate conditions more extreme than ever occur naturally. Here readers will learn about the possible human creation of black holes and strangelets. Errors and unpredictable outcomes are a growing cause for worry; calculations of risk are based on probability rather than certainty. Rees tells us that one person's act of irrationality, or even one person's error, could do us all in. 
That should motivate a circumspect attitude toward technical innovations that pose even a small threat of catastrophic failure, though putting effective brakes on a field of research would require international consensus. Rees speculates that the abandonment of privacy may be the minimal price for maintaining security. Rees is particularly critical of American attitudes toward science and technology. Commenting that there are some who have a tenuous hold on rationality, he states that "their numbers may grow in the US." Later in the book, he writes that in the US "bizarre beliefs seem almost part of the mainstream." The United States is hardly the only source of irrational people. Rees then turns to more conventional futurism, discussing the search for extraterrestrial life and human expansion into the solar system. He implicitly advocates that humans should establish colonies beyond the Earth to assure that the species will survive a disaster on its home planet.
There are some errors. Rees writes that the Challenger explosion took place in 1987; it actually was a year earlier. He describes Gerard O'Neill as an engineering professor; O'Neill actually was a professor of physics. Rees links the SETI at Home computer network with the SETI Institute; in fact, that program is associated with Serendip IV, a project invented by professors at the University of California at Berkeley.
Rating:  Summary: Informative But Not Persuasive Review: A good book, but not persuasive. Can we do great harm to each other with less-than-total nuclear war? Certainly. But we would continue as a species. And our predecessors survived the concurrent existence of smallpox, cholera, yellow fever, tuberculosis, polio, the plague, and many other infectious agents. As for nanotechnology, the world is a tough place at the atomic level, so delicate processes would not survive, and robust devices will probably not have the flexibility needed for an Assembler. Nanotech, if we ever master it, will be a next-century-or-beyond technology. Our freedoms may be the unavoidable victim as we seek to enhance security in a time of individual access to dangerous technologies. This book is a great overview of impending, possible disasters, but its major premise, namely that there is a great probability of our species causing its own extinction within this century, is not sufficiently supported. It does end with an interesting call for us to seek human colonies "off world," though I don't think we understand the complexity of the support structure needed for a self-sustaining colony on Mars (i.e., what happens when an essential integrated circuit manufactured in an industrial complex finally breaks?).
Rating:  Summary: Things we all need to think about Review: A short but very thought-provoking book, this is not a 'doom and gloom' pessimistic view of the future, but an invitation to the reader to seriously think about humanity's long-term survival prospects. A good selection of both natural and human-caused dangers are considered here, though not in a great deal of depth.
There is a focus on space related dangers (and other space topics like interplanetary colonisation as a safeguard against disaster on Earth), which is not at all surprising given the author, and while I would have preferred to have had more coverage on other topics, it was probably a good decision by Rees to focus on those areas he knows best.
One particularly thought-provoking topic is the idea that technology is rapidly reaching a point where individuals (or very small groups) can cause catastrophic global damage, a very new phenomenon. While we generally find the idea of a society with no privacy distasteful, monitoring every individual may become necessary as the only real way to combat this danger. We all may have to seriously start considering how much privacy and freedom we wish to retain, versus how much danger we are willing to accept for the human race.
Rating:  Summary: A sobering assessment Review: An important thing to realize when reading this book is that we will indeed have a "final hour." Whether it comes through extinction or self-destruction or through our becoming "posthuman" is entirely uncertain, but come it will.
I have read several other doomsday books, including A Guide to the End of the World: Everything You Never Wanted to Know (2002) by Bill McGuire, and Extinction: Evolution and the End of Man (2002) by Michael Boulter. I have also read some books by futurists like Ray Kurzweil and Pierre Baldi (The Age of Spiritual Machines: When Computers Exceed Human Intelligence [1999] and The Shattered Self: The End of Natural Evolution [2001], respectively); additionally I have read some of the books that Rees relied upon while writing this book, including, Our Posthuman Future: Consequences of the Biotechnology Revolution (2002) by Francis Fukuyama, and so most of the things that Martin Rees is worried about are familiar to me.
But this book nonetheless broadened my perspective because Sir Martin Rees (the Astronomer Royal of Great Britain, and a distinguished astrophysicist) is persuasive in his argument that there may actually be scientific experiments that should not be tried. He warns against some kinds of genetic engineering, especially those attempting to change the DNA of dangerous pathogens, and even rates some experiments in physics as of dubious value. This is a somewhat surprising stance for a reputable scientist to take, since most scientists do not relish the prospect of political restraints on their work and usually afford the same courtesy to practitioners in other disciplines.
His call for taking a close look at experiments with a chance of a "doomsday downside," however remote, is well taken. His sense that some biological experiments have such an unsavory "yuck factor" (e.g., "Brainless hominoids whose organs could be harvested as spare parts," p. 78) that scientists themselves should not be alone in deciding whether such experiments should continue, is also an excellent point.
Rees is characteristically not dogmatic about any of this. He presents the dangers and the objections typically with the proviso that a wider public than an individual scientist, or an oligarchy of scientists, should participate in the decisions made. Indeed Rees is an eminently reasonable man who tries to have as few prejudices (or "yuck factors") about things as possible.
He emphasizes the unpredictability of future developments, noting that "straightforward projections of present trends will miss the most revolutionary innovations: the qualitatively new things that really change the world." (p. 12) Nobody before modern physics could have predicted the power of the atomic bomb, nor could the earliest experimenters with electricity have foreseen how electrical power would transform the world.
Like the futurists named above, Rees sees a posthuman future for our kind, a future in which cultural evolution transforms humans into something beyond human. He recalls Darwin, who wrote, "not one living species will transmit its unaltered likeness to a distant futurity" and notes that "Earth itself may endure, but it will not be humans who cope with the scorching of our planet by the dying sun..." (p. 186) What both Darwin and Rees are acknowledging is that all species eventually become extinct, and so too will humans.
The central point of this book I believe however is to be found further down the page where Rees writes, "Nuclear weapons give an attacking nation a devastating advantage over any feasible defense. New sciences will soon empower small groups, even individuals, with similar leverage over society. Our increasingly interconnected world is vulnerable to new risks; 'bio' or 'cyber,' terror or error. These risks cannot be eliminated: indeed it will be hard to stop them from growing without encroaching on some cherished personal freedoms."
Indeed, this is perhaps the central conundrum of our time made emphatic by the events of September 11th.
One of the most interesting ideas in this book is this from page 154: "Perhaps complex aggregates of atoms, whether brains or machine, can never understand everything about themselves." I am reminded here of Gödel's incompleteness theorem, in which he demonstrated that any consistent formal system powerful enough to describe arithmetic contains true statements it cannot prove. I am also reminded of Russell's discovery that the logic of self-referential systems can lead to paradox. Rees's point here is that we may never really know ourselves.
Rees also makes the point on the same page that our machines will accelerate science, perhaps to the point where only machines can understand the new discoveries.
Clearly we are finite creatures in a world that we can never hope to fully understand. Furthermore there will always be dangers that we cannot predict or avoid. These are sobering thoughts for humans to think.
Rees closes by asking if the future will "be filled with life, or as empty as the Earth's first sterile seas" and he opines that "The choice may depend on us, this century."
Here I think he is waxing perhaps a bit melodramatic since, while we may have the ability to destroy civilization here on Earth, life will indeed go on, since it is highly unlikely that we will develop any time soon the ability to destroy all life. Furthermore, I agree with those who believe that life in some form exists beyond our solar system. Surely we will not be able to destroy them.
Rating:  Summary: interesting but-- Review: Certainly interesting, although very short, and he disappears into the stars and totally loses his focus toward the end.
Rating:  Summary: Somewhat disappointing Review: I am a doom and gloom type person, so I bought this book with some eagerness. However, as pointed out in some of the other reviews, the book is disappointingly superficial in its coverage of issues and lacking in scholarship. Take, for example, the section on the dangers of nanotechnology. Michael Crichton's "Prey" does an infinitely better job of detailing what nanotechnology is all about and how it might go wrong. Similarly, if you're interested in viruses running amok, buy Preston's "Hot Zone" and "Demon in the Freezer" instead as a fascinating and gripping introduction, and then tackle Laurie Garrett's "The Coming Plague" for a truly comprehensive treatment of the subject. As another example, Bill Bryson's recent book does a better job of describing threats due to possible geological disasters such as volcanoes...you get the picture. I found myself wishing at every chapter that the author had given more detail and provided more background on the threats he describes. My bottom line? If you are also a doom and gloom person, save your money and wait for the paperback; there's enough in here to keep you mildly entertained, even if none of it is particularly new. If you're not into contemplating the destruction of the earth, skip this book entirely.
Rating:  Summary: Important, maybe even inspiring, but lacks depth Review: I have the greatest respect for Martin Rees both as a leading scientist and as a scientist who believes in making science widely accessible. My sense is that in this book, he presents so much so briefly that the most important themes remain undeveloped.
The doom-and-gloom title only tells part of the story. Rees summarizes the many threats to our civilization, the biosphere, and even to the cosmos as a whole. These risks stem from natural events such as asteroids, comets, or super-massive volcanic eruptions, but even more from human activities. Rees does a good job of reminding us that science and technology are giving individuals, whatever their motivations, access to more and more power. It won't be long before a terrorist group or a Unabomber-type individual could cause enormous destruction, for example by unleashing homemade bioweapons. Other risks come from scientists heedlessly pushing the envelope of fields such as nanotechnology. The cumulative risk, Rees argues, has never been greater, not even during the depths of the Cold War.
Still, Rees provides some hope. He advocates a renewed thrust into space, with the idea of establishing self-sufficient groups of humans (or our "descendants" in the form of intelligent machines) away from Earth, where even an Earth-destroying disaster would not bring human (and posthuman) history to a crashing stop.
These are important themes, which Rees backs up by brief references to those who have gone more deeply into them than he has.
I would have felt more satisfied by Our Final Hour if Rees had taken the time to go more deeply into his most important points himself.
Robert Adler, author of Science Firsts: From the Creation of Science to the Science of Creation; and Medical Firsts: From Hippocrates to the Human Genome
Rating:  Summary: A critical next fraction of a second Review: If we compress our solar system's entire lifecycle into a single year, the 20th century would represent only a third of a second. In Martin Rees's more or less pessimistic outlook, the next fraction of a second is crucial for the future of mankind. With worst-case scenarios he warns of threats without enemies (cosmological catastrophes) as well as man-made threats (environmental degradation, nuclear weapons, bio-terror, robotics, or nanotechnology). He concedes that the technological future of our century is brilliant, and that some mind-boggling artefacts, like computer implants in the human brain or the achievement of immortality, should not be plainly dismissed. But there are darker sides to our scientific progress, making Huxley's 'Brave New World' a distinct possibility via designer drugs and genetic interventions. This book also deals with demography, cosmological travel and space emigration, the future growth of the human (or new) twig(s), and (for the author a key challenge) the search for alien life. Martin Rees performs a tour de force by concisely selecting and combining vastly different fields in a small and easily understandable book. With excellent notes, this work, like all his other books, is a must-read for all those interested in the fate of mankind. Carl Djerassi's autobiography 'The Pill, Pygmy Chimps and Degas' gives a lively picture of the political infighting in the organization of the Pugwash conferences mentioned here.
Rating:  Summary: Well reasoned argument how science might destroy the world Review: It's strange how many "the world is going to end" books cross my desk. Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century--On Earth and Beyond is the latest offering, by Sir Martin Rees, England's Astronomer Royal, and delves into the possibility that the fate of humanity, the Earth, and maybe even the entire universe is in the hands of well-intentioned (or malicious) scientists as they push the boundaries of nature. Scientists will destroy the world! We've all heard that before, but I found it a rather strange statement coming from one of the more prominent scientists in the world. In "Our Final Hour", however, Rees makes some well-reasoned arguments about the dangers of scientific exploration. Not that we shouldn't explore nature, just that we should be mindful of the risks and take extra precautions. The book is a quick read, only 228 pages, and takes us through the range of doomsday scenarios that scientists can unleash: environmental disasters that warm/cool the Earth and make it unlivable; bioterrorism that could unleash a plague of germs on the populace; and exotic physics experiments that could convert all matter in the universe into something... unpleasant. Rees is calm and reasoned in his arguments; at no point does he stray into "science is bad" rants. Instead, he adopts the tone of a scientific professional, concerned about the ethical implications of scientific discovery. But he doesn't argue that science should be slowed down; in fact, Rees believes that it's pretty much impossible to stop scientific development. For every country that has a ban on genetic research, there will be one happy to support it. And technology will put the tools to create viruses and other nastiness into the hands of a much larger group of people - some with nasty intentions. 
I guess that's where the book fell down a bit for me. It offers up lots of challenges the world could face from science, but it's short on solutions that could help guide policy. I got the impression that Rees feels largely pessimistic that anything can really be done to slow progress, and the inevitable disasters science could cause. It's unrealistic to tell scientists what they can and can't work on; even more difficult to enforce ethical guidelines; and probably impossible to stop technology from falling into the wrong hands. The only hope Rees sees is in human spaceflight - essentially escaping the problem and heading to the stars. That's all well and good, but the Earth is where I keep all my stuff. There's got to be more than that. I was hoping for a much longer book that offered up some deeper policy suggestions, but I suspect the implications are just too far-reaching to make realistic suggestions. Still, it's an interesting read.