THE JOHN SNOW INTERNATIONAL SOCIETY
Contact: www.halat.com (use Contact Form)
Professionals are welcome to join The John Snow International Society; your cooperation will be appreciated.
Dr John Snow (1813 - 1858) was renowned for showing that cholera was a water-borne illness. In September 1854, during the last great cholera epidemic in Great Britain, 500 people—all from the same section of London, England—died of the disease within a ten-day period. Bacteria were still unknown. People were panicking. Dr Snow ordered the removal of a pump handle, and thus ended a cholera epidemic that had plagued the area served by that particular pump.
Snow had in fact first proposed the concept in 1849. He demonstrated by case-control methods that persons who drank the well water had a much higher death rate from cholera than those who did not. Furthermore, a bottle of water from the pump, carried to a widow living in an area where no cholera existed, infected both her and her daughter. Snow also showed, in a retrospective cohort study of households receiving water from one of two competing companies, that the cholera death rate was much higher among those drinking water from a source on the Thames River polluted by sewage—315 deaths per 10,000 houses served by the Southwark and Vauxhall water company—than among those drinking water drawn from above this point: 37 deaths per 10,000 houses served by the Lambeth water company. On the basis of those observations, Snow recommended specific preventative practices, such as taking the handle off the pump and introducing a purer water supply. These successful measures were carried out some 30 years before the true bacterial cause of cholera was discovered.
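Snow's cohort comparison reduces to simple rates. The sketch below reproduces the figures quoted above; the raw house and death counts are those given in standard accounts of Snow's 1854 study, but treat them as illustrative assumptions rather than values stated in this article.

```python
def death_rate_per_10k(deaths: int, houses: int) -> float:
    """Cholera deaths per 10,000 houses served by a water company."""
    return deaths / houses * 10_000

# Raw counts as commonly reported for Snow's study (assumed, not from
# this article): Southwark & Vauxhall: 1,263 deaths in 40,046 houses;
# Lambeth: 98 deaths in 26,107 houses.
southwark = death_rate_per_10k(1_263, 40_046)  # ~315 per 10,000
lambeth = death_rate_per_10k(98, 26_107)       # ~37.5 per 10,000

# The ratio of the two rates is the relative risk of the exposed cohort.
relative_risk = southwark / lambeth

print(f"Southwark & Vauxhall: {southwark:.0f} per 10,000")
print(f"Lambeth: {lambeth:.1f} per 10,000")
print(f"Relative risk: {relative_risk:.1f}")
```

The roughly eight-fold relative risk is what made Snow's argument compelling even without any knowledge of the bacterium involved.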
Cause of Cholera
On February 3, 1884, German physician and bacteriologist Robert Koch announced his discovery of the cholera vibrio, a comma-shaped bacterium that causes Asiatic cholera, an often-deadly disease. As head of a commission investigating cholera's causes, Koch traveled in 1883 to Egypt, where an epidemic raged. He and his colleagues found that a particular bacterium existed in the intestines of those who died of cholera, but they failed to isolate the microorganism and could not infect animals with the disease. Later that year Koch went to India, where he wrote that he had successfully isolated "a little bent [bacillus], like a comma." He discovered that the cholera bacterium thrived in damp, soiled linen and moist earth, and also in the stools of patients suffering from the disease's advanced stages. Prominent Canadian physician Sir William Osler once referred to Koch as "our medical Galileo."
The following article brings the story from the mid-19th century up to our own times.
Twenty years ago, 1,000 people died in an epidemic that spread across Spain. Poisoned cooking oil was blamed - an explanation that suited government and giant chemical corporations. It was, argues Bob Woffinden, who investigated the scandal in the 80s, the prototype scientific fraud that has found echoes around the world
In the days before "do-gooders" became a term of disapproval, doctors and scientists were held in absolute public esteem. They did the most good; they were working altruistically to benefit the human race - to cure illness, prevent disease and create a safer, healthier future for us all. That, at least, used to be the popular perception. Plainly, the image has become more than a little tarnished. Even before the scandal of the deaths of babies at Bristol Royal Infirmary, enough instances of incompetence and negligence had emerged to provoke widespread public scepticism about the professions. These days, scientists are more likely to find themselves occupying the lower rungs on the ladder of public trust, alongside estate agents and, well, journalists.
It is also increasingly understood that scientific research is now hardly ever conducted in a spirit of disinterested inquiry. Usually, it is funded by global companies whose concerns are anything but disinterested. Even when research is financed by government agencies, those, too, will want to call the tune. According to a survey carried out last year by the scientific body, the Institute of Professionals, Managers and Specialists, one in three scientists working for government quangos or newly privatised laboratories has been asked to adjust conclusions to suit the sponsor.
Leaving aside the implication of this for forensic science, it is evident that most scientific inquiry today is dictated not by the thirst for knowledge but by the thirst for profits. Even so, the full extent of the betrayal of the public interest has yet to be appreciated. Internationally, the scientific community has been responsible for serious errors, which have then been covered up with devastating consequences for public health. There was no proper treatment available for victims, as their condition was undiagnosed; and the same mistakes were repeated elsewhere.
Twenty years ago, the Spanish "cooking oil" disaster began as a mystery illness. Years later, the toll was put at more than 1,000 deaths and more than 25,000 seriously injured, many of whom were permanently disabled. It was the most devastating food poisoning in modern European history.
The disaster is historically important not just because of its scale and the number of victims. It was the prototype contemporary scientific fraud. It marked the first time that multinational interests successfully contrived a major cover-up in international science. For the one thing that is certain about the Spanish "cooking oil" disaster is that it had nothing to do with cooking oil.
The epidemic is officially deemed to have started on May 1 1981, when an eight-year-old boy, Jaime Vaquero Garcia, suddenly fell ill and died in his mother's arms on the way to La Paz children's hospital in Madrid. Learning that his five brothers and sisters were also ill, doctors had them all brought in and put one of the girls into intensive care. The other four children were transferred to the Hospital del Rey, Madrid's prestigious clinic for infectious diseases, where doctors began treating them for "atypical pneumonia".
When the director, Dr Antonio Muro y Fernandez-Cavada, arrived at work the following morning, he was alarmed to be told that these new patients were being treated for pneumonia. He gave his staff a dressing-down; it was out of the question medically for six members of a family to be suffering the same symptoms of pneumonia at the same time.
The Vaquero family proved merely the first of many. It seemed to be mainly women and children who were affected. The initial symptoms were flu-like: fever and breathing difficulties, vomiting and nausea, although patients soon developed a pulmonary oedema (the build-up of fluid in the lungs), skin rashes and muscle pain. The epidemic was national news.
After a few days, Muro told the media that he believed it was due to food poisoning, adding that the foodstuff was marketed "via an alternative route". He was certain of this because the casualties were all coming from the apartment blocks of the communities and towns surrounding the capital; almost no one from Madrid itself appeared to be affected.
Muro brought together relatives of those afflicted with the mystery illness and told them to work out exactly what the victims may have eaten that they, the unaffected family members, may not have eaten. In half an hour, they had an answer: salads.
On May 12, Dr Angel Peralta, the head of the endocrinology department at La Paz hospital, pointed out in a newspaper article that the symptoms of the illness were best explained by "poisoning by organo-phosphates". The following day, he received a telephone call from the health ministry, ordering him to say nothing about the epidemic, and certainly nothing about organo-phosphorous poisoning.
That same day, Muro invited health ministry officials to the Hospital del Rey. He produced maps of the localities, showing where the patients lived. He believed that the contaminated foodstuff was being sold at the local weekly street markets, the mercadillos, which set up in different towns on different days. On this basis, he predicted where the illness would strike next. He was proved right, but this was scant consolation for the fact that he was suddenly informed that he was relieved of his duties as hospital director, with immediate effect. His dismissal at least enabled him to carry out his own first-hand investigations. He patrolled the mercadillos and noticed the popularity and cheapness of large, unlabelled plastic containers of cooking oil. Immediately, he and his colleagues, one of whom was Dr Vicente Granero More, went to the houses of affected families and removed the containers of oil that they had been using when they fell ill. They carefully labelled them, sent samples of each to the government's main laboratory at Majadahonda, just outside Madrid, and awaited the results.
Most medical personnel were simply trying to tend the sick and dying - a difficult enough task in optimum conditions, but one made almost impossible because doctors, not knowing the cause of the illness, had no idea how to treat patients. Further, as the illness reached its chronic stage, the symptoms became more severe, and included weight loss, myalgia, alopecia (hair loss), muscle atrophy and limb deformity.
At all administrative levels, there was bewilderment and anxiety. Spain was then still a fledgling democracy; the dictator, General Franco, had died as recently as 1975. In February 1981, only three months prior to the outbreak, a lieutenant-colonel, Antonio Tejero, had held MPs in the cortes (parliament) at gunpoint in a botched attempt to restore army rule. More than a month after the epidemic first struck, most of those in power had no strategy other than to hope something would turn up. Finally, it did. Dr Juan Tabuenca Oliver, director of the Hospital Infantil de Niño Jesus, told the government that he'd found the cause of the epidemic. He'd asked 210 of the children in his care, and they'd all consumed cooking oil.
After, it seems, some initial hesitation, the government accepted his theory. On June 10, an official announcement was made on late-night television, informing the public that the epidemic was caused by contaminated cooking oil. Almost immediately, the panic subsided. The hospitals remained full of victims, but new admissions dropped sharply. The situation seemed, at least, under control.
Yet the government's announcement had been watched with stunned disbelief by Muro and his colleagues. Only the previous day, on June 9, they had obtained the results of the tests on their own, precise oil samples. These showed that, although none was the pure olive oil that the vendor had no doubt claimed it to be, almost all the oils had different constituents. Such a variety of oils obviously could not account for one specific illness.
The cooking oil theory was superficially persuasive. To protect its native olive oil industry, the Spanish government tried to prevent imports of the much cheaper rapeseed oil, then being put to widespread use throughout the European Community (which Spain did not join until 1986). Imports of rapeseed oil were allowed only for industrial use; the oil first had to be made inedible through the addition of aniline.
Streetwise entrepreneurs simply imported the cheaper oil anyway. The more scrupulous among them then removed the aniline; the others didn't bother. The illness was therefore attributed to aniline poisoning. It became colloquially known as la colza (which is Spanish for "rapeseed"). A number of the more high-profile oil merchants were arrested.
Three weeks after the television announcement, the health ministry allowed families to hand in their supposedly contaminated oil and replaced it with pure olive oil. This belated exchange programme was hopelessly mishandled, with few authentic records kept of who was exchanging what or (and this should have been the key point) whether the oil came from affected or unaffected households. As olive oil was guaranteed in return, many people simply handed in any oil they could find, even motor oil. Most of the oil that supposedly caused the epidemic was never available for subsequent scientific analysis. The instinctive reaction of most families, upon hearing that it was to blame for the illness, had simply been to throw it away.
In order to demonstrate that the oil had caused the illness, government scientists needed to be able to show, for example, that families who had bought the oil were affected, whereas those who hadn't were not; that the aniline in the oil was indeed poisonous and that the victims were suffering from aniline poisoning; and, bearing in mind that such commercial cooking oil fraud had been widespread for years, just what had changed in the manufacturing process to cause the oil suddenly to become so poisonous. To this day, none of these basic conditions has been met.
In 1983, however, an international conference was convened in Madrid under the auspices of the World Health Organisation (WHO). Despite the reservations of many scientists present, the epidemic was then officially named toxic oil syndrome (TOS). In 1985, the opinion of the internationally respected British epidemiologist Sir Richard Doll was sought. He was cautious, saying, "If it could be shown that even one person who developed the disease could not have had exposure to [the oil], that would provide good grounds for exculpating the oil altogether."
The trial of the oil merchants began in March 1987. Four months later, Doll, just before giving his evidence, announced that, on the basis of fresh epidemiological reports given to him, he now believed that the oil was the cause of the outbreak.
At the end of the two-year trial in 1989, the judges themselves stressed that the toxin in the oil was "still unknown". This somewhat fundamental difficulty did not prevent them from handing down long prison terms to the oil merchants, who were convicted, in effect, of causing the epidemic.
After years of one-track media reports, the notion of the "cooking oil" epidemic was firmly lodged in the public consciousness. It was unquestioned fact. No one doubted the official scientific conclusions, especially as they were accepted by the WHO.
After the 1983 Madrid conference, when there was still widespread unease with the oil theory, the Spanish government recruited some of the country's leading epidemiologists to head a fresh commission of inquiry. Among those chosen were Dr Javier Martinez Ruiz and Dr Maria Clavera Ortiz, a husband-and-wife team from Barcelona. "We absolutely believed the oil was to blame," they said. "We thought the only problem was that the information was disorganised and the research inadequate."
So they set about a rigorous examination of the official information. The results shocked them. Martinez looked at the pattern of admissions to hospitals and realised that the epidemic had peaked at the end of May. The incidence curve went down at least 10 days before the government's June 10 broadcast, and about a month before the withdrawal of the oil. In fact, the announcement that oil was to blame had had no effect on the course of the epidemic.
Meanwhile, his wife had examined the patterns of distribution of the suspect oil, which had come across the border from France. She realised that vast quantities of the oil were sold in regions (notably Catalonia) where there had not been a single case of illness. And they subsequently learned that the government was already fully aware of this. At the time of the epidemic, the government had created a new post of secretary of state for consumer affairs at cabinet level. Chosen for this appointment was a rising lawyer and economist, Enrique Martinez de Genique.
Genique himself had drawn up maps of the distribution of the oil and the pattern of illness. He realised that there was no correlation between the two and, accordingly, that the oil was not the cause of the epidemic. After presenting his findings to the health ministry, he was sacked from his government post, and soon decided to retire from politics altogether. He emphasised that he had never regretted what he did: "I had very grave doubts [about the government's stance on the epidemic] and I was morally and ethically obliged to voice them."
Martinez and Clavera, too, were fired. As this did not entirely prevent the possibility of the commission reaching inconvenient conclusions, it was soon closed down altogether.
The powerful, indeed irrefutable, evidence that the suspect oil was sold throughout parts of Spain where not a single case of illness resulted could be coupled with equally clear evidence of the converse: of people who could not have been exposed to the oil falling victim to the epidemic.
While making a television documentary, I saw many families who had suffered illness yet were adamant that they had never purchased the oil. One woman used only supplies from the olive groves of her relatives in Andalusia, yet she was seriously disabled by the illness. Perhaps the best authenticated example was the case of Maria Concepcion Navarro, a young lawyer in Madrid who fell ill, became progressively worse and died in August 1982. Her symptoms were exactly the same as those of other fatalities of la colza and she was put on the official roll of TOS victims - despite the fact that her husband, also a lawyer, stressed that they had only ever used the most reputable cooking oils. Then the authorities belatedly noticed another significant contradiction. Maria Concepcion had actually been hospitalised from November 1979, 18 months before the start of la colza. She didn't fit the official theory; consequently, her name was struck off the list of victims.
On a broader scale, this was how the statistics of the epidemic were compiled. If victims - afectados - or their families agreed that they had used the oil, their names were added to the official list; if they asserted that they had never had the oil, their names were excluded. However, the health ministry had made it known that only those whose names appeared on the official list would qualify for government compensation, so there was a clear incentive for afectados to say that they had used the oil. Developments like this artificially buttressed the government's position and made it almost impossible to produce an accurate assessment of the epidemic.
Through all the obfuscation, one man had simply ignored the official lines of inquiry and spent months pursuing his own. Having eliminated the cooking oil, Muro and his colleagues turned their attention to other salad products. Speaking to market stallholders, lorry drivers and around 4,000-5,000 affected families, they concluded that, without any doubt, the contaminated foodstuff was tomatoes, and it was the pesticides on them that were responsible for the epidemic. The organo-phosphorous chemicals would indeed cause the range of symptoms observed by clinicians.
The tomatoes, they established, had come from Almeria, in the south-east corner of Spain. Once a desert area, this was not fit for crop-growing until the discovery of underground water in the 1970s helped to transform it into an agricultural success story. Fruit and vegetables were forced into rapid growth under long tunnels of plastic sheeting. Some farmers got three, or even four, crops a year.
This agricultural boom was made possible only through the application of copious quantities of chemicals: nutrients, fertilisers and pesticides. Although exactly what happened may never be known, it is likely that one farmer had used the chemicals too liberally, or had harvested the crop too quickly after applying them. Neither would have been surprising. Some of the farmers were illiterate and would have had difficulty with the instructions for use on the containers of chemicals.
Muro had many supporters but, as the official view became more and more entrenched, so he was marginalised as the one dissident voice. In 1985, he died suddenly of a mysterious illness. His wife perceived the whole saga as an unmitigated family tragedy.
It was Muro and his team who had done the on-the-ground epidemiology in the immediate wake of the outbreak. What, then, of the epidemiology that the WHO in 1992 was boldly to describe as "comprehensive and exacting epidemiological studies, subjected to critical independent assessment"?
Muro's work was first-hand. But trying to assess the accuracy and validity of the official epidemiology was not easy. The FIS - the government agency responsible for toxic oil syndrome - refused to release details of the fieldwork carried out or any background information. However, the families described in the reports were given code numbers and these could be matched against the official list of victims which then became part of the trial documentation. Eventually we identified the families supposedly interviewed for the key epidemiological reports and went to see them.
From these first-hand inquiries, we established that there was not a single case in which the family's history corresponded with what was written in the epidemiological reports. Sometimes the differences were slight; sometimes the reports bore no relation to what had actually happened. In one sense, this was not surprising; while some families did recall having been interviewed by officials at the time, others insisted that they were never questioned at all. The principal scientific premise - that evidence should be gathered and, on that basis, a conclusion reached - appeared to have been reversed: a conclusion had been reached, and then the evidence manipulated in order to support that conclusion.
The original study on which the oil theory is founded, by Dr Juan Tabuenca Oliver, was published in the Lancet; yet it appears less than rigorous. At the outset, he claimed that all 210 of the children in his care had taken the oil. The next time that reference was made to this study, the number of children in his care was given as 60. Two years later, it had shot up again, to 345. Today, the figure is put at 62.
Moreover, his claim that all of the children had consumed the oil was disputed at the trial. Pilar Pans Gonzalez, the mother of one of his patients, was asked if her son had had the oil. She replied that he had not. Asked how she could explain this discrepancy (with the supposed 100% finding), she replied, "That is their problem, something they have invented."
There are three specific epidemiological reports on which the oil theory now rests. Two of these are particularly astonishing. The first concerns three cases of illness in two families in Seville. These three became ill, according to the official analysis, because the heads of the families worked at a refinery where some of the suspect oil had supposedly been refined, and took some for use at home.
There are a number of glaring problems with this report. Most importantly, one of the families, on hearing the government's announcement about the illness, had taken their own oil to be analysed at the local Instituto de la Grasa, which happened to be one of the country's most renowned laboratories. The records of this analysis are still available; the oil was not rapeseed at all, it was olive oil.
If the theory was correct, one might have expected other refinery workers to fall ill; no one did. However, there were originally, according to the government's own records, 83 cases of la colza in Seville. The other 80 vanished from the official records, presumably because they couldn't possibly fit the oil theory; after all, the suspect oil had never been sold in Andalusia, where authentic olive oil is in such plentiful supply.
Even more amazing was the study concerning a convent outside Madrid. According to this, 42 out of 43 nuns fell ill after using the oil, while visitors whose food was prepared in a different oil did not fall ill. From an official perspective, the beauty of this epidemiology was not just that it provided game, set and match for the oil theory, but that no one could afterwards check the veracity of the paper. This was a closed convent. The nuns had no routine contact with members of the public, and they certainly didn't talk to the media. In the event, senior nuns from the convent did give evidence at the trial. Their testimony flatly contradicted what was written in the convent report. Of course, all the food was prepared in the same way and cooked in the same oil. In fact, only very few nuns (about eight or nine) suffered any illness. The epidemiological report was a fabrication.
Nor was the oil theory underpinned by any laboratory science. In the years since the 1981 outbreak, the suspect oils have been analysed in leading laboratories throughout the world. No chemical, or contaminant, that would account for the symptoms observed in the afectados has ever been found. Aniline - which was blamed for the epidemic - is poisonous only in much greater quantities than were present in the oil and, in any case, the symptoms of aniline poisoning are quite different from those of the afectados. Laboratory tests proved that the oil was not harmful to animals. "All the animals thrived on the stuff," one researcher explained. "Their coats became glossier and they put on weight."
Dr Gaston Vettorazzi was chief toxicologist at the WHO at the time of the outbreak, but had since retired. He told us, in the most gracious way, that if even a bunch of journalists with no scientific expertise could see through all this, then it must indeed be obvious. In other words, he didn't believe that this had occurred through a series of administrative errors; he believed that the truth had been deliberately concealed by Spanish officialdom. As he said, the rapeseed explanation of the illness was "predetermined. That was the official line of the so-called Spanish science. You cannot force an investigator to follow a line. If this is done, science is dead."
For the various political and industrial concerns, there was substantial common interest in hiding the truth. For the multinational chemical companies, the revelation that a mass poisoning had occurred would have been scandalous and financially disastrous. At that stage, organo-chlorine (OC) pesticides were being phased out, to be replaced by organo-phosphates (OPs). The profits generated by the worldwide sales of OPs in the past 20 years have been vast. In those terms, suppressing the true cause of la colza was a commercial imperative. The Spanish administration had entirely congruent interests. With the attempted coup in parliament still fresh in the public mind, it was vital that ministers were seen to be in control. Democracy itself depended on the government being seen to deal capably with this national tragedy.
Moreover, at that time, Almeria represented an economic miracle for Spain, providing produce that went to all parts of Europe. Had it been frankly acknowledged that all those deaths had been caused by pesticides on tomatoes, the effect on the entire Spanish export trade would have been incalculable.
Nor was that the only economic repercussion. The news that such staple home-grown produce as tomatoes could be poisonous would have had a calamitous impact on Spain's other main generator of foreign income, the ever-growing tourist trade. On the other hand, spreading the fiction that the epidemic had been caused by cheap rapeseed oil sold in unlabelled containers at street markets to the Spanish working class in poorer areas of the country - that, of course, had no effect on tourism.
The consequences of the cover-up were appalling. Many died unnecessarily. Thousands more, children among them, were left to endure a lifetime of pain and physical impairment that perhaps could have been avoided if they had received the care and treatment they needed as early as possible. The Spanish colza is not just one of the great tragedies of the last century, it is also one of the great scandals.
Years later, in 1989, a similar mystery illness was first diagnosed in New Mexico. Victims, 29 of whom died, fell ill with pneumonia-like symptoms. Altogether, there were about 1,500 cases across the US. The symptoms appeared identical to those suffered by the afectados in Spain; yet no one in the US had had access to contaminated cooking oil.
It is virtually certain that this outbreak, too, was caused by OP pesticides. The scientific community - helpfully for their paymasters - did not conclude that; the cause of the illness was attributed to an innocuous amino acid supplement, L-Tryptophan, which had been taken without problem by millions of Americans throughout the 1980s. (Its sale is now banned in the US and Europe.) Just as with toxic oil syndrome, funding was available for scientists who wished to pursue the official line, but not for those who held different views. Nevertheless, no component of L-Tryptophan has ever been found that would account for the symptoms suffered by victims.
There have now been several issues about which there is a general perception that the truth is not being allowed to surface. These include, most obviously, the effects of OPs on farmers in Britain. Despite what appears to be a mounting toll of death and debilitating illness inflicted on the farming community, all official inquiries somehow fail to establish a link between pesticide exposure and the illness.
The WHO, to its shame, continues to refer to the Spanish epidemic as the "toxic oil syndrome". Every day around the world, students are no doubt being taught that "cooking oil" was the cause of the disaster. Two books on the cover-up have lately been published. One of these, Detras de la colza, is by Granero, Muro's right-hand man; the other, published in France, is Jacques Philipponneau's Relation de l'empoisonnement perpétré en Espagne et camouflé sous le nom de syndrome de l'huile toxique - but the worldwide deception continues, automatically recycled by a compliant media.
The enduring feature of the TOS saga is that it provided a blueprint for the international scientific community. If even a theory as palpably bogus as the "toxic oil" syndrome can be sustained internationally, then suppressing the truth must be remarkably straightforward. All it takes is a series of epidemiological reports, accredited by scientists of a similar persuasion, and then published in reputable scientific journals. There are, as Disraeli might have said, lies, damned lies and peer-reviewed scientific papers.
Given increasing privacy constraints, the media can never independently verify the data, and just have to report whatever they are told.
Moreover, we could discover the truth about the Spanish epidemic for two reasons: because the two-year trial ensured that otherwise unavailable information reached the public domain (and the authorities haven't made that mistake again); and because I was able, in 1990, to spend almost three months in Spain researching and filming the epidemic. A decade later, it is now inconceivable that journalistic investigations on such a scale would be supported. In future, without even the remote possibility of a bunch of journalists turning up years later to ask inconvenient questions, it will be even easier for international science to organise its cover-ups.
An internal German government memo was recently leaked to Der Spiegel. According to this, the monitoring of imported produce had revealed that there continued to be unsafe pesticide residues on fruit and vegetables from Spain. Some peppers were "highly contaminated" and the residues had "reached levels we can no longer tolerate". It was the last line of the memo that was most telling: "Under no circumstances should the general public be informed."
The Guardian, May 3, 2003
Too clever too fast too happy
Almost without our noticing, scientists have reached a point where they can not only clone human beings, they can fine-tune genes in embryos to produce a super race. If we let it happen, argues Bill McKibben, the consequences will be terrifying: the end of meaning, the end of what makes us human. Time to say enough
For the first few miles of the marathon, I was still fresh enough to look around, to pay attention. I remember mostly the muffled thump of several thousand pairs of expensive sneakers padding the Ottawa pavement. But as the race wore on, the herd stretched into a long string of solitary runners. Pretty soon each of us was off in a singular race, pitting one body against one will. For months I'd trained with the arbitrary goal of three hours and 20 minutes in my mind. Which is not a fast time, but it would let a 41-year-old into the Boston marathon. And given how fast I'd gone in training, I knew it lay at the outer edge of the possible.
By about, say, mile 23, two things were becoming clear. One, my training had worked: I'd reeled off one 7:30 mile after another. Two, my training wouldn't get me to the finish by itself. With every hundred yards the race became less a physical test and more a mental one. Someone stronger passed me, and I slipped on to her heels for a few hundred crucial yards, picking up the pace. The finish line swam into my squinted view, and I stagger-sprinted across. With 14 seconds to spare.
A photographer clicked a picture, as he would of everyone who finished. I was a cipher to him - a grimacing cipher, the 324th person to cross, an unimportant finisher in an unimportant time in an unimportant race. It mattered not at all what I had done. But it mattered to me. When it was done, I had a clearer sense of myself, of my power and my frailty. A marathon peels you down toward your core. It echoes in some small way what runners must always have felt - the Tarahumara Indians on their impossible week-long runs through the canyons of the south-west, the Masai on their game trails. Few things are more basic than running.
And yet it is entirely possible that we will be among the last generations to feel that power and that frailty. Genetic science may soon offer human beings, among many other things, the power to bless their offspring with a vastly improved engine. For instance, scientists may find ways dramatically to increase the amount of oxygen that blood can carry. When that happens, we will, though not quite as Isaiah envisioned, be able to run and not grow weary.
Attempts to alter the human body are nothing new in sport, of course. Athletes have been irradiated and surgically implanted with monkey glands; they have weight-trained with special regimens designed to increase mitochondria in muscle cells; they have lived in special trailers pressurised to simulate high altitudes. The Tour de France has been interrupted by police raids time and again; in 2001 Italian officials found a "mobile hospital", stocked with hormones, drugs and synthetic blood products, trailing the Giro d'Italia.
In other words, you could almost say that it makes no difference whether athletes of the future are genetically engineered - that the damage is already done with conventional drugs, the line already crossed. You could almost say that, but not quite. Both athlete and fan remain able to draw the line in their minds: no one thought Ben Johnson's 1988 100-metre record meant anything once the Olympic lab found steroids in his system. It was erased from the record books. Against the odds, sport just manages to stay "real".
But what if, instead of crudely cheating with hypodermics, we began literally to programme children before they were born to become great athletes? Muscle size, oxygen uptake, respiration - much of an athlete's inherent capacity derives from her genes. What she makes of it depends on her heart and mind, of course, as well as on the accidents of where she's born, and what kind of diet she gets. And her genes aren't entirely random: perhaps her parents were attracted to each other in the first place because both were athletes. But all those variables fit within our idea of fate. Flipping through a catalogue at a clinic for athletic genes does not; it's a door into another world.
And as we move into the new world of genetic engineering, we won't simply lose races, we'll lose racing: we'll lose the possibility of the test, the challenge, the celebration that athletics represents. Say you've reached mile 23 of a marathon, and you're feeling strong. Is it because of your training and your character, or because the gene pack inside you is pumping out more red blood cells than your body knows what to do with? Will anyone be impressed with your dedication? More to the point, will you be impressed with your dedication?
"Genetics" is not some scary bogeyman. Most of the science that stems from our understanding of DNA is marvellous - cancer treatments, for example. But one branch of the science raises much harder questions. In fact, it raises the possibility that we will engineer ourselves out of existence.
The essential facts are as follows. Genes reside in the spherical nucleus of each cell of a plant or animal; from that post they instruct the cells to make particular proteins. Those proteins, in turn, key the cell to grow or stop growing, tell it what shape to take, and so on. Grow hair. Make more dopamine. Right up until this decade, the genes that humans carried in their bodies were exclusively the result of chance - of how the genes of the sperm and the egg, the father and the mother, combined. The only way you could intervene in the process was by choosing who you would mate with - and that was as much wishful thinking as anything. But that is changing.
We now know two different methods to change human genes. The first, and less controversial, is called somatic gene therapy. This begins with an existing individual - someone with, say, cystic fibrosis. Researchers try to deliver new, modified genes to some of her cells, usually by putting the genes aboard viruses they inject into the patient, hoping that the viruses will infect the cells and thereby transmit the genes. If the therapy works, the proteins causing the cystic fibrosis should diminish, and with them some of the horrible symptoms.
Somatic gene therapy is, in other words, much like medicine. As our understanding of the human genome grows, somatic gene therapy may become more effective. It's not a silver bullet against disease, but it is a bullet nonetheless, one more item of ordnance in the medical arsenal.
"Germline" genetic engineering, on the other hand, is something very novel indeed. "Germ" here refers not to microbes, but to the egg and sperm cells, the "germ" cells of the human being. Scientists intent on genetic engineering would probably start with a fertilised embryo a week or so old. They would tease apart the cells of that embryo and then, selecting one, they would add to, delete or modify some of its genes. They could also insert artificial chromosomes containing predesigned genes. They would then take the cell, place it inside an egg whose nucleus had been removed, and implant the resulting new embryo inside a woman.
The embryo would, if all went to plan, grow into a genetically engineered child. His genes would be pushing out proteins to meet the particular choices made by parents, and by the companies and clinicians they were buying the genes from. Instead of coming solely from the combination of his parents, and their parents, and so on back through time, those genes could come from any other person, or any other plant or animal, or out of the thin blue sky. And once implanted they will pass to his children, and on into time.
We began doing this with animals (mice) in 1978, and we've managed the trick with most of the obvious mammals, except one. And the only thing holding us back is a thin tissue of ethical guidelines, which some scientists and politicians are working hard to overturn.
The reason for performing germline genetic engineering is to "improve" human beings - to modify the genes affecting everything from obesity to intelligence, eye colour to grey matter. And to make germline engineering work, you need one more piece of technology: the ability to clone people.
The technique of modifying genes is hard; the success rate is low. It's very difficult to get a desired new gene into a fertilised egg on a single try. If you had more embryos, your odds would improve. That's what the people who cloned Dolly the sheep were aiming for: easy access to more embryos so they could "transform" the animals. "Without cloning, genetic engineering is simply science fiction," says the Princeton biologist Lee Silver. "But with cloning, genetic engineering moves into the realm of reality."
It's not as if human cloning is far off, or impossibly difficult. A few flimsy pieces of legislation prevent "reproductive" cloning in most (but not all) western nations. It will require one large change in our current way of doing business - instead of making babies by making love, we will have to move conception to the laboratory. You need to have the embryo out where you can work on it to make the necessary copies, try to add or delete genes, and then implant the one that seems likely to turn out best.
"Ultimately," says Michael West, CEO of Advanced Cell Technology, the firm furthest out on the cutting edge of these technologies, "the dream of biologists is to have the sequence of DNA, the programming code of life, and to be able to edit it the way you can a document on a word processor."
In some ways, the sequencing of the human genome, heralded as the dawn of the genetic age, may really have marked the sunset of a certain kind of genetic innocence. Instead of finding the expected 100,000 genes, the two teams of competing researchers managed to identify just 30,000. This total is still being debated, but whatever the final count, we have barely twice as many genes as the fruit fly, and only slightly more than the mustard weed.
Meanwhile, those 30,000 genes, though "sequenced", were not understood. Imagine copying the works of Shakespeare by stringing all the words together without spacing or punctuation marks, said the biologist Ruth Hubbard. Then imagine handing that manuscript "to someone who doesn't know English". And the traits that might interest us most - intelligence, aggression - are probably the most complicated and hidden.
Plenty of practical complications make this work harder than editing text on a word processor, too. One researcher told of 300 attempts to clone monkeys without success. Even if you could perfect the process, simple physics would place limits on how much you could modify humans. "If you had a nine-foot-tall person," says Stuart Newman, a researcher at New York Medical College, "the bone density would have to increase to such a degree that it might outstrain the body's capability to handle calcium."
However, these qualifications mask the larger truth: genes do matter. Endless studies of twins raised separately make very clear that virtually any trait you can think of is, to some degree, linked to our genes. Stuart Newman, a few moments after explaining why a nine-foot-tall person simply wouldn't work, leaned across his lab bench and added, "But could you engineer higher intelligence? Increased athletic ability? I have no doubt you could make such changes." This new world can't be wished away.
The new technology is growing and spreading as fast as the internet grew and spread. One moment you've sort of heard of it; the next moment it's everywhere. Consider what happened with plants. A decade ago, university research farms were growing small plots of genetically modified grain and vegetables. Sometimes activists would rip the plants up, one by one. Then, all of a sudden in the mid-1990s, before anyone had paid any real attention, farmers had planted half the corn and soya bean fields in America with transgenic seed. Since 1994, US farmers have grown 3.5 trillion genetically manipulated plants.
Or consider animals. Since they first cloned frogs 30 years ago, researchers have learned to make copies of almost everything. And animal cloning is moving steadily from the lab to the factory - the techniques are now reliable enough to let scientists scale up production. You can order cloned cattle over the net. Early in 2002, a California company debuted a chip that automates the process of nuclear transfer, the key step in cloning. A North Carolina firm has figured out a similar process for "bulk-growing" chicken embryos, which may soon allow "billions of clones to be produced each year to supply chicken farms with birds that all grow at the same rate, have the same amount of meat, and taste the same". These same technologies could be used to mass-produce human embryos: "Obviously it would make everyone's life easier," said a spokesman for Advanced Cell Technology, the pioneer in human cloning research.
In 1963, JBS Haldane, one of the 20th century's great biologists, said he thought it would be a millennium before the human genome could be manipulated. He appears to have been off by about 960 years - but then, nearly every guess about this work has been too conservative.
And it's not just the research that's accelerating, but the commercialisation: in 1980 it cost $100 to sequence a single base pair of genes; the price is now counted in pennies. As we learn more about the human genome, we also get ever better at the mechanics of handling embryos. Here's a startling statistic: some fertility clinics have become so handy at in vitro fertilisation that the women they treat "now have a better chance of getting pregnant in one cycle than fertile women relying on plain old-fashioned sex". Which means, since the technology is not so different, that cloning a human being poses no enormous technical hurdle.
In fact, some experts are convinced that it may already have happened. Michael Bishop, CEO of the animal cloning firm Infigen, described an off-the-record meeting of cloning experts at Cold Spring Harbor, the lab presided over by the DNA pioneer James Watson. "One evening after dinner, some of us were talking, and there was not one of us who believed it had not already happened," he said. "It is too easy. Too bloody easy."
But cloning is just the warm-up act. The main event is germline genetic engineering: not just copying but changing, as we've done with plants and animals.
Ethical guidelines promulgated by the scientific monitoring boards so far prohibit actual attempts at human germline engineering, but researchers have walked right up to the line, maybe even stuck a toe over it. In 2001, for instance, a fertility clinic in New Jersey impregnated 15 women with embryos fashioned from their own eggs, their partner's sperm, and a small portion of an egg donated by a second woman. The procedure was designed to work around defects in the would-be mother's egg - but at least two of the resulting babies carried genetic material from all three "parents". This wasn't germline modification in the precise sense - a deliberate attempt to alter traits in the resulting child - but it demonstrates how close we've come with almost no one noticing.
In the autumn of 1998, a year after Dolly the sheep was cloned, another animal emerged that may prove more significant in the long run. Lucy, a black-brown mouse birthed in the Vancouver labs of Chromos Molecular Systems, had an extra pair of chromosomes: artificial chromosomes. She passed them on to her children, and they to theirs. An artificial chromosome makes germline manipulation much, much easier; instead of having to peer through a microscope at an embryo, snipping and splicing the existing DNA in an effort to add, say, a few inches to the resulting child, a lab worker could simply insert the prepackaged chromosome. It's the difference between a homemade cake and a cake mix from the store, multiplied a thousand times. "It promises to transform gene therapy from the hit-and-miss methods of today into the predictable, reliable procedure that human germline manipulation will demand," says UCLA's Gregory Stock.
Meanwhile, researchers in Britain and California have produced "designer sperm"; others at Cornell have produced an "artificial womb lining" and hope to have "complete artificial wombs" within a few years.
And so here's where we are: the genetic modification of humans is not only possible, it's coming fast; a mix of technical progress and shifting mood means it could easily happen within the next few years. But we haven't done it yet. For the moment we remain, if barely, a fully human species. And so we have time yet to consider, to decide, to act.
Some of the first germline interventions might well be semi-medical, aimed at eliminating what Lee Silver calls "predispositions" toward conditions such as obesity. One researcher said, "I did an inventory of myself and discovered that I carry eight nuisance genes. Obviously I am nearsighted - you can tell by my eyeglasses. I have dry skin. I also have a hearing defect in which I have virtually zero memory for music. Wouldn't it be nice if these genes didn't have to be carried forward to my descendant?" As that list makes clear, the line between fixing problems and "enhancing" offspring is meaningless: almost as soon as you begin, you're worrying about conditions (such as the ability to remember tunes) that would never have crossed Hippocrates' mind.
Indeed, sheer handsomeness is likely to be one of the earliest aims of genetic intervention. Once you accept the idea that our bodies are essentially plastic, and that it's OK to manipulate that plastic, then, in the words of Silver, "there's nothing beyond tinkering". There's not a feature of the human body that can't be "enhanced" in some way or another. "If something has evolved elsewhere, then it is possible for us to determine its genetic basis and transfer it into the human genome," says Silver - just as we have stuck flounder genes into strawberries to keep them from freezing, and jellyfish genes into rabbits and monkeys to make them glow in the dark.
The list of possibilities is as long as the imagination. Some plump for eyes in the back of the head on the theory that it would "make driving safer". Others are more interested in reducing the need to sleep. Half the people I know obsess about getting pudgy. My point is merely that our bodies, or more precisely the bodies of our children, which have always seemed to us more or less a given, are on the verge of becoming true clay. And not just our children's bodies, but their minds as well. We are starting to catalogue which genes control intelligence, and starting to figure out how to manipulate them. News of such research makes most of us uncomfortable. In part that's because every racist and xenophobe since the dawn of time has claimed some link between ancestry and aptitude. However, various specialists have marshalled data from twin and adoption studies to show that anywhere from 40% to 75% of variation in intelligence was inherited, the product of nature and not nurture. A special issue of American Psychologist published in the aftermath of the furore found a broad agreement among researchers that half of the variation in human intelligence appears to be related to heredity.
Half is not all, of course. And IQ is not the same as ability. But IQ tracks uncomfortably close to success - to the kinds of grades you get, and how long you stay in school, and what kind of job you hold, and how much money you make. The correlation is strong enough so that you could argue it might make sense to soup up your child, for either her sake or the planet's. The idea's in the air: "As society gets more complex, perhaps it must select for individuals more capable of coping with its complex problems," says Daniel Koshland, a former editor of Science, America's most prestigious scientific journal. "If a child destined to have a permanently low IQ could be cured by replacing a gene, would anyone really argue with that? It is a short step from that decision to improving a normal IQ. Is there an argument against making superior individuals?" There is, I think - but it is an argument that will be made against the odds.
Just as our list of potential modifications of the body began with the relatively obvious and spiralled off toward the fantastic, so with the mind. By now, as good members of the Prozac generation, we're pretty comfortable with the notion that mood is a function of chemistry, and hence, in some measure, of the genes that control that chemistry. Researchers at the National Institutes of Health, for instance, have found a stretch on chromosome 17 that predisposes people to anxiety. Other researchers are hot on the trail of a human "happiness gene": at the moment they're concentrating on the gene for the dopamine D4 receptor. An Israeli group found that certain variations of the gene made people more likely to seek out novelty - and more likely to answer yes to statements such as, "I am a cheerful optimist." Such hardwiring may "determine our average set-point" for happiness, the researchers argue, so that even "winning the Nobel Prize or marrying our childhood sweetheart may not alter our overall happiness - for that, gene therapy would be required."
In short, it's not particularly far out to imagine genetic engineering designed to make our children happier - a kind of targeted, permanent Prozac. Dean Hamer, chief of gene structure and regulation at the National Cancer Institute, imagines a future scenario in which a young couple, Syd and Kayla, get to tweak the emotional make-up of their foetus. "They pondered the choices before them, which ranged from the altruism level of Mother Teresa to the most cut-throat CEO. Typically, Syd was leaning toward sainthood; Kayla argued for an entrepreneur. In the end, they chose a level midway between, hoping for the perfect mix of benevolence and competitive edge. Syd and Kayla, however, did not want to set their child's happiness rheostat too high. They wanted her to be able to feel real emotions. If there was a death, they wanted her to mourn the loss. If there was a birth, she should rejoice."
By now, the vision of the would-be genetic engineers should be fairly clear. It is to do to humans what we have already done to salmon and wheat, pine trees and tomatoes. That is, to make them better in some way: to delete, modify, or add genes in the developing embryos so that the cells of the resulting person will produce proteins that make them taller and more muscular, or smarter and less aggressive, maybe handsome and possibly straight, perhaps sweet. Even happy. It is, in certain ways, a deeply attractive picture.
But suppose you're not ready. Say you're perfectly content with the prospect of a child who shares the unmodified genes of you and your partner. Say you think that manipulating the DNA of your child might be dangerous, or presumptuous, or icky. How long will you be able to hold that line if germline manipulation begins to spread among your neighbours? Maybe not so long as you think. "Suppose parents could add 30 points to their child's IQ," asks the economist Lester Thurow. "Wouldn't you want to do it? And if you don't, your child will be the stupidest in the neighbourhood." That's precisely what it might feel like to be the parent facing the choice. Deciding not to soup your kids up... well, it could come to seem like child abuse.
Of course, the problem is that if everyone's adding 30 IQ points, then having an IQ of 150 won't get you any closer to an elite university than you were at the outset. You might be able to argue that society as a whole was helped, because there was more total brainpower at work, but your kid won't be any closer to the top of the pack. All you'll be able to do is up the odds she won't be left hopelessly far behind.
With germline manipulation, you get only one shot; the extra chromosome you stick in your kid when he's born is the one he carries throughout his life. So let's say baby Sophie has a state-of-the-art gene job: her parents paid for the proteins discovered by, say, 2005 that, on average, yielded 10 extra IQ points. By the time Sophie is five, though, scientists will doubtless have discovered 10 more genes linked to intelligence. Now anyone with a platinum card can get 20 IQ points, not to mention a memory boost and a permanently wrinkle-free brow. So by the time Sophie is 25 and in the job market, she's already more or less obsolete - the kids coming out of college just plain have better hardware. The vision of one's child as a nearly useless copy of Windows 95 should make parents fight like hell to make sure we never get started down this path.
Another thing is clear - the rich would benefit from genetic engineering far more than the poor. And the gap in power, wealth and education that currently divides both our society and the world at large would be written into our very biology. If we can't afford the 50 cents a person it would take to buy bed nets to protect most of Africa from malaria, it is unlikely we will extend to anyone but the top tax bracket these latest forms of genetic technology.
This injustice is so obvious that even the strongest proponents of genetic engineering make little attempt to deny it. The most revealing account of the divided future comes from Lee Silver, in his book Remaking Eden. "Emotional stability, long-term happiness, inborn talents, increased creativity, healthy bodies - these could be the starting points chosen for the children of the rich," he writes. "Obesity, heart disease, hypertension, alcoholism, mental illness - these will be the diseases left to drift randomly among the families of the underclass." He forecasts a future divided between "the GenRich" and "the Naturals". The former "all carry synthetic genes", and they control "all aspects of the economy, the media, the entertainment industry". The latter work as "low-paid service providers and labourers".
"There is still some intermarriage as well as sexual intermingling between a few GenRich individuals and Naturals," Silver imagines, but "GenRich parents put intense pressure on their children not to dilute their expensive genetic endowment in this way." And indeed, eventually, they become "entirely separate species, with no ability to cross-breed, and with as much romantic interest in each other as a current human would have for a chimpanzee".
This is assuming the programming worked well: it might not. Let's start with a pig, Pig 6707, reared at a US Department of Agriculture research centre. Scientists inserted a human growth gene into the porker while it was yet an embryo, hoping he would turn into a veritable mountain of bacon. But instead of simply causing him to get larger, "the human genetic material altered the pig's metabolism in an unpredictable and unfortunate way. Excessively hairy, lethargic, riddled with arthritis, apparently impotent and slightly cross-eyed, the pig could hardly stand up." So the question arises: if you start genetically engineering children, might you not get some excessively lethargic, obese and hairy people, too?
Even the most enthusiastic advocates of germline manipulation agree that for the moment it's still too risky; the National Bioethics Advisory Commission ruled in 1997 that such work "is not safe to use in humans at this time". But the key words in that ruling were "at this time". Zeal or profit might well force the issue. "The mere fact that there may be unanticipated or long-term side effects will not deter people from pursuing genetic remedies, any more than it has in earlier phases of medical development," predicts Francis Fukuyama. In other words, don't count on the inherent riskiness of this work - the fact that it will necessarily involve experimentation on people - to bring it to a halt.
None of these arguments quite captures the truly horrifying aspects of this new technology. There is another issue, neither utilitarian nor religious in the usual sense. It is an argument about meaning.
What will you have done to your newborn when you have installed into the nucleus of every one of her billions of cells a purchased code that will pump out proteins designed to change her? You will have robbed her of her chance of understanding her life. Say she finds herself, at the age of 16, unaccountably happy. Is it her being happy - finding, perhaps, the boy she will first love - or is it the corporate product inserted within her when she was a small nest of cells, an artificial chromosome now causing her body to produce more serotonin?
If your child is designed to be sweet-tempered, social and smart, what can she take pride in? Her good grades? She may have worked hard, but she'll always know that she was spec'd for good grades. Her kindness to others? Well, yes, it's good to be kind - but perhaps it's not much of an accomplishment once the various genes with some link to sociability have been catalogued and manipulated.
We may be the last generations able to ponder these dilemmas. In the words of Richard Hayes, one of the leading crusaders against germline manipulation: "Suppose you've been genetically engineered by your parents to have what they consider enhanced reasoning ability and other cognitive skills. How could you evaluate whether or not what was done to you was a good thing? How could you think about what it would be like not to have genetically engineered thoughts?"
In other words, by then you will have turned your child into an automaton of one degree or another; and if it only sort of works, you will have seeded the ground for a harvest of neurosis and self-doubt we can barely begin to imagine. If "Who am I?" is the quintessential modern question, you will have guaranteed that they will never be able to fashion a workable answer. There's another twist to bear in mind. If the engineering works as intended, the offspring will be superior to their parents. With a higher IQ, or a more manageable temper, or a better ear, or quicker reflexes. Not "better" as when a son grows in strength while his father declines, but categorically better, of a higher order. Different. One reason we love and nurture our kids, or so the biologists tell us, is from an inarticulate desire to pass along our genes. But these won't be our genes precisely; they'll belong as much to whatever multinational created them. Children will in some sense be of a different species, or at least a different strain.
Though our lives in the developed world are easy enough by comparison with lives in other places and other eras, challenges remain. Or, as when we run marathons, we can invent them. Our parents try to draw us maps, which we can follow slavishly, burn in the fires of our rebellion, or glance at from time to time as we chart our own courses. But these new technologies show us that human meaning dangles by a far thinner thread than we had thought. What if the ending to our story is already written, our compass already set? What if we have been programmed, or at least must suspect each time we choose a path that we have been nudged in that direction by our engineered cells? Who then are we?
If germline genetic engineering ever starts, it will accelerate endlessly and unstoppably into the future, as individuals make the calculation that they have no choice but to equip their kids for the world that's being made. If the technology is going to be stopped, it will have to happen now, before it's quite begun. The choice will have to be a political one, that is a choice we make not as parents but as citizens.
We need to do an unlikely thing. We need to survey the world we now inhabit and proclaim it good. Good enough. Not in every detail; there are a thousand improvements, technological and cultural, that we can and should still make. But good enough in its outlines, in its essentials. We need to decide that we live, most of us in the west, long enough. We need to declare that, in the west, where few of us work ourselves to the bone, we have ease enough. In societies where most of us need storage lockers more than we need nanotech miracle boxes, we need to declare that we have enough stuff. Enough intelligence. Enough capability. Enough.
Right now our technology is advanced enough to make us comfortable, but not so advanced that it has become us. We have enough insight from Darwin and Freud and Watson and Crick to allow us to understand some of what drives us, but we're not yet completely reduced to hardware. We have Prozac for the incapacitated and pain-ridden, but it's not encoded in our genes. We have enough medicine to give most of us a good shot at a long life, but not so much as to turn us into robots. We are suspended somewhere between the prehistoric and the Promethean. Closer to the Promethean. Close enough.
Since the mid-1950s, pollsters have annually asked Americans if they are happy with their lives. The numbers who say yes have declined slowly but steadily, even as technology has dropped more conveniences from the sky. Researchers have found that people expect material progress to increase, and "inner happiness" or "peace of mind" to decrease. "The results of such surveys," says the philosopher Nicholas Rescher, "indicate that in fact a substantial majority of people believe there is a negative correlation between progress and happiness." But can we, even if we want to, actually rein in these technologies? Can the opposition to them ever be more than academic?
If governments tried to outlaw such work, advocates warn, it would "only force the research underground, making it impossible to monitor and regulate". The only alternative, they insist, is a police state; if you don't want Brave New World, you get 1984. The argument is strong. Our success with prohibitions is mixed at best. Americans drank throughout the 1920s, and they smoke dope today. Economic sanctions often leak. Commercial pressures often trump wise policy-making. And so on. But these are arguments, not proofs - they don't guarantee that widespread use of these technologies is inevitable, merely that it is likely.
We have a surprisingly good track record in recent decades at just the sort of control needed. Take nuclear weapons, for instance, one of the first technologies that threatened to erase meaning (and everything else). Since Hiroshima and Nagasaki, scientists, diplomats and many ordinary folk have fought to rein in nuclear weapons. They've succeeded in certain respects (the superpowers have begun to shrink their arsenals) and failed in others (the weapons have spread to new nations), but so far the bomb's been dropped just twice. This analogy is imperfect: almost no one wants nuclear war, for example, while some people surely will want to programme their children.
But we've also turned our backs on more popular technologies. DDT, for instance: when it was pointed out that it was ravaging wildlife, people eventually overcame the big money behind it and enacted an effective ban. Or consider genetically modified crops. It's true that corn and soya spread quickly across the American grain belt with hardly anyone noticing; corporate power made sure that Washington wouldn't regulate the new varieties.
But across Europe, consumers began to say in large numbers that they simply wouldn't eat the stuff. Grocery distributors stopped buying it. The growth of GM foods was slowed, and in some cases reversed.
The idea of inevitability is a ruse, an attempt to pre-empt democratic debate. The would-be genetic engineers wear the cloak of "progress", and if it's a little more tattered now than in the past, it's still pretty impressive. They represent a technology that could make large amounts of money. In the laissez-faire economic world we now inhabit, they can go right ahead if no one says, "Stop."
On the other hand, they must contend with a gut revulsion at the prospect of "designer children". Poll after poll shows that people don't like cloning, and that to the extent that they understand human germline engineering, they like that even less. All the European nations have already explicitly banned germline manipulation; the United States, as Richard Hayes puts it, is the "rogue nation" on these questions.
It's possible to imagine a politics emerging that takes technology seriously. A politics that over time generates the net of regulations, and hence of taboos, that keeps us more or less human. We'll never win a permanent victory over these technologies - just as the strongest treaty won't make physicists forget how to build nukes, so germline engineering will always be out there, tempting us. A new part of what it means to be human is to live with these possibilities, and to guide, direct and, when necessary, foreclose them.
We'd have to start considering more carefully what we owe to society (which is to say what we owe to children in general, and to the future) as distinguished from what we owe to our own individual children in our own particular moment. Over time, this politics will let us say, "This far and no farther."
© Bill McKibben, 2003. This is an edited extract from Enough: Genetic Engineering And The End Of Human Nature, by Bill McKibben, to be published by Bloomsbury on June 2 at £17.99.