Homeopathy: The therapy that dare not speak its name?


According to modern pharmacology, there can be no biochemical basis for homeopathy because substances diluted out of existence should not have any effects, let alone therapeutic ones. So to practice or partake of homeopathy is tantamount to believing that nothing has the power to do something, i.e., that homeopathy is magic. Although many homeopaths would no doubt agree with this statement, it is the reason for some scientists' near-apoplexy at the mere mention of homeopathy. At best, they say, it can be no more than a glorified 'placebo effect'; at worst, systematic and systematised quackery. And with the increasing hegemony of molecular pharmacology within conventional medical practice, it is perhaps not surprising if, once again in their 200-year history, homeopaths feel that theirs is a therapy which dare not speak its name.

This view might be understandable given that homeopathy's basic tenets, particularly the principle of potentisation (i.e., that the more a substance is diluted and violently agitated, the more potent it becomes), seem to fly directly in the face of some of science's most cherished beliefs. Yet for over two centuries, homeopaths have claimed success in prescribing for their patients remedies with little or nothing of the original substance left in them. And many of these patients are reported to have recovered and achieved good health. Could these admittedly anecdotal results actually all be down to the placebo effect, or is there something fundamental about the potentisation process, or indeed about water itself, that has yet to be fully appreciated by science? To put it another way, is a homeopathically potentised substance objectively and measurably different from the same substance merely diluted out of existence, or is a homeopathic remedy something that is intrinsically and inextricably entangled with the relationship that develops and exists between the practitioner and the patient? Or, indeed, is homeopathy unique among the healing arts in being a strange complementary brew of both these contrasting views? Interestingly, while increasingly beset by the scepticism of modern biochemistry and pharmacology, the homeopathic community has done itself no ideological favours by splitting roughly along the scientific and phenomenological fault lines just mentioned. The purpose of this article is to map the development of these different strands of homeopathic thought and attempt to see where they are heading.

Samuel Hahnemann, the founder of homeopathy, began his work during the late 18th century, at a time of great change in Western thought. Though atomistic ideas about matter had been speculated on since the time of the ancient Greeks, the actual nature of these atoms and their interactions with each other were unknown. This began to change during Hahnemann's lifetime. As a trained medical practitioner, Hahnemann was well aware of the extreme toxicity of some of the so-called curative remedies used by his colleagues at that time. His disgust at this situation led him to quit medicine and, over time, to develop the ideas that became homeopathy. The principle of similars was one of his first discoveries (actually a rediscovery: the idea that 'like cures like' was well known to Paracelsus in the 16th century and to the ancient Greek physicians). Thus, someone suffering from malaria-type symptoms would be given a dose of cinchona or Peruvian bark; what Hahnemann discovered was that cinchona produces precisely these symptoms in a healthy individual. In order to avoid the toxic aspects of a remedy, Hahnemann would repeatedly dilute it. Naturally enough, he soon discovered that as well as mitigating a remedy's poisonous qualities, dilution also seemed to banish its curative aspects. However, combining dilution with agitation, Hahnemann discovered (purely by chance, he claimed), had the opposite effect. Far from reducing the remedy's curative powers, potentisation, as this combination of dilution and agitation is known, 'released' them. Typically, a homeopathic preparation was either first digested in alcohol or, if insoluble in this spirit, triturated (i.e., finely ground) with an 'inert' material like lactose. Then the alcohol solution was further diluted ten- or a hundred-fold, accompanied by agitation; if a solid, the lactose triturate was further triturated in the same proportions. This would give a 1D (i.e., one-tenth) or 1C (i.e., one-hundredth) solution or triturate. The process of dilution and agitation, or trituration, was repeated to give a 2D or 2C, and so on. In the case of trituration, a point is reached, usually by 6D or 3C, at which the triturate will dissolve in alcohol; thereafter it is treated just like the alcohol solutions.

After each dilution step, the solution is violently agitated – succussed – usually by banging the tube containing the solution against a hard surface. This process is sequentially repeated to give the required higher potency of homeopathic preparation, and to this day it remains the main process by which remedies are made. It has to be remembered that the more dilute the remedy and the more it is succussed, the higher is its homeopathic potency. Really high potencies of 200C or more are usually prepared using special potentising machines.
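For readers who like to see the arithmetic spelt out, here is a minimal sketch (in Python, purely illustrative; the function name is my own) of the bookkeeping behind the D and C nomenclature described above:

```python
# Illustrative bookkeeping for homeopathic potency scales:
# each D step dilutes 1:10, each C step dilutes 1:100.

def dilution_factor(potency: int, scale: str) -> float:
    """Overall dilution after `potency` successive steps on the given scale."""
    step = 10 if scale.upper() == "D" else 100
    return float(step) ** potency

print(dilution_factor(6, "D"))   # 6D  -> 1e6  (one part in a million)
print(dilution_factor(3, "C"))   # 3C  -> 1e6  (numerically equivalent to 6D)
print(dilution_factor(30, "C"))  # 30C -> 1e60
```

Note that the overall factor grows exponentially with the potency number, which is why even modest-looking potencies quickly run up against the molecular limits discussed below.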

At the same time as Hahnemann was proceeding down the homeopathic path, however, natural philosophers (the term 'scientist' was not invented until later in the 19th century), like John Dalton and Amedeo Avogadro, were at last getting to grips with the atomic and molecular constituents of matter. Avogadro speculated that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules. Though a hypothesis and not a theory, this permitted Avogadro to conclude that the chemical formula of water, say, was H2O and not HO, and allowed scientists to begin to probe the very atoms and molecules of different substances. Although the importance of Avogadro's hypothesis was not fully realised at the time, later scientists built on his work. They came up with a figure, named after Avogadro, for the number of atoms or molecules contained within a mass of substance equal to its atomic or molecular weight in grams. Thus, the atomic weight of carbon is 12, so 12 grams of carbon contain Avogadro's number of carbon atoms; the molecular weight of hydrogen is 2, so 2 grams of hydrogen contain the same number of hydrogen molecules. This number is huge – about 6 × 10²³ – but it is the same for any substance, regardless of its atomic or molecular constituents, and is one of the cornerstones of modern science. For the first time, it fully entrenched the notion of matter consisting of fundamental building blocks. So, no building blocks, no matter.

When the insights gained from Avogadro's and later scientists' discoveries about matter were directed at Hahnemann's homeopathy, it quickly became apparent that at some of the potencies used by homeopaths, there could not be any atoms or molecules of the original substance left in the remedies. Consequently, as there was no substance left in the remedies, there could – so the argument ran – be no substance to homeopathy either.
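A back-of-the-envelope calculation makes the point concrete (assuming, generously, that the process starts from a whole mole of the original substance, i.e., Avogadro's number of molecules):

```latex
% Expected number of original molecules after n centesimal (C) steps,
% starting from one mole (N_A molecules):
\[
  N_n \;=\; N_A \times 100^{-n} \;\approx\; 6\times10^{23}\times10^{-2n}
\]
% This drops below one molecule when 10^{2n} > 6\times10^{23}, i.e. when
% n > \tfrac{1}{2}\log_{10}(6\times10^{23}) \approx 11.9.
% So from 12C upwards, a remedy is not expected to contain even a
% single molecule of the original substance.
```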

Hahnemann, however, was not overly concerned about the conclusions of early scientists and their theories about atoms. His argument was that potentisation released the 'spirit-like' inner content of remedies, and this was above and beyond their existence in the material world as atoms. As such, this exemplified the dichotomy between the emerging rationalism of modern science and the ancient vitalistic tradition on which homeopathy was based – the relationship between what was thought of as inert matter and that which energises it, a problem that has exercised Western thought since the ancient Greeks and still does today.

Ironically, this was mirrored in a similar split within science itself. By the end of the 19th century, scientists believed that the universe consisted of matter, which was inert, composed of discrete atoms, localised in space and time, and governed by strictly deterministic laws, such as cause and effect, classical mechanics, and thermodynamics. Conversely, there was energy that drove matter, and radiation (e.g., light, governed by the laws of electrodynamics), which was thought of as consisting of waves permeating the whole of space. Observing this state of affairs, it was thought, changed nothing, and apart from a few odd inconsistencies, there were no real problems left for scientists to solve.

One of these 'odd inconsistencies' lay in the exact nature of the relationship between matter and energy. It was trying to understand this that undermined the certainties of deterministic classical science and ushered in the quantum and relativistic revolutions of the 20th century. These asserted that matter and energy are equivalent and interchangeable (at least at the atomic and molecular scale of things), and that both have wave-like as well as particle-like properties. Coupled with the realisation that ultimately there is an in-built indeterminacy about all observations, physicists realised that they had to abandon notions that purport to provide knowledge of the universe itself in favour of physical theories that tell us only what can be known about the universe. In other words, human experience, knowledge, and its limitations now have to be factored into fundamental theories about the universe, as if our very consciousnesses are directly entangled with how we physically represent the world.

These conclusions that physicists have drawn mean that we can no longer consider ourselves in isolation from that which we are observing. But most importantly, at some fundamental level, it means that everything is inextricably and instantaneously interlinked in a vast and complex matter-energy network that transcends everyday notions of space and time. This is called Einstein-Podolsky-Rosen (EPR) entanglement or, more compactly, 'non-locality', and is a view that many physicists still have trouble comprehending. In the old classical or 'local' view of the universe, things were imagined to operate independently of each other and apparently in isolation in our everyday light-speed-limited space and time. In fact, this is the way that science in general, and physics in particular, is still taught. We shall return to non-locality later. Meanwhile, it is sufficient to observe that non-locality has yet to fully percolate down through all branches of science.

Although it is the dominant paradigm, science still has no monolithic theoretical structure, because its different branches do not 'sing' from the same theoretical hymn sheet. Thus, while physicists benefit from up-to-date and sophisticated ideas based on modern quantum mechanics, relativity and complexity theories, these have yet to fully inform the biomedical sciences, whose theories are still largely steeped in the overly simplistic determinism of the 18th and 19th centuries. If homeopathy has any scientific explanation, biomedicine, with its dependence on older deterministic scientific paradigms, is probably not the best place to go looking for it.

Modern non-deterministic quantum mechanics and complexity theory, on the other hand, could very well provide insights into Samuel Hahnemann’s science that it would be foolish and ultimately dangerous for homeopaths to ignore. We shall examine some of these in due course. Meanwhile, it is worthwhile noting that homeopathy, like science, is also a broad church.

Thus, homeopaths may be broadly categorised into two types: 'medical' and 'professional' (or lay) homeopaths. The former have medical experience as conventional doctors and then go on to study homeopathy; the latter generally have no formal medical training. As a rough rule of thumb, most (but by no means all) medical homeopaths tend to use homeopathy within the confines of the conventional medical model (i.e., treating the body as a collection of interacting parts), whereas professional homeopaths tend to take a more holistic view of human beings (e.g., they ascribe importance to a patient's emotional and mental states, as well as their physical symptoms). This broad taxonomy is complicated by the fact that the class of professional homeopaths subdivides into classical (i.e., single-remedy) prescribers and polypharmacists (i.e., homeopaths who will prescribe more than one remedy). And it has to be said that there have been times when relationships between these various sects have been much less than cordial. To understand this sectarianism, one needs a little homeopathic history.

Hahnemann's great opus on homeopathy, The Organon of Medicine, ran to six editions, each one an improvement on its predecessor. Close scrutiny of these various editions shows that he was experimenting and innovating throughout his life. In particular, between the 5th and 6th editions, it is clear that Hahnemann had undergone a sea change in his attitude to homeopathic prescribing. Thus, in the 5th edition, Hahnemann strongly advocates single-remedy prescribing for a very good reason: it makes the effect of the remedy on the patient much easier to track over time. In the 6th edition, however, ever conscious of the needs of his patients, we see Hahnemann developing new ways of prescribing, some of which entailed the use of more than one remedy.

Unfortunately, Hahnemann died between publication of the 5th and 6th editions of The Organon, and his second wife held up publication of the 6th for quite some time after his death. So it was the 5th edition that informed homeopathic thinking, in particular that of arguably the most influential homeopath after Hahnemann, Dr James Tyler Kent. A follower of the esoteric teachings of Emanuel Swedenborg, Kent more than anyone preached the doctrine of the single remedy as prescribed by Hahnemann in the 5th edition of the Organon. Kent also made huge contributions to the philosophy guiding homeopathy, carving out the basis of what is now called classical homeopathy.

Kent has so dominated homeopathic thinking that it is only in the second half of the last century that Hahnemann's later innovations of the 6th edition – polypharmacy, LM potencies – began to be taken seriously. The difference between the two editions' styles of prescribing is one of the main reasons for disagreements within the professional homeopathic community. Classical homeopaths adhere scrupulously to Hahnemann's 5th edition of The Organon and Kent's dire warnings against using more than one remedy or failing to wait and observe its action on the patient. Any other prescribing strategy, the classical homeopaths argue, can confuse the case and will not lead to overall cure. This process can take some time as the remedy works with and through the patient's economy. Though logical and thorough (and ultimately in the patient's best interests), it can cause the patient some suffering (from aggravation or non-alleviation of the original symptoms) while he or she waits for the homeopath to judge when the time is right to continue or change treatment. Also, human beings are probably more complex than they were during the 19th century, so it may not be possible to find the one single remedy that covers all of a patient's symptoms on every level of their being. This is where the different prescribing styles developed since Hahnemann's 6th edition of The Organon can make a big difference to the therapeutic outcome. As with most things in life, homeopathy can benefit, and has benefited, from a little ecumenical eclecticism.

Whatever the arguments between homeopaths, they are as nothing compared to the gripe some scientists have with homeopathy. According to them, and for the reasons mentioned above, homeopathy should not work, therefore it cannot work. But if it is the potentised remedy itself that brings about cure then, like any therapeutic substance, there should be some objective way of testing its efficacy. This is usually attempted via biomedicine's grand inquisitor: the double-blind placebo-controlled (or DBPC) trial. Here, a specific indication of a homeopathic remedy (e.g., an allergy) is investigated in a selection of volunteers, some receiving the indicated homeopathic remedy and others a placebo. In the double-blind trial, neither the volunteers nor the researchers conducting the trial know who has received the remedy and who the placebo. Many such trials have been performed on homeopathic remedies. This is not the place to report on their details: suffice to say that though they never seem to be completely conclusive, on the whole, the number of DBPC trials in favour of homeopathy seems to have exceeded those against. Nevertheless, whenever any of these trials appear to rubbish homeopathy, the cry from conventional scientists has been "We told you so!" But where DBPC trials show support for homeopathy, these same scientists invariably call for more. Clearly, no evidence of bias here!
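To make the logic of blinding concrete, here is a minimal sketch (in Python; the names and numbers are hypothetical, not drawn from any actual trial) of how a double-blind allocation is typically organised: a randomly generated code linking anonymous volunteer labels to 'remedy' or 'placebo' is held by a third party, and neither volunteers nor researchers see it until all the outcomes are in.

```python
import random

def allocate(n_volunteers: int, seed: int = 42) -> dict:
    """Randomly assign half the volunteers to remedy, half to placebo.

    Returns the secret code {volunteer_id: arm}; researchers and
    volunteers work only with the anonymous ids.
    """
    rng = random.Random(seed)
    arms = ["remedy", "placebo"] * (n_volunteers // 2)
    rng.shuffle(arms)
    return {f"V{i:03d}": arm for i, arm in enumerate(arms)}

secret_code = allocate(40)
# Outcomes are recorded against ids alone (e.g., outcomes["V007"]).
# Only after data collection is complete is secret_code revealed,
# and the remedy and placebo groups compared statistically.
```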

This stand-off has led to other attempts to show that substances potentised out of existence still exert influence in their own right (i.e., that they operate locally), and it is this claim that is the main source of conflict with conventional biomedical science. Indeed, controversial evidence in support of the potency principle (and perhaps some biochemical basis for homeopathy) has been seeping out of laboratories around the world since at least 1989.

Back then, the French scientist Jacques Benveniste first suggested that water might somehow be able to retain the 'memory' of substances once dissolved in it but subsequently diluted into oblivion. Benveniste was working with certain human white cells called basophils that are involved in allergic reactions. These possess tiny granules containing substances, such as histamine, that stain with a special dye. The dye can be decolourised (which means the tiny granules become degranulated) by a substance called anti-immunoglobulin E, or anti-IgE; so far, this is standard biochemistry.

What Benveniste claimed so controversially was that basophil degranulation continues long after the anti-IgE is diluted and agitated out of existence. His conclusion, that the water which had contained the anti-IgE somehow retained a 'memory' of it even when no trace of it could be found, was scoffed at by the scientific community. How, they asked, could liquid water (whose individual molecules are changing their relative positions at a phenomenal rate) possibly retain any memory of substances once dissolved but now diluted out of existence? Impossible, they concluded, because the individual water molecules are constantly jiggling about. After Benveniste published his results in Nature, the same journal led a now-famous witch hunt against him that, via the outrage his findings caused the French scientific establishment, ultimately cost him his labs, his funding and all scientific credibility. Four years later, in 1993, a team from University College London attempted to reproduce Benveniste's work and failed. His status as a heretic seemingly assured, Benveniste argued that the British team had misunderstood his experimental protocols, but nobody was listening.

The strange thing was that about the time Benveniste was being 'outed' as a scientific heretic in 1989, theoretical work by Italian physicists actually predicted the molecular mechanism of just such a water-memory effect using a quantum theoretical approach. This showed that given a large enough number of water molecules (about 10¹⁵ – 10¹⁷), the sum total of all the interactions between them leads to a state where they spontaneously self-organise into a single emergent whole called a coherent domain. In other words, the whole represented by this huge number of water molecules is greater than the sum of its individual molecular parts. What is more, it appears that such coherent domains could be triggered by the process of dilution and agitation as practiced by homeopaths. Such coherent behaviour amongst large numbers of interacting elements is not unknown in physics and is the basis, for example, of how lasers are thought to operate (in this case, the coherent behaviour originates with photons). But, again, nobody was really taking any notice. Then, in 1999, a pan-European research team, that included Prof. Madeleine Ennis of Queen's University Belfast, performed a variation on Benveniste's experiments.

Basophil degranulation leads to the production of large amounts of histamine, which, by negative feedback, curbs its own release. The pan-European experiment compared the inhibition of anti-IgE-induced basophil degranulation by ultra-highly diluted histamine solutions against control solutions of pure water. Four separate European labs were each sent test-tubes of pure water and some containing the histamine ultra-dilutions. Working blind, all four labs found the histamine ultra-dilutions inhibited basophil degranulation, just like histamine itself, and the results were statistically significant in three out of the four labs.

Ennis, previously a staunch believer in the intrinsic impossibility of homeopathy, admits to having had to suspend her disbelief and has started looking for a rational explanation. She commented that if the findings of the pan-European experiment she was part of were repeated, then the whole of physics and chemistry might have to be rewritten. Not necessarily.

Although Benveniste had lost much of his funding and his laboratories after 1989, that had not stopped him working. By the late 1990s, he had fleshed out his earlier ideas on the memory of water and was beginning to suggest that it might be possible to 'write' on water, analogous to the way one electronically copies information onto floppy disks and CD-ROMs. Such information storage would not show up under the usual methods of scientific investigation, for precisely the same reason that chemical analysis of a disk will indicate vinyl plastic and ferric oxide but not the information stored on it. This idea of 'writing' on water may not be as far-fetched as it sounds, as the role of water in originating, sustaining, and orchestrating the intricate biomolecular dance within living cells is only just beginning to be understood. All of which goes to show that water has many more surprises in store for scientists to investigate. Here is one of the most recent examples.

A Korean team of chemists has shown that as substances dissolved in water become more dilute, the remaining molecules clump together to form aggregates of increasing size. As dilution increases, the clusters grow large enough even to interact with biological tissues. Some biochemists are now speculating that perhaps the less dilute remedies used by homeopaths (where traces of the original solute still exist), could work after all.

Thus, it would not be too bold to suggest that the principle of potentisation is perhaps beginning to see some validation within science. Nevertheless, the signs so far suggest this has had little impact on the larger scientific community. Strangely, the homeopathic community on the whole seems equally unimpressed. Either from ignorance and fear of science and its methods, or just complacency, many homeopaths still see little purpose in attempting to 'prove' something they already 'know' works.

However, these latest, admittedly still controversial, results from science raise an interesting question. Could it be that as Hahnemann was discovering homeopathy at the end of the 18th and the beginning of the 19th centuries, European scientists like Amedeo Avogadro were at the same time speculating about molecules and proceeding up a two-hundred-year-long blind alley? This is in no way meant to imply that molecules do not exist: that would be akin to claiming, in the face of overwhelming evidence, that the world is flat. But perhaps the importance of molecules has over time been exaggerated at the expense of the solvent in which they are dissolved. In other words, it is the relationships between molecules, rather than the molecules themselves, that are important. Benveniste's later experiments certainly point in that direction, and it is an inescapable conclusion of a more thorough reading of quantum mechanics and its consequences.

So far we have considered the possibility that ultra-diluted substances of themselves could produce measurable effects, including therapeutic ones. However, though fascinating, this work could have the effect, as in conventional biomedicine, of confining attention to the remedy as the local therapeutic agent, at the expense of the equally important non-local dynamics of the patient-practitioner relationship. Of course, the latter is far more difficult to quantify and make amenable to the deterministic and reductionist methods of enquiry used in conventional biomedicine. Nevertheless, there are some arguing for just such a shift of emphasis away from a purely biomedical approach to homeopathy. Walach has developed a non-local model of homeopathy based on Jungian synchronicity and semiotics, while I have begun investigating similar non-local metaphors for homeopathy based in quantum theory. There are also fascinating contributions to the understanding of the healing process coming from practitioners of complexity theory. Here, the ancient concept of Vital Force achieves some level of resurrection in modern terms by being seen as an emergent property of billions of living cells. This totality generates its own self-sustaining 'field', which is not localised in any one cell, organ, body part, or consciousness, and is capable of resisting local entropic dissipation. And within the larger domain of complementary medicine, and even in conventional medicine itself, a move away from purely biomedical explanations of therapeutics, and attempts to grapple with the underlying importance of the placebo effect and the patient-practitioner relationship, have been in existence for some time.

Oddly, this has been mirrored recently in what might be thought of as the somewhat bizarre prescribing habits of certain homeopaths. This interesting group (who arguably could be thought of as the direct descendants of Hahnemann's pioneering tradition) have either ceased giving remedies altogether or use placebos. They argue that it is the nurturing of the patient-practitioner relationship and the intentionality of the practitioner that are all-important. Though their anecdotal work must be treated with some caution, they report the same success with patients whether remedies are used or not. All of which throws into question just what importance the homeopathic remedy actually has.

The picture that seems to be emerging, especially from the non-local and complexity theory approaches mentioned earlier, is that perhaps the remedy is important on two counts: locally, in itself, and non-locally, within the context of the patient-practitioner relationship. This view was partly anticipated by Kent over 100 years ago when he argued that a remedy could only be considered homeopathic when it has cured the case. In other words, an unprescribed bottle of remedy pills sitting on a shelf is not homeopathic. It becomes so when it entangles with the practitioner who prescribes it and the patient who takes it.

It could be argued that such an interaction smacks of an appeal to 'magic'. However, looked at from the algebraic point of view used in quantum theory, such patient-practitioner-remedy (PPR) entanglement, just like the EPR entanglement (i.e., non-locality) mentioned previously, can be considered a non-commuting relationship. This means that the sequence of operations performed on the elements of such a relationship is vitally important to the operational outcome. Mathematically, this is equivalent to saying that A times B is NOT the same as B times A. If this seems bizarre, then consider the sequence of operations in making a cake. Clearly, when the baking occurs in the sequence of cake-making is vitally important to whether an edible delicacy or a charred abomination is produced: baking can only occur AFTER everything has been mixed, and not before. In homeopathic terms, non-commutation means that it also matters in which sequence the pieces of the homeopathic puzzle are put together, e.g., a remedy cannot be sensibly prescribed until there is a patient whose case has been taken.
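A standard illustration from elementary matrix algebra (nothing to do with homeopathy as such, but it shows how naturally non-commutation arises once operations are written algebraically):

```latex
% Two operations represented as 2x2 matrices:
\[
  A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
  B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\]
% Applied in different orders, they give different results:
\[
  AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
  \;\neq\;
  BA = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\]
% The order of operations changes the outcome, exactly as in the
% cake-baking analogy above.
```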

These are the ground rules, if you like, for PPR entanglement to occur, and as they are similar to those used to describe non-locality in quantum systems, it seems reasonable to suggest that the process of homeopathy might be amenable to a quantum theoretical interpretation. This is beginning to yield some preliminary but interesting results. Thus, one of the quantum metaphors being developed (based on molecular orbital theory as used by chemists) suggests that as a homeopathic remedy becomes more highly potentised, it should have a deeper action on the patient – a prediction seemingly borne out in homeopathic practice.

In trying to pull together the various threads discussed in this article, it appears that a sensible (but, as ever, provisional) set of conclusions to draw so far from the debate about the therapeutic action of homeopathy is that:

  • Remedies appear to have a local action, which may be
  • entangled within the inherent non-locality of the patient-practitioner relationship; and
  • there is more, much more, to be discovered about that most common fluid on the planet, i.e., water.

All of which means that homeopaths need to become at least acquainted with the new results coming out of science. Admittedly, these results seem to be somewhat confusing – on the one hand suggesting that perhaps potentised remedies in their own right do have observable effects on people and substrates, while on the other these effects seem somehow to be entangled within the context of their prescription. However, considering the way science, particularly physics, has progressed over the last century, such confusion should not be too surprising. The loss of certainty and the realisation of complementarity between pairs of observable phenomena (e.g., position and momentum; energy and time) that came about with the rise of quantum theory is only just beginning to be felt outside the narrow realms of particle physics. In fact, there are those beginning to examine the theoretical consequences of quantum theory, e.g., non-locality, complementarity, and entanglement, in everyday macroscopic phenomena.

If non-locality is difficult for physicists to understand in their sub-microscopic world, how much more difficult must it be for us mere mortals to understand in ours? So, it could well be that by considering non-local patient-practitioner interactions as well as the local effects of remedies, homeopathy might begin to make sense within a scientific framework. In addition, the real meaning and profundity of the placebo effect in all branches of medicine, be they conventional or complementary, might at last begin to unfold.

Lionel Milgrom is an academic research scientist and co-founder/MD of a university-based biotech spin-out company. In the evenings and at weekends, he is a fully qualified professional homeopath in London and practices from home and at a local clinic. He is also a freelance science writer.