Science - The Heart of Modern Medicine
The COVID-19 pandemic shook the world
in more ways than could be imagined. For many, the pandemic brought into their
lives a wave of uncertainty, fear and distress. Several precious lives were
lost, and many were left with either poor health or long-term financial
difficulties. In that gloomy scenario, the one thing that cannot be ignored is
the glimmer of hope that shone bright through the relentless efforts of
scientists and doctors who pursued solutions following the best of scientific
methods. It is only through the pursuit of science that COVID-19
treatment strategies, and finally the much-awaited vaccines, were developed,
bringing the pandemic under control. Epidemics and pandemics like the bubonic
plague, grimly remembered as the 'Black Death' because it killed nearly half the
population of medieval Europe, the 1918 Spanish flu, which killed 50 million
people worldwide, and the ravages of smallpox are only some examples of the
immense suffering and loss of life that were common in the pre-scientific era.
Contemporary painting of Marseille during the Great Plague (1721), by Michel Serre. Public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=56832600
Milan during the plague of 1630: plague carts carry the dead for burial, by Melchiorre Gherardini. Public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=15848648
Of late, several misunderstandings
seem to be prevalent about Modern Medicine. These range from the common
accusation that its medicines have multiple side effects, to labelling Modern
Medicine as 'Western medicine' or 'Allopathy' and looking at it as just one
among many prevalent systems of medicine, to the wilder accusation that
multinational pharmaceutical companies push medicines by inventing new diseases
only to increase their profits. Addressing every allegation is beyond the scope
of this article, but I will try to clarify at least a few basic points here.
Evolution of Modern Medicine
Traditional systems of medicine derive
their validity from being considered time-tested, or by being based on certain
ancient texts, or on the authority of an experienced former or current
practitioner, and sometimes even by invoking mystical inspirations. Usually
these are backed up by theories congruent with a certain world view. Although
better than theories blaming diseases on the wrath of supernatural entities for
personal acts of omission and commission, or on black magic or the evil eye,
and prescribing supplications, rituals, or sacrifices as remedies, the practice
of traditional systems, as also of complementary and alternative medicine (CAM)
systems, still relies heavily on the subjective interpretations of a physician,
without any definite, independently verifiable evidence. Allopathy, the
word often used to denote Modern Medicine, was coined by the German physician
C. F. Samuel Hahnemann in the early 1800s, ironically in a pejorative sense.
Allopathy, as he defined it, meant using remedies that produce effects
different from, or opposite to, those caused by the disease (allos = other,
pathos = suffering), without addressing the root cause of the disease or its
prevention. He was particularly against conventional medical practitioners of
his time using treatments such as strong elixirs of chemical compounds,
bloodletting, leeching, purging, induced vomiting, grotesque amputations and
the like, which often caused more suffering than the disease itself. In
opposition, Hahnemann founded Homeopathy, based on the principle of 'like cures
like' (homoios = similar), wherein infinitesimally diluted doses of a substance
capable of producing, in healthy individuals, symptoms similar to those of the
disease are administered to patients as part of treatment, along with lifestyle
and dietary changes, and he positioned it as a gentle and holistic medical system.
Homeopathy looks at the horrors of Allopathy (1857), by Alexander Beydeman. Public domain, via Wikimedia Commons.
But today, the system of medicine labelled
as allopathy owes much to the advances in science of the 19th and 20th
centuries. It involves, in addition to a clinician's experience, skill and
judgement, knowledge gained from multiple scientific disciplines, along with
rigorous analysis and testing, to back up or refute prevailing practices and
to devise new treatment methods, drugs and preventive strategies. The fields of
human anatomy, physiology, microbiology, pathology and preventive medicine had
started taking root even during Hahnemann's time, and flourished thereafter
along with the discovery and elucidation of the laws of physics and chemistry
by the great scientists of that era. Hence, it is called 'Modern Medicine'. In
line with science, scepticism is the rule. Individual patient testimonials,
miraculous healing stories, or a practitioner's claim of a new treatment or
cure, even from an experienced and authoritative figure in the field, are not
taken at face value. Theories of disease causation and treatments are accepted
only when they are proven.
Clinical Studies
In addition to the pure science research
mentioned above, another branch that is pivotal in generating the necessary
evidence base for the practice of modern medicine is Epidemiology, which
involves the now frequently heard 'clinical studies'. A broad understanding of
these studies is therefore useful for making sense of the practice of modern
medicine. A wide variety of studies are used today, but they are mainly divided
into Observational and Experimental studies. Observational studies are further
divided into Descriptive studies and Analytical studies.
Descriptive studies
In a descriptive study, a disease is
studied in its natural course with respect to time, place and person in a
defined population. It helps in establishing a hypothesis about the plausible
causative factors of the disease in question, and is especially useful when
little is known about its risk factors and natural course. To illustrate, it
was a spot map of cholera cases in the Golden Square district of London,
showing clustering of cases around a common water pump on Broad Street, that
led the physician John Snow to hypothesize in 1854 that cholera was a
water-borne infectious disease, as opposed to the prevalent theory that it was
due to bad air emanating from the river Thames. Based on this insight, local
officials ensured that only clean drinking water was supplied, which greatly
controlled cholera epidemics in London, well before the discovery of the
Vibrio cholerae bacterium as the infectious agent.
Similarly, in its initial days, the AIDS epidemic was hypothesized to be an
infection transmitted through blood products and sexual contact, when cases
were found to cluster among those requiring frequent blood transfusions, such
as thalassaemic and haemophiliac patients, intravenous drug users, and persons
having unprotected sex with multiple partners. The risk factors for many
lifestyle-related diseases such as obesity, chronic heart disease, and cancers
were initially hypothesized from similar case reports, case series and
descriptive studies.
Original spot map by John Snow (black dots are cholera cases). Wikimedia Commons.
Analytical studies
Later, these hypotheses are tested using an
analytical study. Case-Control and Cohort studies are two major types of
analytical studies. In a case-control study, individuals are divided into those
having the disease (cases) and those not having the disease (controls), and
enquired about their exposures to plausible risk factors to look for an
association. On the other hand, in a Cohort study, a defined group of people
with similar characteristics and known to be exposed, or harbouring the
plausible risk factors, are followed up over a period of time, to look for the
emergence of the health outcome in question. Statistical measures such as the
Odds Ratio, Relative Risk, Attributable Risk etc., are employed to determine
the strength of association between the risk factor and the health outcome. It
was by using such case-control and cohort studies that Richard Doll and A.
Bradford Hill, scientists at the Medical Research Council, United Kingdom,
themselves smokers and initially sceptical of physicians' observations linking
tobacco smoking to the rising cases of lung cancer, were for the first time
able to conclusively demonstrate a strong association between the two. For
their case-control studies they collected exposure histories and detailed
smoking habits from lung cancer patients across hospitals in London, who formed
the 'cases', and the same history from patients of similar age and sex who did
not have lung cancer, who formed the 'controls'. Upon analysis, it was found
that the odds of male smokers developing lung cancer were 14 times higher than
those of male non-smokers, and that this increased with the number of
cigarettes smoked per day and the duration of smoking. To strengthen their
findings, they commissioned a cohort study in which they sent a questionnaire
to all registered British doctors of that time (forming a cohort), enquiring
about their smoking habits, and followed them up. In their preliminary findings
published in 1954 after 29 months of follow-up, they found that smokers were
much more likely to die due to lung cancer and also due to coronary thrombosis
(heart attack), thus adding more credibility to their earlier findings.
Subsequently, more analytical studies were done by others and now it is well
established that smoking can increase the risk of lung cancer by 10 to 30 times
as compared to non-smokers. Another classic example of a cohort study is the
Framingham Heart Study. It was commissioned in 1948 to determine specific risk
factors for the rapidly rising cases of coronary heart disease (CHD) in the
American population. A total of 5209 men and women aged 30 to 62 years were
recruited in the town of Framingham, Massachusetts, and extensive physical
examinations and lifestyle interviews were conducted regularly. They were then
followed up to look for the emergence of CHD. From this study, high blood
pressure and high cholesterol were identified as important risk factors for the
development of CHD. The study has continued for many years since, recruiting
many more individuals, and has yielded several significant insights into
chronic non-communicable diseases. Cohort studies also helped establish a
strong association between ambient air pollution, lung cancer and
cardiovascular mortality (CPS-II study). Cohorts of Japanese atomic bomb survivors
(Life Span Study) yielded important information on the long-term effects of
radiation from nuclear attacks. Birth cohort studies, involving long-term
follow-up of children born at a particular time (e.g., the Millennium Cohort
Study, the Melbourne Asthma Study), give us insight into the effects of
changing lifestyles, food habits and other novel factors on health, and are now
routinely conducted. It should be noted that in both descriptive and analytical
studies, no active attempt is made by the researcher to intervene in or modify
the disease course. Subjects can continue to seek medical interventions on
their own, as necessary, and these are only recorded by the researchers as part
of the study. To summarize, a descriptive study collates individual
observations of a trend and frames a hypothesis, while an analytical study
determines, with the help of biostatistics, the strength of association between
that trend and the specific factors hypothesized to be responsible for it.
Framework of a case-control study

Suspected risk factor | Cases (disease present) | Controls (disease absent)
Present               | a                       | b
Absent                | c                       | d
Total                 | a + c                   | b + d

From this table, the odds ratio = ad/bc can be derived; when the disease is rare, it approximates the relative risk.
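As a rough, illustrative sketch (the numbers below are made up and are not from the Doll and Hill studies), the odds ratio can be computed directly from such a 2x2 table:

```python
# Hypothetical 2x2 case-control table (illustrative numbers only)
a = 120  # cases exposed to the suspected risk factor
b = 40   # controls exposed
c = 10   # cases not exposed
d = 50   # controls not exposed

# Odds ratio = ad / bc
odds_ratio = (a * d) / (b * c)
print(f"Odds ratio: {odds_ratio:.1f}")  # 15.0 with these made-up numbers
```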
Framework of a cohort study

Cohort                                      | Disease present | Disease absent | Total
Exposed to putative aetiological factor     | a               | b              | a + b
Not exposed to putative aetiological factor | c               | d              | c + d

From this table, the Relative Risk (RR) and Attributable Risk (AR) are calculated.
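Again as a rough sketch with hypothetical numbers (not taken from any of the studies cited here), the Relative Risk and Attributable Risk follow directly from the cohort table above:

```python
# Hypothetical cohort table (illustrative numbers only)
a, b = 30, 970   # exposed: diseased, healthy
c, d = 10, 990   # not exposed: diseased, healthy

incidence_exposed = a / (a + b)      # risk of disease among the exposed
incidence_unexposed = c / (c + d)    # risk of disease among the unexposed

relative_risk = incidence_exposed / incidence_unexposed       # RR
attributable_risk = incidence_exposed - incidence_unexposed   # AR (risk difference)

print(f"Relative Risk: {relative_risk:.1f}")          # 3.0
print(f"Attributable Risk: {attributable_risk:.3f}")  # 0.020, i.e. 20 extra cases per 1000 exposed
```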
Randomized Controlled Trials (RCTs)
But an association of a risk factor with a disease may not, by itself, be
enough to establish causation. In addition, analytical studies are prone to
biases even after taking due care, and they cannot help assess the reliability
of a new diagnostic method, or test the effectiveness or otherwise of a
treatment methodology or surgical procedure. For these problems, an
experimental study is undertaken. These experimental studies are known as
Randomized Controlled Trials (RCTs), and are now considered the gold standard
for generating largely unbiased evidence. There is no denying that a lot of
pure science research involving physiology, pathology, microbiology, genetics,
genomics etc., is still required to discover specific pathogens and to
understand the complex mechanisms involved in disease causation when a
susceptible host is exposed to particular risk factors and pathogens. But
clinical studies help direct such research in the right direction, and also
help in planning important public health interventions.
A Randomized Controlled Trial (RCT), in simple terms, is a controlled
experiment on consenting subjects, involving multiple steps, with randomization
and blinding as its backbone. It starts with writing down a protocol of the
study to be carried out, detailing the research question, the target population
to be studied, study objectives, statistical methods, inclusion and exclusion
criteria, a review of existing scientific literature on the subject, and the
forms that will be filled in for data collection. Once a protocol is written down and approved by
the institutional scientific and ethics committees, it needs to be strictly
adhered to till the end, for the results of the study to be acceptable.
Randomization is a statistical process by which subjects are assigned to either the intervention arm or the control arm, ensuring that baseline characteristics such as age, sex, ethnicity etc., are equally distributed in both groups and that every participant in the study population stands an equal chance of being assigned to either arm, thereby reducing bias. When all other variables tally, the only variable left to study is the planned intervention, allowing us to infer a direct cause-effect relationship. Blinding is the process by which either the participant alone (single blinding), or both the participant and the doctor (double blinding), are unaware of the allocated group and the treatment received. Blinding is important to prevent the results being skewed, consciously or unconsciously, by either the participant or the treating doctor. The planned intervention is then carried out by the investigators on the experimental group, with either a placebo or the prevailing standard treatment given to the control group. After statistical analysis, the intervention is reported as inferior, non-inferior or superior, as compared to the control.

The testing of the new drug streptomycin in patients with serious forms of tuberculosis (extensive pulmonary, meningeal and miliary forms) was the pioneering RCT, conducted in 1948 by the Medical Research Council, UK. Bed rest was the prevailing standard treatment at that time for serious forms of tuberculosis, but the success rate was very low and mortality was high. Patients in the age group of 15 to 30 years were enrolled in the study. After randomization, the intervention arm received streptomycin injections along with bed rest, whereas the control arm was given bed rest alone. The MRC investigators ensured that neither the patients nor the treating doctors knew who was receiving streptomycin and who was not (double blinding), to obviate false claims of improvement or adverse effects by either party. The design was considered justified because there was no other effective treatment for serious forms of TB, and because streptomycin had already proved effective in laboratory cultures and in animal studies. Patients in the intervention arm showed improvement over a period of six months, with only 4 deaths out of 55 patients, as compared to 15 out of 52 in the control arm. Unfortunately, over the next six months the differences started to diminish and both arms showed deterioration. This eventually paved the way to understanding acquired antibiotic resistance in microbes through mutations, and to the concept of treating tuberculosis with multiple drugs. Further RCTs using multiple drugs for TB proved effective, and multi-drug therapy remains the standard, with high cure rates.

RCTs of the present time are regulated by the International Council for Harmonisation's Good Clinical Practice guidelines (ICH-GCP) and local governmental laws, requiring investigators to provide detailed Informed Consent Forms (ICF) to study participants, along with the right to withdraw from the study at any time and compensation for any untoward eventualities arising out of the study. For many decades now, RCTs have been the definitive force in refining the practice of medicine, and they are now indispensable.
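For readers who like to see the mechanics, here is a minimal sketch of what simple randomization might look like in code. The participant labels and arm names are invented for illustration; real trials use pre-specified, audited randomization schemes (often block or stratified randomization) with proper allocation concealment.

```python
import random

def randomize(participant_ids, seed=2024):
    """Allocate each participant to 'intervention' or 'control' with equal probability."""
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible and auditable
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

# Hypothetical participant list
participants = [f"P{n:03d}" for n in range(1, 11)]
allocation = randomize(participants)

# In a double-blind trial, neither participants nor treating doctors see this table;
# it is held by an independent party until the trial is unblinded for analysis.
for pid, arm in allocation.items():
    print(pid, arm)
```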
RCTs have helped us understand the role of aspirin and antiplatelet drugs in preventing heart attacks, move away from radical mastectomy for breast cancer towards more breast-conserving surgery (the NSABP B-04 and B-06 studies, NEJM, 2002), appreciate the role of diet in controlling blood pressure (the DASH trial), and recognize the harmful effects of intensive glucose control (the ACCORD trial), to name just a few.
Drug trials
The approval of new drugs and vaccines also
goes through a rigorous process involving five stages, supervised at every step
by governmental and independent regulators of a country, and sometimes also by
international health agencies. In the USA, the Food and Drug Administration
(FDA) is responsible; in India, the Central Drugs Standard Control Organization
(CDSCO), headed by the Drug Controller General of India (DCGI), is the
equivalent. The first stage consists of pre-clinical studies done on laboratory
animals, followed by Phase 1, where the behaviour of the drug in the body and
its safety are tested in a small group of healthy volunteers. At this stage,
the dose of the drug that produces the necessary effect in the body without
causing major side effects is worked out. Thereafter, Phases 2 and 3 involve
testing of the drug on selected patients, after taking informed consent and
following ethical and legal procedures. In Phase 2, the number of patients is
small and the focus is mainly on dosage and safety. Based on the results of
Phase 2, a greater number of patients is recruited for Phase 3, which is
designed as a Randomized Controlled Trial involving an intervention arm (the
new drug) and a control arm (placebo or the current standard drug). Further
testing and release into the market are stopped if any concerns emerge during
any of the stages. If the drug is found to be reasonably safe, the results are
published in peer-reviewed scientific journals, and it is then released into
the market with approval from regulators, along with complete details of
dosage, indications, contraindications, and possible side effects. This is
followed by the final stage, Phase 4 post-marketing surveillance, to look for
any long-term adverse effects related to the drug. When serious long-term
adverse effects are noticed, the drug is withdrawn from the market, and this is
communicated to practitioners and the general public.
Levels of evidence
Nowadays, innumerable trials are conducted
the world over, for almost every clinical question possible. But it is to be
understood that not all trials are feasible in every scenario, owing to
ethical, logistic and financial constraints. For example, it would not be
ethically acceptable to study the effects of smoking by conducting an RCT
wherein people in the intervention arm are asked to smoke cigarettes. In such
scenarios, we have to rely on the strength of association between risk factors
and a health outcome generated from case-control and cohort studies. Also, RCTs
and drug trials need a lot of planning and interdisciplinary co-ordination, and
can be quite expensive. Nevertheless, where feasible, RCTs remain the gold
standard. Taking all these types of studies together, the evidence behind a
treatment strategy, new drug, surgical technique, diet plan etc., is now graded
in a hierarchy, figuratively represented by the Evidence Pyramid.
A systematic review is a structured, comprehensive synthesis of all the studies (RCTs, case-control and cohort studies) conducted on a particular clinical question. A meta-analysis is the statistical pooling of the data from the studies included in such a review, and systematic reviews with meta-analyses are graded as the strongest level of evidence. Both strategies make a thorough appraisal of the individual clinical studies, right from the protocols, statistical methods used, sampling techniques and number of patients enrolled, to the final consistency and validity of the data generated, thereby determining the quality of each study. Studies which do not meet the required criteria are excluded from the analysis. Studies conducted on larger samples, and those representative of diverse racial and ethnic groups, are considered to be of higher quality. Independent medical societies in each specialty and sub-specialty, as well as government bodies, make use of these studies and levels of evidence to frame clinical practice guidelines from time to time, for the use of modern medicine practitioners. The term 'Evidence Based Medicine' (EBM) came into use in the 1990s, popularised by Gordon Guyatt, to describe this method of practicing medicine.
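To give a flavour of what 'pooling' data involves, the sketch below combines the log odds ratios of three hypothetical studies using fixed-effect inverse-variance weighting; the numbers are invented, and real meta-analyses also assess heterogeneity and often use random-effects models.

```python
import math

# Each tuple: (log odds ratio, variance of the log odds ratio) from a hypothetical study
studies = [(0.8, 0.04), (0.5, 0.09), (0.7, 0.02)]

# Fixed-effect inverse-variance pooling: weight each study by 1/variance
weights = [1 / var for _, var in studies]
pooled_log_or = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled odds ratio: {math.exp(pooled_log_or):.2f}")
print(f"95% CI: {math.exp(pooled_log_or - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f}")
```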
The COVID-19 pandemic and the hunt
After the identification of the novel
coronavirus SARS-CoV-2 as the causative agent of a cluster of severe pneumonia
cases with human-to-human transmission in Wuhan, China, and the WHO's
declaration of COVID-19 as a pandemic, a worldwide hunt for treatment
strategies and vaccines started. Remdesivir, an antiviral drug which had
inhibited the replication of the SARS-CoV-1 and MERS viruses in the laboratory,
was identified early in the pandemic as a possible agent against SARS-CoV-2.
The drug had also shown activity against SARS-CoV-2 in the laboratory. As part
of the ACTT-1 multinational double-blind RCT, initiated in February 2020 and
conducted on 1062 hospitalized severe COVID-19 patients, 541 patients were
assigned to the remdesivir arm and 521 to the placebo arm through
randomization. Both arms continued to receive usual care. The trial was
completed in April 2020. Analysis of the trial data showed that patients in the
remdesivir arm had a median recovery time of 10 days as compared to 15 days in
the placebo arm, and that mortality was 6.7% with remdesivir versus 11.9% with
placebo by day 15, and 11.4% versus 15.2% by day 29, leading to the conclusion
that remdesivir was superior to placebo for hospitalized COVID-19 patients. The
FDA approved the use of remdesivir for COVID-19 in December 2020. The World
Health Organization (WHO) also initiated a large multinational RCT, the
Solidarity trial, in March 2020 to test the repurposed antiviral drugs
remdesivir, hydroxychloroquine, lopinavir and interferon beta-1a on 11,330
hospitalised COVID-19 patients.
Interim trial reports were released in February 2021. No meaningful benefit was
seen with hydroxychloroquine, lopinavir or interferon beta-1a, and these drugs
were dropped from the trial midway. Solidarity did not show a mortality benefit
for remdesivir either, but guidelines retained its use as other studies had
shown that it reduced the time to recovery and hospital discharge in survivors,
as compared to placebo.

RECOVERY was another multi-site RCT conducted early in the pandemic across
National Health Service (NHS) hospitals in the UK, to test potential treatments
for hospitalized COVID-19 patients, including those requiring supplementary
oxygen or mechanical ventilatory support. A total of 11,303 patients were
enrolled in the study and randomized to receive one of the potential drugs or
usual care. The drugs included dexamethasone (a steroid), hydroxychloroquine,
lopinavir-ritonavir, azithromycin, convalescent plasma and tocilizumab. The
interim trial results for dexamethasone were out by June 2020; several of the
other drugs were later dropped from the study for showing no benefit. A total
of 2104 patients were randomized to receive dexamethasone and
4321 to receive usual care. The trial data analysis showed that the use of
dexamethasone resulted in lower 28-day mortality among those who were receiving
either supplemental oxygen or mechanical ventilatory support, as compared to
usual care.

COVID-19 vaccines were also run through all three drug trial phases and RCTs
before their release to the general public. The Pfizer-BioNTech COVID-19 mRNA
vaccine (BNT162b2) candidate trial began in July 2020 across multiple
countries. Healthy volunteers aged 16 years and older were randomized to
receive either the trial vaccine or a placebo, and were followed up for the
primary efficacy endpoint, defined as confirmed COVID-19 occurring at least 7
days after the second dose, and also for safety. A total of 43,548 individuals,
with a roughly equal male-to-female distribution and representative of diverse
racial groups, were enrolled in the study till January 2021. Early results
showed 95% efficacy in preventing COVID-19 with a good safety profile, and the
vaccine therefore received clearance from the UK regulatory agency and the FDA
for emergency use in December 2020. A six-month follow-up study of the vaccine
showed a gradual decline in efficacy, but it still offered 96.7% protection
against severe COVID-19 disease.

The most outstanding feature of the trials conducted
during COVID-19 was the swiftness with which they were planned, patients
enrolled and data released, without compromising on quality and safety. Part of
this was because of the huge number of patients available for trials owing to
the rapid spread of the pandemic, as well as the digitization of medical
records, quick bureaucratic clearances driven by public pressure on
governments, and, clearly, the dedication of the researchers involved. But the
major reason for the swift conduct of the studies, one that cannot be
overlooked, is the experience gained from conducting numerous randomized
controlled trials over many decades.
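For readers curious how an efficacy figure like the 95% quoted above is arrived at, it is essentially a comparison of attack rates in the two arms. The case counts below are invented for illustration and are not the actual BNT162b2 trial numbers:

```python
# Hypothetical counts (not the actual trial data)
vaccine_cases, vaccine_n = 9, 21700     # confirmed COVID-19 cases / participants in the vaccine arm
placebo_cases, placebo_n = 180, 21700   # confirmed COVID-19 cases / participants in the placebo arm

attack_rate_vaccine = vaccine_cases / vaccine_n
attack_rate_placebo = placebo_cases / placebo_n

# Vaccine efficacy = 1 - (attack rate in the vaccinated / attack rate in the placebo group)
efficacy = 1 - attack_rate_vaccine / attack_rate_placebo
print(f"Vaccine efficacy: {efficacy:.1%}")  # 95.0% with these made-up numbers
```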
Modern medicine in the age of misinformation, pseudo-science, anti-science and the future
Curiosity and scepticism are essential for
the progress of science. At the same time, science is also a discipline: merely
having curiosity and raising doubts, without following scrupulous scientific
methodology to find answers, will lead to erroneous conclusions.
Image by Efbrazil, CC BY-SA 4.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=102392470
Unfortunately, in the present times, it is
not uncommon to find people short-circuiting the well laid-out scientific
methodology, but nevertheless claiming their erroneous conclusions to be
scientifically true, either to support their own beliefs and agendas, or for
monetary profit, or even to gain instant fame. This is termed pseudo-science.
Oxford Reference defines pseudo-science as “theories, ideas, or explanations
that are represented as scientific but that are not derived from science or the
scientific method…is usually nothing more than a fact, belief or opinion that
is falsely presented as a valid scientific theory or fact…”.
Anti-science is a more dangerous trend that is fast catching on across the
world. It is the deliberate and complete rejection of science and scientific
methods, replacing them with dubious theories of its proponents' own making.
So, while pseudo-science makes a pretence of following science, anti-science
drops that pretence completely, though it often serves purposes similar to
those of pseudo-science. The widespread availability of the internet and social
media has only fuelled this misinformation malady. Both these trends, although
neither new nor confined to any one country, were visible among the
anti-vaxxers and the no-mask and no-social-distancing protestors in the United
States during the COVID-19 pandemic. In India too, several pseudo-scientific
prevention and treatment strategies exploded across the internet and media in
the wake of COVID-19, jeopardizing the lives of gullible people.
It can be argued here that there is scope
for manipulation of even scientific studies at every level, whether by
profit-seeking companies, researchers under pressure, or even governments, for
self-serving purposes. Modern history certainly has enough such dark examples.
But the review of studies by experts in the field to check their soundness
before publication, termed peer review, and different researchers obtaining
similar results when following the same protocol, known as reproducibility, are
a few of the in-built mechanisms which ensure that such misdemeanours are
eventually exposed. An important standard for scientific hypotheses and
theories, as given by Karl Popper, is the principle of falsifiability, which
means they should be open to being proven wrong in the light of new
observations and evidence. Classic examples are Newton's theory of gravity
being superseded by Einstein's theory of relativity, and the circular
orbits of planets giving way to elliptical orbits. In medicine too, several
treatment methods have been refuted in the light of evidence from better
studies. Pseudo-scientific claims, on the other hand, are hostile to
questioning, testing or counter evidence. They are usually backed by anecdotal
evidence, or by random findings from obscure science journals, are full of
generalizations and vague jargon, and are presented as peerless truths in
emotionally stirring or sensational formats. Science is open to scrutiny and
seeks the betterment of existing knowledge and theories, whereas pseudo-science
and anti-science seek conformity. Similarly, as opposed to scientific medicine,
proponents of pseudo-scientific medicine give tall assurances and guarantees of
curing a particular ailment, bordering on miracles. No information is shared
about the potential harms of the treatment modality. Oftentimes, such
treatments and diet plans are devised by people projecting themselves as promoters
of traditional systems of medicine, or by celebrities, influencers and
self-proclaimed practitioners with no credentials or acceptable training in
health sciences whatsoever. Instead of sharing the information on accepted
scientific platforms, pseudo-science information is shared as advertisements in
print or television media, or now more frequently on social media, and almost
always with a rider that there is a conspiracy afloat within the regular
scientific community to suppress their findings. The perceived effects seen by
patients using those medicines are often because the disease was self-limiting
to start with, as in the case of common colds and seasonal viral fevers, or
because of the well-known placebo effect: the medicine works because the
patient believes and wants it to work. That is also the reason why, in a
randomized controlled trial, a new drug is compared with a placebo as control
and the study is blinded. Although the placebo effect may help in certain
illnesses such as stress-induced non-specific pains, more worryingly,
treatments which promise to reverse and completely cure chronic diseases and
cancers do nothing more than give false hope to patients and family members,
and delay the use of more effective modern medicine treatments while the
underlying ailment continues to grow. The compositions of certain
pseudo-scientific preparations are
not disclosed by their proponents, but have been found to contain several known
toxins inducing an array of damaging side effects, which are neither
communicated to the patients nor recorded. Pseudo-scientific practices also
thrive where modern medicine supposedly doesn't offer a cure, such as for
metastatic cancers and certain neuro-psychiatric disorders. While modern
medicine honestly acknowledges its inability to cure certain conditions at
present, those wishing to try newer treatments in the hope of recovery are
given an informed option of enrolment in randomized clinical trials, which also
helps further science for the betterment of humanity. Otherwise, they are
directed towards rehabilitative, palliative and hospice care, to ease their
suffering. It is clearly unreasonable to expect lay people to go through all
the studies and assess their strengths and weaknesses before choosing between
different therapies, but cross-checking information from multiple sources,
looking at the qualifications and expertise of the person sharing the
information, having a scientific temperament, and understanding the principles
of the scientific method as elucidated above can help keep us away from the
harms of misinformation.
The practice of modern medicine is poised
only to grow with further advancements in science and technology. With
information now available on the molecular mechanisms underlying diseases, the
genetic make-up of individuals, and their complex interplay with lifestyle and
environment, medicine is moving from a 'one size fits all' approach to more
individualized treatments, termed precision medicine. As more data becomes
available, therapies may become more targeted. Artificial Intelligence (AI) is
being seen as the next big frontier in human progress and in the practice of
medicine. With its capability to make sense of vast amounts of data, and now
even of human language, diagnostics are touted to become more accurate. The FDA
has already approved several AI-Machine Learning (AI-ML) tools for the
diagnosis of specific conditions, and many more are in the pipeline. Generative
AI models like ChatGPT may be used to reduce the time-consuming paperwork of
physicians and nurses. Along with that, the conduct of clinical trials is also
predicted to be planned and directed by AI, thereby reducing costs, time and
biases. The fields of genetics and genomics are also going to see a
transformation through the use of AI-ML tools for data analysis, possibly
leading to true precision medicine.
Certain shortcomings of modern medicine
put forth by its constructive critics are its impersonal approach to treatment
and its over-mechanization. Physician-patient interaction time has reduced
overall. The value of conversing with patients and knowing them as people, and
the importance of the 'healing touch of the physician' as part of the physical
exam, are qualities overshadowed, in the heat of getting the diagnosis right,
by an ever-expanding repertoire of advanced investigations. The multiplication
of trials has led to the dishing out of standardized treatments, overlooking
individual patient needs. These failings are recognized by many even within
modern medicine circles, and corrective measures are required. Only time will
tell whether newer technologies like AI will help bring the human side of
medicine back, or whether they will end up completely changing the practice of
medicine as we know it. But moving away from the clearly laid-out scientific
methods of modern medicine, and from the immense knowledge gained over many
years of research, can only be at our own peril.
“Perhaps in the future we can imagine a
doctor who doesn’t have to take a careful history, feel the contours of your
pulse, ask questions about your ancestors, inquire about your recent trip to a
new planetary system, or watch the rhythm of your gait as you walk out of the
office. Perhaps all the uncertain, unquantifiable, inchoate priors – inferences,
as I've called them loosely – will become obsolete. But by then, medicine will
have changed. We will be orbiting some new world, and we’ll have to learn new laws
for medicine.” – Siddhartha Mukherjee in ‘The Laws of Medicine’.