Tuesday, 8 October 2019



How does Galton compare with Adolphe Quetelet, Florence Nightingale and John Snow, all truly magnificent statistical pioneers of the nineteenth century, and with Augustus De Morgan, the leading Bayesian, who was a professor of mathematics at London University and then University College London? Should he be similarly feted, despite his very harmful contributions to eugenics?
Yes!! But primarily because of his discovery, at age 65, of "regression towards the mean".

Sir Francis Galton was a high-class 'operator': a highly political, class-conscious, racist, ableist eugenicist; a highly inventive polymath; an able mathematician; and a prolific but not completely original statistician, who used his ill-gotten family wealth to finance much of his success. The source of his scientific expertise isn't completely obvious, given his educational background, and his attitudes towards women were strange and indeed quite appalling.

    Galton's 1869 book Hereditary Genius contains some dubiously analysed data from intelligence tests. The book he published in 1883 consisted largely of very inventive pseudo-scientific rants. Indeed, his ideas/ravings on eugenics were not taken seriously at the time because of a lack of data. He began to acquire more scientific respectability at the age of 62, when he opened an Anthropometric Laboratory in London, but his ideas on eugenics weren't taken seriously in Britain until the aftermath of the Boer War, when he was about eighty and after Karl Pearson had analysed lots of data on the subject. Galton wasn't knighted until 1909.

 While Galton popularised regression, it had been discovered much earlier by Legendre and Gauss. While he had a fetish for the normal distribution, which he thus renamed, it had been discovered by Gauss and Laplace. His developments of the properties of the bivariate normal distribution follow the work of Gauss. His Bayesian inverse probability follows Laplace and De Morgan. It isn't obvious whether he discovered the conjugate Bayesian analysis for the normal distribution, but he certainly knew about it. He also invented a device to simulate a Bayesian prior-to-posterior analysis, which mimicked modern-day acceptance sampling.

Galton's work during the 1880s on psychometrics and mental testing involved many misapplications of the thin-tailed normal distribution, which led to far too many people being judged inferior, feebleminded, or mentally deficient (e.g. later by Sir Cyril Burt and Lionel Penrose) because of their supposedly low (white middle-class) intelligence.

For a more than favourable account of Galton's statistical contributions, see Darwin, Galton, and the Statistical Enlightenment by Stephen Stigler. Professor Stigler (personal communication) says that he regards Galton's contributions to eugenics in a much better light than I do. I have indeed detected signs of a 'Galton cult' in the Department of Statistics of the traditionally highly capitalist and colonialist University of Chicago. Steve is an Emeritus Distinguished Professor there, and a highly regarded and technically meticulous statistical historian with his own political slants.


Galton's bean board


1822 Born in Sparkbrook, Birmingham. Family (Quakers!) involved in the arms trade, the slave trade, and related banking activities

1837-39 Pupil at Birmingham General Hospital and King's College London Medical School. Had left King Edward's School, Birmingham, because of its narrow curriculum, despite showing outstanding intellectual promise.

1840-44 Studied Mathematics at Trinity College, Cambridge. Pass degree because of a nervous breakdown. Did not study for a doctorate.

1844 Joined Freemasons in Cambridge

1844 Immense legacy from his father, which was to finance all of his future exploits and left him without any need of employment by others; e.g. he never held an academic position

Mid 1840s  Visited Egypt and the Sudan

1850 Explored South-West Africa. Flawed study of the intelligence of Africans after measuring
          women's backsides. (The UCL librarian Subhadra Das has more information about Galton's attitudes towards women. Later on he apparently stalked women in London to take statistical measures of their attractiveness.)

1853 Published Tropical South Africa. Awarded the gold medal of the Royal Geographical Society.
         Published The Art of Travel (reprinted in 1974)


1869 Published Hereditary Genius

1873 Wrote infamous letter to The Times, a psychopathic racist and eugenicist rant about the inhabitants of Africa

1873 Invented the 'bean machine' (quincunx) to demonstrate the Central Limit Theorem
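For readers who like to see the mechanism, the bean machine is easy to mimic in a few lines of code. The sketch below is my own illustration, not Galton's construction: each ball deflects left or right with probability 1/2 at each row of pegs, so the bin counts follow a Binomial(n, 1/2) distribution, whose bell shape approximates the normal curve.

```python
import random

def quincunx(n_rows, n_balls, seed=0):
    """Drop balls through a Galton board: each ball deflects left or right
    with probability 1/2 at each of n_rows pegs, so its final bin is
    Binomial(n_rows, 1/2) distributed."""
    rng = random.Random(seed)
    bins = [0] * (n_rows + 1)
    for _ in range(n_balls):
        position = sum(rng.random() < 0.5 for _ in range(n_rows))
        bins[position] += 1
    return bins

bins = quincunx(n_rows=10, n_balls=100000)
# The histogram is symmetric and bell-shaped, approximating a normal
# density with mean n/2 = 5 and variance n/4 = 2.5.
print(bins)
```

With 10 rows the binomial is already visibly close to the normal curve, which is exactly the effect the physical device was built to show.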

1874 Published English Men of Science: Their Nature and Nurture

1875 He and the Rev. H.W. Watson described the dishonestly named Galton-Watson branching process in a paper in the Journal of the Royal Anthropological Institute entitled 'On the Probability of the Extinction of Families'
(I.J. Bienaymé derived the key criticality theorem 28 years before it was rediscovered, in incomplete form, by Galton and Watson)


1883 Published Inquiries into Human Faculty and its Development, in which he coined the term 'Eugenics'.

1884 Established his own Anthropometric Laboratory in London, later in Natural Science Museum in South Kensington.

1885 Presented pivotal paper (originality unclear to me) on multivariate analysis to Royal Anthropological Institute

1886 Published 'Regression towards Mediocrity in Hereditary Stature' (regression towards the mean) in the Journal of the Anthropological Institute of Great Britain and Ireland
****Doubtless his greatest work, for which he will be forever remembered.****
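Regression towards the mean is easy to demonstrate by simulation. The numbers below are my own illustrative choices (mean height 68 inches, standard deviation 2.5, parent-child correlation 0.5), not Galton's actual data: children of exceptionally tall parents are, on average, markedly closer to the population mean than their parents were.

```python
import random

random.seed(1)

# Illustrative bivariate-normal model of parent and child heights.
mu, sigma, rho = 68.0, 2.5, 0.5

parents, children = [], []
for _ in range(200000):
    p = random.gauss(mu, sigma)
    # Conditional distribution of child given parent under bivariate normality:
    # mean regresses towards mu by the factor rho.
    c = random.gauss(mu + rho * (p - mu), sigma * (1 - rho**2) ** 0.5)
    parents.append(p)
    children.append(c)

# Children of very tall parents (more than 2 sd above the mean):
tall = [c for p, c in zip(parents, children) if p > mu + 2 * sigma]
mean_tall_child = sum(tall) / len(tall)
# Their average height lies well below their parents', roughly halfway
# back towards the population mean when rho = 0.5.
print(mean_tall_child)
```

The effect is purely statistical: nothing 'acts' on the children; it is the correlation being less than one that pulls the conditional mean inward.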

1888 Claimed to have discovered correlation, first published by Auguste Bravais in 1844

1889 Published Natural Inheritance

1889 Psychometrics Institute founded in Cambridge following Galton's earlier ideas.

1892 Published Finger Prints. Note that fingerprints had previously been developed for identification purposes by Sir William Herschel in India during the 1850s, but Galton had recorded about 8000 observations in 1882, and Scotland Yard adopted the method.

1899 He and Karl Pearson met in London with the American zoologist Charles Davenport, who, thus inspired, subsequently founded the American Eugenics Record Office

1904 Delivered highly inflammatory racist lecture to the Sociological Society, attended by the German racial hygienist Alfred Ploetz, immediately prior to the German genocides of the indigenous tribes of Namibia.

1904 Founded the Galton Laboratory at University College London, at his own expense, for research into eugenics.

1907 Co-founded the Eugenics Education Society and became its first president

1909 Founded The Eugenics Review, a quarterly journal

1909 Finally knighted for his sins

1911 Died in Haslemere, Surrey. Funded the Galton Chair of Eugenics in his will, a position subsequently held at University College London by Karl Pearson, Ronald Fisher, and Lionel Penrose, who taught eugenics at UCL until at least 1965; until 1975, if we include the fourth Galton professor (by then of human genetics) Harry Harris; and until the current day, if we include some of the contributions by the UCL Department of Psychiatry. Meanwhile, the subject of economics has also been amoralised by Galtonian eugenics.

The Galton Institute (formerly the Eugenics Society and now interested in human genetics) has survived Galton to this very day, as has the abuse by maltreatment of people with mental health issues.


                                                 Pearson and Galton in 1910

This is how Scott Forster and I advised the Commission of Inquiry into the History of Eugenics at UCL during July 2019. Please click on UCL Written Contribution for the full report, and on UCL Verbal for what I said to the Committee.

Sir FRANCIS GALTON (1822-1911) spent much of his life exploring variation in human populations, and its implications. See for example his work Hereditary Genius (1869).

In 1883, Galton coined the term Eugenics. In his book Inquiries into Human Faculty and its Development, Galton called for eugenic marriages, promoting 'able' married couples to have children, and advocated endowments for these couples (p. 214). As indicated later, some of the material in this book (on criminals and insanity) would appear to amount to pseudo-science.

Following his publication of Hereditary Genius, Galton's “quest for data and accountability” i would involve treating human beings as open to classification and categorisation in the same way as plants or animals. Playing with themes of 'degeneration' and 'contagion', Galton called for restrictions on those he deemed genetically inferior.

According to Francis Galton, British Psychologist ii, which references Jensen (2002), Simonton (2003), and Irvine (1986):
“It seemed obvious and even unarguable to Galton that, from a eugenic viewpoint, superior mental and behavioural capacities, as well as physical health, are advantageous, not only to an individual but for the well-being of society as a whole (Jensen,2002). Within this mindset led the inevitable value-laden categorization or ranking of populations based on measurable traits and natural ability”.
The article continues that “It followed that Galton estimated from his field observations in Africa that the African people were 'two grades' below Anglo-Saxons' position in the normal frequency distribution of general mental ability, which gave claim to the scientific validation of Africans' mental inferiority compared with Anglo-Saxons (Jensen, 2002); findings that continued to spark controversy in academia today”.

This proves that Galton was a racist in the worst possible terms. He imposed his white supremacist measures of mental ability on Africans and used statistical 'science' to justify British Colonialism.

Furthermore, "Galton was the first to 'demonstrate' that the Laplace-Gauss distribution or the 'normal distribution' could be applied to human psychological attributes, including intelligence (Simonton, 2003). From this finding, he coined the use of percentile scores for measuring relative standing on various measurements in relation to the normal distribution (Jensen, 2002). He even established the world's first mental testing centre, in which a person could take a battery of tests and receive a written report of the results (P. Irvine, 1986)." Given the dubious nature of the statistical methodology (see below), this method of psycho-analysis would appear to be open to question.

All of this was played out against a growing recognition of the rottenness of an increasingly industrialized and urbanized Britain. See Andrew Mearns's 1883 publication The Bitter Cry of Outcast London and the 1890 appearance of William Booth's In Darkest England and the Way Out.

Eugenics was not universally popular in its heyday. Early critics of eugenics included Lester Frank Ward, G.K. Chesterton (see his 1922 book Eugenics and Other Evils), Franz Boas, Halliday Sutherland, and Aldous Huxley. The Liberal MP Josiah Wedgwood would speak against the 1913 Mental Deficiency Act. This Act, though containing elements of welfare state provision, also made judgements on mental abilities as if they were fixed and biological rather than the result of material social conditions.

The early eugenicists cannot therefore be exonerated on the grounds that their preachings were unquestioned at that time.

The Eugenics Education Society was founded in 1907 by Galton iii, who acted as its first president until his death. From 1926 the Society was renamed the Eugenics Society, and it later became the Galton Institute. (Lucy Bland and Lesley A. Hall, Oxford Handbook of the History of Eugenics, 2010, p. 214)

It has been said that Galton's “new science spread like wildfire in the UK and USA” (Grenon and Merrick, Intellectual and Developmental Difficulties, Front Public Health, 2014).

In 1907, the State of Indiana passed a law enabling the prevention of the “procreation of confirmed criminals, idiots, imbeciles and rapists” iv, which is claimed to be the world's first eugenic law v.

Galton's efforts to improve the human race by the selective breeding of those with perceived greatest talent must, at that time, have been interpreted by one and all as discriminating against those with less supposed talent. Furthermore, by setting his own standards, he tried to mould the population towards what he, a wealthy Victorian colonialist, would want it to be.

When judging the merits of different people, Galton and his followers fitted the 'Laplacian or Gaussian distribution' to observations of a large variety of measures, e.g. of mental ability. Some of his followers fitted this distribution to measures of 'inferiority' or of 'feeble-mindedness' (including 'idiocy' and 'imbecility').

Galton and Pearson had the temerity to rename this the 'Normal' distribution, even though this probability distribution is valid only infrequently in practice when modelling statistical observations. (Owing to the Central Limit Effect, the normal curve is, however, frequently accurate for describing the distributions of test statistics, though only under specific theoretical assumptions.) The reasons for using the term 'normal' would appear to be highly political: it enabled Galton and his followers to regard too many people as 'abnormal'. Galton had an obsession with the normal distribution because of the theoretically derived Central Limit Effect, and he falsely believed that a great many variables are normally distributed.

The normal distribution has a bell-shaped density with remarkably thin, symmetric tails. In practice, and as noticed by many twentieth-century statisticians, many data sets are better described by probability distributions whose densities have at least one thicker tail than the normal. For example, an individual discarded as 'mentally defective' or 'feeble-minded', because his arbitrary measure of 'feeble-mindedness' lies below the naively estimated third population percentile, might be falsely classified, since the actual third population percentile could be considerably smaller.
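The danger can be illustrated with a small simulation, using purely illustrative numbers of my own. Below I compare the empirical 3rd percentile of a normal sample with that of a heavier-tailed Student-t sample with 3 degrees of freedom: the t percentile lies considerably further out, so a cut-off estimated from a fitted normal curve would flag far more than 3% of a heavy-tailed population.

```python
import random

def t_sample(df, rng):
    """Student-t draw: standard normal divided by sqrt(chi-square/df)."""
    z = rng.gauss(0, 1)
    v = sum(rng.gauss(0, 1) ** 2 for _ in range(df))  # chi-square, df d.o.f.
    return z / (v / df) ** 0.5

rng = random.Random(0)
n = 200_000
normal = sorted(rng.gauss(0, 1) for _ in range(n))
heavy = sorted(t_sample(3, rng) for _ in range(n))

k = int(0.03 * n)   # index of the empirical 3rd percentile
cutoff = normal[k]  # naive cut-off estimated from the thin-tailed normal fit

# True 3rd percentile of the heavy-tailed population lies further out,
# and the naive normal cut-off flags far more than 3% of it.
frac_flagged = sum(x < cutoff for x in heavy) / n
print(cutoff, heavy[k], frac_flagged)
```

With these illustrative settings, the normal cut-off sits near -1.88 while the t(3) 3rd percentile sits well below it, and the normal-based threshold flags roughly twice as many individuals as intended.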

According to Bernard Norton in Karl Pearson and Statistics: The Social Origins of Scientific Innovation (Social Studies of Science, 1978, pp. 8-9),
“In the 1890s, Francis Galton was one of Britain's leading 'men of science'. As several authors have pointed out, he was a man motivated by strong eugenic views, a man whose attempts to understand human heredity were inspired by the hope of showing the dominance of nature over nurture; and this, in turn, led him to uncover certain crucial statistical notions - notably those of a distribution of variations, of correlation and of regression. Before 1900, Galton was able to attract only a small following for eugenics, which remained more of a catalyst to research than a social movement. But, as several authors have noted, the events of the Boer war, coming as they did in a period occupied with a 'quest for national efficiency', were to pave the way for a strong popular interest in eugenics in the first decade of the twentieth century”.

It should be noted that statistical correlation is a very dangerous concept. In A Treatise of Human Nature, David Hume (1738-40) argued that correlation can never be used to prove causality. Moreover, statistical correlations are all too often potentially spurious vi, in the sense that further 'confounding variables' may become apparent which render any observed correlation between the two variables of interest meaningless.

For example, any supposed correlations between measures of mental ability and any other key variables, e.g. social status, are potentially spurious. In addition to abnormal behaviour and very low scores on IQ tests, eugenicists frequently linked "feeble-mindedness" to promiscuity, criminality, and social dependency.

Galton's somewhat farcical discourse on criminality and the insane vii describes numerous very subjective supposed correlations, many of which should be treated with a pinch of salt. This has the appearance of pseudo-science.

According to Hailey McKinnon, Galton took Eugenics as "the science of improving stock", not only by judicious mating, but by "whatever tends to give the more suitable races or strains of blood a better chance of prevailing over the less suitable than they otherwise would have" viii.

The Liberal Welfare Reforms of 1906 to 1914 led to the beginnings of the British welfare state. Often benevolently remembered, they were pushed by Fabian eugenicists and imperialists of the Liberal Party (particularly the group known as the 'Co-efficients'), with ties to the 'National Efficiency' movement in Britain, who feared that degeneration of 'the British race' might lead to the loss of the British Empire. This fear was generated by battle losses and the large-scale rejection of potential recruits during the Second Boer War ix.

On 5 June 1873, Galton wrote an extremely racist letter to The Times of London, entitled Africa for the Chinese (see Appendix B). During this long rant, Galton poured scorn and ridicule on the 'inferior Negro race', Hindus, and Arabs.

Galton also expressed anti-semitic opinions. On the 27th of October 1884, Galton wrote to Alphonse de Candolle: “It strikes me that the Jews are specialized for a parasitical existence upon other nations, and that there is need of evidence that they are capable of fulfilling the varied duties of a civilized nation by themselves” x.
The German doctor Alfred Ploetz proposed his theory of 'racial hygiene' (Rassenhygiene; race-based eugenics) in his 1895 book Racial Hygiene Basics (Grundlinien einer Rassenhygiene).

In her book From Racism to Genocide, Anthropology in the Third Reich, Gretchen E. Schafft (2004, P.43) describes the influence of Galton on Ploetz. Ploetz attended Galton's
1904 lecture Eugenics, its Definition, Scope, and Aims.

Then in 1905, Ploetz created the German Society for Racial Hygiene xi, which was renamed the International Society for Racial Hygiene in 1907. This Berlin-based society maintained good relations with Francis Galton and his British Eugenics Society, and with other eugenics societies around the world. How much Galton and Pearson influenced what happened next is open to question. Ploetz would go on to advise the Nazis about racial policy xii.

Sunday, 6 October 2019



                                                          Tom Leonard October 2019


More and more clinical trials for medical and psychiatric medications are failing to adhere to key standards, i.e. randomization of the subjects and replication of the experiment to reduce the effects of unlucky randomization (see Fundamentals of Clinical Trials by Friedman et al, Springer, 1981). However, without such standards, any statistical conclusions are at best highly subjective and at worst totally misleading. An incisive comment by Dr. Ewart Shaw, whom I met a couple of years ago at the University of Warwick, is appended, and Professor Gillian Raab of Edinburgh University is preparing a paper on further justifications of randomisation in clinical trials.

***While there are some practical problems in the implementation of randomisation, it should always be attempted in order to avoid functionally useless and possibly harmful experiments on multiple human subjects.***

EXAMPLE: While the highly questionable CATIE study (Lieberman et al, 2005; Shortreed and Moodie, 2012) randomly assigned 1493 patients with chronic schizophrenia to different, very harmful atypical anti-psychotics, 74% of these patients discontinued their drugs within 18 months of treatment. Many of these were harmed by side effects, and others by receiving inappropriate treatment for their condition. If the patients hadn't been assigned at random, then the ill-gotten conclusions would have been totally useless. As it was, the 1493 patients in the study were not chosen at random from any large 'population of interest'. Therefore, the results obtained by the multitudinous co-authors were ungeneralisable, in the sense that they were effectively irrelevant to any large population.

      Shortreed and Moodie later used these results while attempting to justify a horribly irresponsible 'optimal scheme' for switching patients between different anti-psychotics and different sets of potentially harmful side effects.


Professor Jeffrey Lieberman, erstwhile Chairman of Psychiatry,  Columbia University, a place where Lucifer lingers.  

I have sent a copy of this article to the editor of a forthcoming Springer volume where, I understand, some of the contributing authors, guided by wishy-washy philosophies and offbeat philosophers, are still advocating Lindley-Novick exchangeability assumptions as an excuse to avoid randomization. I think that this is very misleading for practitioners, and puts human lives at risk.

       I CHALLENGE THE EDITOR (who is a leading medical statistician) to respond as a comment on this blogpost, and to say why a volume with several harmful chapters should be published at all.




                                     1. Single Sample of Binary Observations

Suppose that it is required to estimate the proportion θ of the N people in a population S who suffer from a disease D, and that a sample of size n is selected for this purpose. For i = 1,...,n, let
                                  x(i) = 1 if the ith person suffers from disease D,
                                         = 0 otherwise.

      If EITHER the n people have been chosen at random WITH replacement from S
OR the n people have been chosen at random WITHOUT replacement from S, and N is large
compared with n,

      THEN the binary responses x(1),...,x(n) may be taken to be independent with common expectation θ.

In this case, the observed frequency

                                  y = x(1) + x(2) + ... + x(n)

possesses a binomial distribution with probability θ and sample size n. Consequently the sample proportion z = y/n is an unbiased estimator of θ with variance θ(1-θ)/n.

For example, when the true θ is 0.01 and n=10000, z has expectation 0.01 and standard deviation sqrt(0.01×0.99/10000), i.e. about 0.001.

If θ is unknown, n=10,000, and we observe z=0.0102, then this is an unbiased estimate of θ with an estimated standard error of about 0.001. We can therefore be approximately 95.44% confident that the true θ lies in the interval (0.0082, 0.0122).

Note that such very large sample sizes are needed to evaluate population proportions to reasonable degrees of accuracy, even when the data result from a controlled, randomized experiment.
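The binomial standard-error arithmetic can be checked directly in a few lines:

```python
# Standard binomial calculation: with n = 10,000 random draws and an
# observed proportion z = 0.0102, the estimated standard error is
# sqrt(z*(1-z)/n), and an approximate 95.44% interval is z +/- 2 se.
n = 10_000
z = 0.0102
se = (z * (1 - z) / n) ** 0.5
lower, upper = z - 2 * se, z + 2 * se
print(round(se, 4), (round(lower, 4), round(upper, 4)))
```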

Unfortunately, if we have purely observational data, where no randomization is completed at the design stage, then there are no grounds, without further assumption, for taking the binary responses to be independent or indeed to possess a common mean. Indeed, the 'obvious' assumption that y possesses a binomial distribution would be at best highly subjective and at worst misleading, as would any assumptions about the expectation and variance of z. The binomial assumption is nevertheless all too frequently made in practice, often on grounds of (simple-minded!) 'simplicity'. A suitably parametrised hypergeometric distribution for y when the sampling is without replacement again demands that the sample be chosen at random, in which case it is exact rather than approximate.

One possibility when analysing non-randomized data would be to follow Lindley and Novick (1981) by taking x(1),...,x(n) to be exchangeable in the sense of De Finetti (1937), i.e. in formal terms by assuming that the joint distribution of these binary responses is invariant under any permutation of the suffices. In subjective terms, you could make this assumption a priori if you feel, before viewing the n observations, that you have a symmetry of information about these binary responses.

Exchangeability implies that each x(i) possesses a binary distribution with common expectation θ, and hence that z is an unbiased estimator of this expectation, which by a conceptual leap could be taken to be the unknown population proportion. However, it does NOT imply that the binary responses are independent, or that y possesses a binomial distribution (or a hypergeometric distribution when the sampling is without replacement). For example, no estimable standard deviation of z is obviously available.

Suppose, in conceptual terms, that we would be prepared to assume exchangeability of the binary responses whatever the value of n. (For full mathematical rigour we would need to address the situation where the sampling is with replacement, so that arbitrarily large values of n can be considered. If the sampling is instead without replacement, the two-stage structure described below will be completely general whenever the x(i) are positively correlated.)

De Finetti's much-celebrated exchangeability theorem then tells us that the joint distribution of the binary responses must be describable in the following two stages, for some choice of the c.d.f. F:

Stage 1: Conditional on the value of a random variable u on the unit interval (0,1), the responses x(1),...,x(n) possess independent binary distributions with common expectation u.

Stage 2: The random variable u possesses c.d.f. F and expectation θ.

For example, let F denote the c.d.f. of a beta distribution with parameters
α = γθ and β = γ(1-θ), and hence with mean θ and variance θ(1-θ)/(γ+1).

Then the observed y possesses a beta-binomial distribution with mean nθ and variance var(y) = nτθ(1-θ), where τ = (n+γ)/(1+γ) is the overdispersion factor. (τ tends to unity as γ tends to infinity, in which case u has mean θ and zero variance, corresponding to the previous binomial assumption for y.)

Consequently, the unbiased estimator z has mean θ and variance τθ(1-θ)/n. A large value of τ would greatly inflate this variance, together with any estimated standard error for z.
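The two-stage structure and the overdispersion formula can be checked by simulation. The parameter values below (θ=0.3, γ=5, n=50) are my own illustrative choices, not values from the text:

```python
import random

def beta_binomial_draw(n, alpha, beta, rng):
    """Two-stage De Finetti draw: u ~ Beta(alpha, beta), then
    y | u is the sum of n independent Bernoulli(u) responses."""
    u = rng.betavariate(alpha, beta)
    return sum(rng.random() < u for _ in range(n))

# Illustrative values: mean theta, 'prior sample size' gamma.
theta, gamma, n = 0.3, 5.0, 50
alpha, beta = gamma * theta, gamma * (1 - theta)

rng = random.Random(2)
draws = [beta_binomial_draw(n, alpha, beta, rng) for _ in range(20000)]
mean_y = sum(draws) / len(draws)
var_y = sum((y - mean_y) ** 2 for y in draws) / len(draws)

tau = (n + gamma) / (1 + gamma)  # overdispersion factor
# Empirically, mean_y is close to n*theta and var_y is close to
# n*tau*theta*(1-theta), far larger than the binomial n*theta*(1-theta).
print(mean_y, var_y, n * tau * theta * (1 - theta))
```

With these settings τ is about 9.2, so the variance of y is inflated roughly ninefold relative to the naive binomial assumption, which is exactly why an unexamined binomial standard error can be badly misleading.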

Unfortunately, F is not identifiable, beyond its mean θ, from the current data set. For example, if F is taken to be the c.d.f. of a beta distribution, then the overdispersion factor τ is not identifiable from the current data, whatever the sample size!

This is because the joint probability mass function of the binary responses, unconditional on u,
depends only on the observed frequency y, the sample size n and the unknown c.d.f. F. When viewed as a functional of F, this is the likelihood functional of F, given the data, and therefore summarises the
information in the data about F.

As the likelihood functional only depends upon the data via the one-dimensional statistic y, nothing about F apart from the mean θ can be estimated from the data. (The likelihood can be expressed as an expectation, with respect to u given F, of a function of u and y.)

Consequently, the Lindley-Novick exchangeability assumption is of very limited use indeed. While it justifies unbiased estimation of the population proportion, it does not justify more general inferences, unless large amounts of information (e.g. prior information) from other sources are combined with the information in the sample.

Moreover, replication does not obviously help. Suppose that we take r samples of size n from the same population S. Then, without randomisation, it would not be obvious how to justify an assumption of independence of the r samples of binary responses.

If the r×n responses are instead taken to be exchangeable, then De Finetti's theorem implies their conditional independence, but nothing more than their common mean can be estimated from the replicated data set.


                                            Dennis Lindley (my Ph.D. supervisor at UCL 1971-73)
                                            When Dennis was appointed to the Chair of Statistics at UCL
                                            in 1967, it was said that it was 'as if a Jehovah's Witness had
                                            become Pope.' One of the first papers he gave me to read was
                                            De Finetti's 1937 paper on subjective probability and
                                            exchangeability. It was like an edict from Rome.

                                                          2. Clinical Trials
     In clinical trials comparing the recovery rate for patients receiving a drug with the rate for patients receiving a placebo, some patients should be assigned at random to the treatment group and the remainder to the control group; moreover, reference is not always made, as it should be, to a larger population.

     Lindley and Novick quite amazingly claim that their particular exchangeability assumptions replace the need for randomization in clinical trials (in particular those requiring the comparison of binary observations in a treatment group with those in a control group). However, by obvious mathematical extensions of the arguments of Section 1 (which now take N to denote the total number of patients participating in the trial), no valid statistical inferences can be drawn from such trials without further strong assumptions. Similar arguments hold for more complex clinical trials.

   Moreover, Lindley and Novick claim that subjective assumptions of exchangeability can be used to resolve the classical Simpson's paradox/confounding variable problem in clinical trials. This is blatantly untrue in any objective sense. For further discussion of Simpson's paradox in this context, see Leonard (1999, Ch. 3), where the paradox and its resolution by randomization are described in detail via a three-directional approach.
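For readers unfamiliar with the paradox, here is a minimal numerical instance, using the oft-quoted kidney-stone treatment counts (Charig et al., 1986): treatment A does better within each stone-size stratum, yet worse in the pooled table, because stone size confounded the non-randomized treatment assignment.

```python
# Recovered/total counts by stratum and treatment arm.
small = {"A": (81, 87),   "B": (234, 270)}  # A: ~93% vs B: ~87%
large = {"A": (192, 263), "B": (55, 80)}    # A: ~73% vs B: ~69%

def rate(cell):
    recovered, total = cell
    return recovered / total

# Treatment A wins within EACH stratum...
for stratum in (small, large):
    assert rate(stratum["A"]) > rate(stratum["B"])

# ...yet loses when the strata are pooled, because severe (large-stone)
# cases were disproportionately given treatment A.
pooled_A = (81 + 192) / (87 + 263)   # ~0.78
pooled_B = (234 + 55) / (270 + 80)   # ~0.83: the comparison reverses
print(pooled_A, pooled_B)
```

Randomizing the treatment assignment breaks the dependence between stone size and arm, so the pooled comparison can no longer be reversed by the confounder.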


                                                                       Ewart Shaw

                                                     Comment from Dr. Ewart Shaw

I agree completely (also about the more general case). I discussed this briefly with Dennis (Lindley) over thirty years ago, mainly saying that because of the financial and other pressures on researchers, and the scope for unintentional or intentional bias, I couldn't trust any non-randomised trial, and would hope that those responsible for making possibly far-reaching decisions based on the trial's results wouldn't trust it either, no matter how clever the model and well-intentioned the researchers. So the researchers would have carried out a functionally useless experiment on human subjects, which is simply immoral. I'm not sure how convinced Dennis was by my arguments!


         I would like to thank Gillian Raab for recent discussions on this topic. Gillian is working on other justifications of randomisation in clinical trials. She spent part of her career working with Professor David Finney, who was famous for developing systems which sought to ensure the safety of drugs.

Thursday, 3 October 2019



I am concerned that neurotoxic psychiatric medications may cause changes in gene expression which manifest themselves in all of the well-documented damaging physical side effects. I am not myself by any means an expert in genetics, so please read on and decide for yourselves!

                                                                  EPIGENETICS (FUNDAMENTALS)

  QUOTE: Epigenetics is the study of heritable changes in gene expression (active versus inactive genes) that do not involve changes to the underlying DNA sequence — a change in phenotype without a change in genotype — which in turn affects how cells read the genes. Epigenetic change is a regular and natural occurrence but can also be influenced by several factors including age, the environment/lifestyle, and disease state. Epigenetic modifications can manifest as commonly as the manner in which cells terminally differentiate to end up as skin cells, liver cells, brain cells, etc. Or, epigenetic change can have more damaging effects that can result in diseases like cancer.

                                                                 DNA IS NOT DESTINY


DNA Is Not Destiny: The New Science of Epigenetics

Discoveries in epigenetics are rewriting the rules of disease, heredity, and identity.

By Ethan Watters | Wednesday, November 22, 2006
With no more than a change in diet, laboratory agouti mice (left) were prompted to give birth to young (right) that differed markedly in appearance and disease susceptibility.
Back in 2000, Randy Jirtle, a professor of radiation oncology at Duke University, and his postdoctoral student Robert Waterland designed a groundbreaking genetic experiment that was simplicity itself. They started with pairs of fat yellow mice known to scientists as agouti mice, so called because they carry a particular gene—the agouti gene—that in addition to making the rodents ravenous and yellow renders them prone to cancer and diabetes. Jirtle and Waterland set about to see if they could change the unfortunate genetic legacy of these little creatures.