In the early 20th century data analysis was constrained by computability. Calculations were performed by hand, placing real practical limits on the types of problems that were tractable. Salsburg (2002) provides a calculation showing that at least 8 months of 12-hour days would have been required for R. A. Fisher to have produced the tables in his "Studies in Crop Variation I" (Fisher 1921) with the mechanical means at his disposal. It is hardly surprising that the emphasis during this period remained on linear models: problems soluble by ordinary least squares with the tools at hand. It was not until the 1960s that nonlinear regression began to appear regularly in the literature, and it is no accident that this coincided with the appearance of machines able to automate iterative calculations; the heavier computational burden had previously been insurmountable. But even after the advent of early computers, great emphasis was placed on the development of algorithms which could make efficient use of limited hardware resources, since processors were slow and memory was scarce. Research into algorithms became synonymous with efficiency and the attendant \(O\)-notation. The Fast Fourier Transform of Cooley and Tukey (1965) provides the archetypal example of the era; the explicit reference to speed in the title underscores the imperative.

In the early 21st century the situation has improved markedly. Computing power is cheap and relatively abundant, and software is designed with re-use of objects and systems integration in mind. There has been a co-evolution of research into modelling methods. Modelling frameworks have diversified and are now capable of representing a much broader range of observable phenomena. Informed by Tukey's observation "Far better an approximate answer to the right question ... than an exact answer to the wrong question" (Tukey 1962), we build models which more accurately reflect our understanding of reality. Increasingly we are asking the right questions.

Indeed, since the 1970s there has been rapid development in methods which extend the general linear model, stimulated by the introduction of Generalized Linear Models (GLMs) (McCullagh and Nelder 1989). These allow the response to be modelled using alternatives to the Gaussian distribution, and the conditional expectation \(\mu\) to be related to the covariates via a link function \(g(\cdot)\), so that \(g(\mu) = X\beta\), rather than through a direct linear relationship. The general linear model can then be seen as a GLM with an identity link function and a normally distributed response. The principal appeal of the framework is that the adoption of the exponential family as its basis guarantees that the likelihood is log-concave and unimodal, so that estimation is straightforward. The adoption of GLMs has greatly extended the realm of linear models and vastly enhanced the scope of linear statistical modelling applications. What remains conspicuously absent is concurrent progress of a similar order for nonlinear models, where the properties of the solution surface are more complex.

The adoption of Markov Chain Monte Carlo (MCMC) techniques by the statistical community represents a significant new chapter in stochastic modelling. MCMC methods provide a flexible and powerful base from which realistic stochastic models can be built. They are particularly important because models developed in this framework need not have analytical tractability.
Provided that the relationships between the component parts are specified, samples can be obtained from the density of the resulting model, allowing estimation and inference from non-standard distributions. This is an enormously empowering development. Most importantly, it promotes the construction of more realistic models. Data no longer need be forced into overly simplistic models simply because those are the only soluble forms; models can now be developed to fit the available data. The development of MCMC tools has fundamentally altered the way that statisticians go about their business. But it is not merely statisticians who benefit. Greater accessibility of realistic modelling methods has led to statistical modelling taking a firm hold in primary research across a wide range of applied disciplines. The exchange of purely deterministic models in favour of more realistic stochastic models represents a paradigm shift in the foundations of science.

This thesis considers the application of MCMC methods to problems which extend the general linear model in various nonlinear ways. The framework in which these applications are developed is exclusively Bayesian, though the methods themselves are equally applicable, if perhaps less commonly used, in a likelihood context. Geyer (1995), Diebolt and Ip (1995), and the references therein provide details of non-Bayesian applications of MCMC.
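To make the point about sampling from non-standard densities concrete, the following is a minimal sketch of a random-walk Metropolis sampler, one of the simplest MCMC algorithms. It is not the machinery developed in this thesis; the target density, function names, and tuning values are all illustrative assumptions. The only requirement is an unnormalised log-density, so the normalising constant never needs to be computed, which is precisely why analytical tractability is not required.

import math
import random


def log_target(theta):
    # Unnormalised log-density of a deliberately non-standard target:
    # a two-component Gaussian mixture with unequal weights and scales.
    comp1 = 0.3 * math.exp(-0.5 * ((theta - 1.0) / 0.5) ** 2) / 0.5
    comp2 = 0.7 * math.exp(-0.5 * ((theta + 2.0) / 1.5) ** 2) / 1.5
    return math.log(comp1 + comp2)


def random_walk_metropolis(log_target, theta0, n_samples, proposal_sd=1.0, seed=1):
    # Propose theta' ~ Normal(theta, proposal_sd^2) and accept with
    # probability min(1, pi(theta') / pi(theta)); the unknown normalising
    # constant of pi cancels in the ratio.
    rng = random.Random(seed)
    theta = theta0
    current_lp = log_target(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, proposal_sd)
        proposal_lp = log_target(proposal)
        log_ratio = proposal_lp - current_lp
        if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
            theta, current_lp = proposal, proposal_lp
        samples.append(theta)
    return samples


draws = random_walk_metropolis(log_target, theta0=0.0, n_samples=20000)
kept = draws[5000:]  # discard an initial burn-in period
print("estimated mean of the target:", sum(kept) / len(kept))

In a Bayesian setting the log-target would simply be the sum of the log-likelihood and log-prior of the model at hand; the sampler itself is unchanged.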