Statistical methods date back at least to the 5th century BC. Some scholars pinpoint the origin of statistics to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt.[7] Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and the natural and social sciences.

Its mathematical foundations were laid in the 17th century with the development of probability theory by Blaise Pascal and Pierre de Fermat. Mathematical probability theory arose from the study of games of chance, although the concept of probability had already been examined in medieval law and by philosophers such as Juan Caramuel.[8] The method of least squares was first described by Adrien-Marie Legendre in 1805.

The modern field of statistics emerged in the late 19th and early 20th century in three stages.[9] The first wave, at the turn of the century, was led by the work of Sir Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis not just in science, but in industry and politics as well. Galton's contributions to the field included introducing the concepts of standard deviation, correlation, and regression, and the application of these methods to the study of the variety of human characteristics such as height, weight, and eyelash length.[10] Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment,[11] the method of moments for the fitting of distributions to samples, and the Pearson system of continuous curves, among many other things.[12] Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biometry, and the latter founded the world's first university statistics department at University College London.[13]

The second wave of the 1910s and 1920s was initiated by William Gosset, and reached its culmination in the insights of Sir Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his seminal 1918 paper The Correlation between Relatives on the Supposition of Mendelian Inheritance, which was the first to use the statistical term variance, and his classic 1925 work Statistical Methods for Research Workers. He developed rigorous experimental models and also originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator, and Fisher information.[14]

The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test, and confidence intervals. In 1934, Jerzy Neyman showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling.[15]

Today, statistical methods are applied in all fields that involve decision making, both for drawing accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that would be impractical to perform manually.