EcSocMan
This section collects information on articles in economics, sociology, and management; in many cases the full texts of the articles are provided.

Journal of the American Statistical Association

Published on the portal: 23-04-2003
Theodore W. Anderson, Donald A. Darling Journal of the American Statistical Association. 1954.  Vol. 49. No. 268. P. 765-769. 
Some (large sample) significance points are tabulated for a distribution-free test of goodness of fit which was introduced earlier by the authors. The test, which uses the actual observations without grouping, is sensitive to discrepancies at the tails of the distribution rather than near the median. An illustration is given, using a numerical example used previously by Birnbaum in illustrating the Kolmogorov test.
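The test described here is available in SciPy as `scipy.stats.anderson`, whose tabulated large-sample critical values descend from these significance points (SciPy uses later refinements, so the numbers differ slightly from the 1954 table). A minimal sketch, using simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)

# Anderson-Darling goodness-of-fit test against the normal distribution.
# The statistic weights discrepancies in the tails more heavily than
# discrepancies near the median.
result = stats.anderson(x, dist="norm")
print(result.statistic)            # A^2 statistic
print(result.critical_values)      # large-sample significance points
print(result.significance_level)   # corresponding levels, in percent
```

The test uses the individual observations without grouping, so no binning choices are needed.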
Resource contains the full text or an excerpt; resource contains an attached file.
Published on the portal: 06-04-2004
George E.P. Box, D. A. Pierce Journal of the American Statistical Association. 1970.  Vol. 65. No. 332. P. 1509-1526. 
Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values, the resulting sequence is referred to as the "residuals," which can be regarded as estimates of the errors. If the appropriate model has been chosen, there will be zero autocorrelation in the errors. In checking adequacy of fit it is therefore logical to study the sample autocorrelation function of the residuals. For large samples the residuals from a correctly fitted model resemble very closely the true errors of the process; however, care is needed in interpreting the serial correlations of the residuals. It is shown here that the residual autocorrelations are to a close approximation representable as a singular linear transformation of the autocorrelations of the errors so that they possess a singular normal distribution. Failing to allow for this results in a tendency to overlook evidence of lack of fit. Tests of fit and diagnostic checks are devised which take these facts into account.
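The portmanteau statistic proposed in this article, $Q = n\sum_{k=1}^m r_k^2$ with $r_k$ the residual autocorrelations, is easy to compute directly; under a correctly specified ARMA(p, q) model it is approximately chi-squared with $m - p - q$ degrees of freedom. The sketch below is a plain NumPy implementation (the function name and the choice of $m = 20$ are illustrative):

```python
import numpy as np

def box_pierce(residuals, m):
    """Box-Pierce portmanteau statistic Q = n * sum_{k=1}^m r_k^2,
    where r_k are sample autocorrelations of the residuals."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.dot(e, e)
    r = np.array([np.dot(e[:-k], e[k:]) / denom for k in range(1, m + 1)])
    return n * np.sum(r ** 2)

# For white-noise residuals Q should look like a chi-squared draw.
rng = np.random.default_rng(1)
q = box_pierce(rng.normal(size=500), m=20)
```

Large values of Q signal residual autocorrelation and hence lack of fit.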
Published on the portal: 06-04-2004
David A. Dickey, Wayne A. Fuller Journal of the American Statistical Association. 1979.  Vol. 74. No. 366. P. 427-431. 
Let $n$ observations $Y_1, Y_2, \ldots, Y_n$ be generated by the model $Y_t = \rho Y_{t - 1} + e_t$, where $Y_0$ is a fixed constant and $\{e_t\}_{t = 1}^n$ is a sequence of independent normal random variables with mean 0 and variance $\sigma^2$. Properties of the regression estimator of $\rho$ are obtained under the assumption that $\rho = \pm 1$. Representations for the limit distributions of the estimator of $\rho$ and of the regression $t$ test are derived. The estimator of $\rho$ and the regression $t$ test furnish methods of testing the hypothesis that $\rho = 1$.
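The regression estimator of $\rho$ and its $t$ ratio are a few lines of NumPy. Under $\rho = 1$ the $t$ statistic follows the nonstandard limit distribution tabulated by Dickey and Fuller, not a Student $t$, so ordinary critical values must not be used. A sketch under the model above (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
e = rng.normal(size=n)
y = np.concatenate([[0.0], np.cumsum(e)])   # rho = 1: a random walk

# Regression of Y_t on Y_{t-1} without intercept, as in the model.
ylag, ycur = y[:-1], y[1:]
rho_hat = np.dot(ylag, ycur) / np.dot(ylag, ylag)

# t ratio for H0: rho = 1, to be compared with Dickey-Fuller
# critical values rather than Student-t quantiles.
resid = ycur - rho_hat * ylag
s2 = np.dot(resid, resid) / (n - 1)
t_stat = (rho_hat - 1.0) / np.sqrt(s2 / np.dot(ylag, ylag))
```

In practice the tabulated test, with intercept and trend variants, is available as `statsmodels.tsa.stattools.adfuller`.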
Resource contains the full text or an excerpt.
Published on the portal: 20-07-2004
Malay Ghosh, Narinder Nangia, Dal Ho Kim Journal of the American Statistical Association. 1996.  Vol. 91. No. 436. P. 1423-1431. 
This article develops a general methodology for small domain estimation based on data from repeated surveys. The results are directly applied to the estimation of median income of four-person families for the 50 states and the District of Columbia. These estimates are needed by the U.S. Department of Health and Human Services (HHS) to formulate its energy assistance program for low-income families. The U.S. Bureau of the Census, by an informal agreement, has provided such estimates to HHS through a linear regression methodology since the latter part of the 1970s. The current method is an empirical Bayes (EB) method that uses the Current Population Survey (CPS) estimates as well as the most recent decennial census estimates updated by the per capita income estimates of the Bureau of Economic Analysis. However, with the existing methodology, standard errors associated with these estimates are not easy to obtain. The EB estimates, when used naively, can lead to underestimation of standard errors. Moreover, because the sample estimates are collected through the CPS every year, there is a very natural time series aspect of the data that is currently ignored. We have performed a full Bayesian analysis using a hierarchical Bayes (HB) time series model. In addition to providing the median income estimates as the posterior means, we have also provided the posterior standard deviations. Included in our model is the information on the median incomes of three- and five-person families as well. In this way a multivariate HB procedure is used. The Bayesian analysis requires evaluation of high-dimensional integrals. We have overcome this problem by using the Gibbs sampling technique, which has turned out to be a very convenient tool for Monte Carlo integration. Also, we have validated our results by comparing them against the 1989 four-person median income figures obtained from the 1990 census. We used four different criteria for such comparisons.
It turns out that the estimates obtained by using a bivariate time-series model are the best overall. We use a criterion based on deviances for model selection and also provide a sensitivity analysis of the proposed hierarchical model.
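The Gibbs sampling idea used here can be illustrated on a much smaller hierarchical normal model with known sampling variances, where both full conditionals are normal and each draw is exact. This is only a toy sketch: the data values, the fixed $\tau^2$, and the flat prior on $\mu$ are illustrative assumptions, not the article's multivariate time series model.

```python
import numpy as np

# Toy hierarchical model: y_i | theta_i ~ N(theta_i, s_i^2) with known
# sampling variances, theta_i | mu ~ N(mu, tau^2), flat prior on mu.
rng = np.random.default_rng(0)
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])     # illustrative
s2 = np.array([15., 10., 16., 11., 9., 11., 10., 18.]) ** 2
tau2 = 100.0                       # held fixed for the sketch

mu = y.mean()
draws = []
for _ in range(2000):
    # theta_i | mu, y: precision-weighted combination of y_i and mu
    prec = 1.0 / s2 + 1.0 / tau2
    mean = (y / s2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | theta: normal with mean theta-bar, variance tau^2 / m
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / len(y)))
    draws.append(theta)

draws = np.array(draws[500:])      # drop burn-in
post_mean = draws.mean(axis=0)     # shrinkage estimates
post_sd = draws.std(axis=0)        # posterior standard deviations
```

As in the article, the posterior means shrink extreme direct estimates toward the overall level, and the posterior standard deviations come free with the simulation, which is exactly what naive EB point estimation fails to provide.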
Resource contains a hyperlink to a site with additional information.
Published on the portal: 19-11-2007
Andrew W. Lo Journal of the American Statistical Association. 2000.  Vol. 95. No. 450. P. 629-635. 
Ever since the publication in 1565 of Girolamo Cardano's treatise on gambling, Liber de Ludo Aleae (The Book of Games of Chance), statistics and financial markets have become inextricably linked. Over the past few decades many of these links have become part of the canon of modern finance, and it is now impossible to fully appreciate the workings of financial markets without them. This selective survey covers three of the most important ideas of finance: efficient markets, the random walk hypothesis, and derivative pricing models. Together they illustrate the enormous research opportunities that lie at the intersection of finance and statistics.
Resource contains the full text or an excerpt.
Published on the portal: 06-04-2004
Danny D. Dyer, Jerome P. Keating Journal of the American Statistical Association. 1980.  Vol. 75. No. 370. P. 313-319. 
The exact critical values for Bartlett's test for homogeneity of variances based on equal sample sizes from several normal populations are tabulated. It is also shown how these values may be used to obtain highly accurate approximations to the critical values for unequal sample sizes. An application is given that deals with the variability of log bids on a group of federal offshore oil and gas leases.
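Bartlett's statistic with a chi-squared p value is available as `scipy.stats.bartlett`; note that SciPy uses the chi-squared approximation rather than the exact critical values tabulated in this article. A minimal sketch under the null of equal variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Three normal samples of equal size with a common variance.
groups = [rng.normal(0.0, 2.0, size=25) for _ in range(3)]

# Bartlett's test for homogeneity of variances; sensitive to
# departures from normality, which motivates robust alternatives.
stat, p = stats.bartlett(*groups)
```

With equal sample sizes the exact tables apply directly; the article shows how they also yield accurate approximations for unequal sample sizes.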
Published on the portal: 05-01-2003
Morton B. Brown, Alan B. Forsythe Journal of the American Statistical Association. 1974.  Vol. 69. No. 346. P. 364-367. 
Alternative formulations of Levene's test statistic for equality of variances are found to be robust under nonnormality. These statistics use more robust estimators of central location in place of the mean. They are compared with the unmodified Levene's statistic, a jackknife procedure, and a $\chi^2$ test suggested by Layard which are all found to be less robust under nonnormality.
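The median-centered variant studied in this article is implemented in SciPy as `scipy.stats.levene` with `center='median'` (often called the Brown-Forsythe test). A sketch on heavy-tailed data, where the robustness to nonnormality matters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.standard_t(df=3, size=40)        # heavy-tailed group
b = 2.0 * rng.standard_t(df=3, size=40)  # same shape, doubled scale

# Levene's test with the median as the robust center of location,
# i.e. the Brown-Forsythe modification.
stat, p = stats.levene(a, b, center="median")
```

With `center='mean'` the same call gives the unmodified Levene statistic, whose size is less reliable under heavy tails.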
Published on the portal: 13-04-2004
Zvi Griliches, Potluri V. Rao Journal of the American Statistical Association. 1969.  Vol. 64. No. 325. P. 253-272. 
In a linear regression model, when errors are autocorrelated, several asymptotically efficient estimators of parameters have been suggested in the literature. In this paper we study their small sample efficiency using Monte Carlo methods. While none of these estimators turns out to be distinctly superior to the others over the entire range of parameters, there is a definite gain in efficiency to be had from using some two-stage procedure in the presence of moderately high levels of serial correlation in the residuals, and very little loss from using such methods when the true $\rho$ is small. Where computational costs are a consideration, a mixed strategy of switching to a second stage only if the estimated $\hat\rho$ is higher than some critical value is suggested and is shown to perform quite well over the whole parameter range.
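A representative two-stage procedure of the kind compared here is the Cochrane-Orcutt style quasi-differencing step: fit OLS, estimate $\rho$ from the residuals, then refit OLS on the transformed data. A minimal sketch (the simulated data, the single iteration, and the function name are illustrative):

```python
import numpy as np

def two_stage(y, x):
    """One quasi-differencing iteration: OLS, estimate rho from the
    residuals, then OLS on the transformed (quasi-differenced) data."""
    X = np.column_stack([np.ones_like(x), x])
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b_ols
    rho_hat = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])
    # Transform: y_t - rho*y_{t-1} = b0*(1 - rho) + b1*(x_t - rho*x_{t-1}) + eps_t.
    # Keeping the first column as (1 - rho) makes the fitted
    # coefficients estimate b0 and b1 directly.
    ys = y[1:] - rho_hat * y[:-1]
    Xs = X[1:] - rho_hat * X[:-1]
    b_gls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return b_ols, b_gls, rho_hat

# Simulated regression with AR(1) errors, rho = 0.7.
rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

b_ols, b_gls, rho_hat = two_stage(y, x)
```

The mixed strategy suggested in the abstract amounts to returning `b_ols` whenever `rho_hat` falls below some critical value and `b_gls` otherwise.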
Published on the portal: 20-07-2004
Larry S. Corder, Kenneth G. Manton Journal of the American Statistical Association. 1991.  Vol. 86. No. 414. P. 513-525. 
The rapid growth of the U.S. elderly (age 65+) and oldest-old (age 85+) populations, combined with their high per capita acute health and long-term care (LTC) service needs, raises concerns about existing health care payment systems. Adapting and designing new types of health insurance and existing health policies require accurate data on the elderly's health and functional characteristics. Strengths and weaknesses of five national health surveys in providing such data are evaluated. Methodological issues arising in surveying elderly populations and analyzing data from those surveys are discussed, with implications for designing private LTC insurance and for reducing future LTC service burden.
Resource contains a hyperlink to a site with additional information.