Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Statistics


Articles 511 - 540 of 685

Full-Text Articles in Physical Sciences and Mathematics

A Note On The Sub-Optimality Of Rank Ordering Of Objects On The Basis Of The Leading Principal Component Factor Scores, Sudhanshu K. Mishra Dec 2008


Sudhanshu K Mishra

This paper demonstrates that if we intend to optimally rank order n objects (candidates), each of which has m rank-ordered attributes or has been awarded rank scores by m evaluators, then the overall ordinal ranking of objects by the conventional principal-component-based factor scores turns out to be suboptimal. Three numerical examples are provided to show that principal-component-based rankings do not necessarily maximize the sum of squared correlation coefficients between the individual m rank-score arrays, X(n,m), and the overall rank-score array, Z(n).
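The criterion described in the abstract can be sketched in a few lines. This is a minimal illustration with invented data, not the paper's actual examples: the dimensions, random ranks, and use of the Spearman correlation as the rank-array correlation are all assumptions.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

# Hypothetical data: n objects, each given rank scores by m evaluators.
rng = np.random.default_rng(0)
n, m = 8, 4
base = rng.permutation(n).astype(float)
X = np.column_stack([rankdata(base + rng.normal(0.0, 2.0, n)) for _ in range(m)])

# Leading principal component factor scores via SVD of the centered matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = rankdata(Xc @ Vt[0])  # overall ordinal ranking from the first PC

# The criterion the note argues a PC1-based ranking need not maximize:
# the sum of squared correlations between each evaluator's ranks and Z.
criterion = sum(spearmanr(X[:, j], Z)[0] ** 2 for j in range(m))
```

A direct search over candidate orderings can then be compared against `criterion` to exhibit the suboptimality the note describes.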


Predictability In The Indian Stock Market: A Study From An Econometric Perspective., Debabrata Mukhopadhyay Dr. Dec 2008


Doctoral Theses

No abstract provided.


Una Comparazione Tra Le Reti Di Amministratori Nelle Principali Societa Quotate In Italia, Francia E Gran Bretagna, Paolo Santella, Carlo Drago, Andrea Polo, Enrico Gagliardi Nov 2008


Carlo Drago

The purpose of the present paper is to contribute to the empirical literature on country interlocks by illustrating and analysing the interlocking directorships among the top 40 Italian, French, and British blue chips as of December 2007 (Italy) / March 2008 (France and UK). The theoretical literature identifies two possible explanations for interlocking directorships: on the one hand, collusion among players in the same market or, more generally, among enterprises that have business relations with one another; on the other, the interest of enterprises in having bankers, suppliers, and clients on their boards so as to reduce information asymmetries. Our findings …


Student Fact Book, Fall 2008, Thirty-Second Annual Edition, Wright State University, Office Of Student Information Systems, Wright State University Oct 2008


Wright State University Student Fact Books

The student fact book has general demographic information on all students enrolled at Wright State University for Fall Quarter, 2008.


Using Local Data To Advance Quantitative Literacy, Stephen Sweet, Susanne Morgan, Danette Ifert Johnson Jul 2008


Numeracy

In this article we consider the application of local data as a means of advancing quantitative literacy. We illustrate the use of three different sources of local data: institutional data, Census data, and the National College Health Assessment survey. Our learning modules are applied in courses in sociology and communication, but the strategy of using local data can be extended beyond these disciplinary boundaries. We demonstrate how these data can be used to stimulate student interest in class discussion, advance analytic skills, and develop capacities in written and verbal communication. We conclude by considering concerns that may influence …


Small Sample Methods For The Analysis Of Clustered Binary Data, Lawrence J. Cook May 2008


All Graduate Theses and Dissertations, Spring 1920 to Summer 2023

There are several solutions for the analysis of clustered binary data. The two most common tools in use today, generalized estimating equations and random effects or mixed models, rely heavily on asymptotic theory; in many situations, however, such as small or sparse samples, asymptotic assumptions may not be met. For this reason we explore the utility of the quadratic exponential model and conditional analysis to estimate the effect size of a trend parameter in small-sample and sparse-data settings. Further, we explore the computational efficiency of two methods for conducting conditional analysis, the network algorithm and Markov chain Monte …


Docketology, District Courts, And Doctrine, David A. Hoffman, Alan J. Izenman, Jeffrey R. Lidicker Apr 2008


David A Hoffman

Empirical legal scholars have traditionally modeled judicial opinion writing by assuming that judges act rationally, seeking to maximize their influence by writing opinions in politically important cases. Support for this hypothesis has come from reviews of published opinions, which find that civil rights and other "hot" topics are more likely to be discussed than other issues. This orthodoxy comforts consumers of legal opinions, because it suggests that opinions are largely representative of judicial work. The orthodoxy is substantively and methodologically flawed. This paper starts by assuming that judges are generally risk averse with respect to reversal, and that they provide opinions when they believe that …


How Do You Interpret A Confidence Interval?, Paul Savory Jan 2008


Industrial and Management Systems Engineering: Instructional Materials

A confidence interval (CI) is an interval estimate of a population parameter. Instead of estimating the parameter by a single value (a point estimate), an interval likely to cover the parameter is developed. Many students incorrectly interpret the meaning of a confidence interval. This paper offers a quick overview of how to correctly interpret a confidence interval.
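The correct interpretation — that the interval-generating procedure covers the true parameter in roughly 95% of repeated samples, while any single interval either covers it or does not — can be demonstrated by simulation. The distribution and parameters below are hypothetical, chosen only to make the point:

```python
import numpy as np

# Repeatedly draw samples from a known population and count how often
# the 95% CI for the mean actually contains the true mean.
rng = np.random.default_rng(42)
mu, sigma, n, reps = 10.0, 2.0, 30, 2000
z = 1.96  # normal critical value for a 95% CI (sigma treated as known)

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half = z * sigma / np.sqrt(n)
    covered += (x.mean() - half <= mu <= x.mean() + half)

coverage = covered / reps  # long-run coverage, close to 0.95
```

The statement "95% confident" thus describes the procedure's long-run behavior, not a probability attached to one computed interval.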


Why Divide By (N-1) For Sample Standard Deviation?, Paul Savory Jan 2008


Industrial and Management Systems Engineering: Instructional Materials

In statistics, the sample standard deviation is a widely used measure of the variability or dispersion of a data set. The standard deviation of a data set is the square root of its variance. In calculating the sample standard deviation, the divisor is the number of samples in the data set minus one (n-1) rather than n. This often confuses students. This paper offers a quick overview of why the divisor is (n-1) for calculating the sample standard deviation.
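The standard justification — that dividing by (n-1) makes the sample variance an unbiased estimator of the population variance — can be checked numerically. This sketch uses an assumed normal population with variance 4; the parameters are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0          # true population variance
n, reps = 5, 20000    # many small samples

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
var_n  = samples.var(axis=1, ddof=0)  # divide by n (biased low)
var_n1 = samples.var(axis=1, ddof=1)  # divide by n-1 (unbiased)

# E[var_n] = sigma2 * (n-1)/n = 3.2, while E[var_n1] = sigma2 = 4.0,
# so the averages over many samples separate accordingly.
```

The bias arises because deviations are measured from the sample mean rather than the unknown population mean, which removes one degree of freedom.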


The Impact Of Directionality In Predications On Text Mining, Gondy Leroy, Marcelo Fiszman, Thomas C. Rindflesch Jan 2008


CGU Faculty Publications and Research

The number of publications in biomedicine is increasing enormously each year. To help researchers digest the information in these documents, text mining tools are being developed that present co-occurrence relations between concepts. Statistical measures are used to mine interesting subsets of relations. We demonstrate how the directionality of these relations affects interestingness. Support and confidence, simple data mining statistics, are used as proxies for interestingness metrics. We first built a test bed of 126,404 directional relations extracted from biomedical abstracts, which we represent as graphs containing a central starting concept and two rings of associated relations. We manipulated directionality in four …
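Support and confidence for directional relations are simple to compute. The toy triples below are invented for illustration and are not drawn from the paper's test bed:

```python
from collections import Counter

# Each relation is a directional triple (subject, predicate, object).
relations = [
    ("aspirin", "TREATS", "headache"),
    ("aspirin", "TREATS", "fever"),
    ("aspirin", "CAUSES", "ulcer"),
    ("ibuprofen", "TREATS", "headache"),
]
n = len(relations)
pair_counts = Counter((s, o) for s, _, o in relations)
subj_counts = Counter(s for s, _, _ in relations)

# Support: fraction of all relations linking this subject-object pair.
support = pair_counts[("aspirin", "headache")] / n
# Confidence: fraction of the subject's relations that reach this object;
# reversing direction changes the denominator, hence interestingness.
confidence = pair_counts[("aspirin", "headache")] / subj_counts["aspirin"]
```

Because confidence conditions on the starting concept, swapping the direction of a relation generally changes its confidence even though support is symmetric.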


Comparison Of Career Statistics And Season Statistics In Major League Baseball, Mark Joseph Ammons Jan 2008


Electronic Theses and Dissertations

This is a comparison of statistics for some of the best seasons and careers of players from Major League Baseball, using data collected on batting average, at-bat-to-home-run ratio, and earned run average. Two teams were created, composed of season leaders and career leaders, chosen for their outstanding offensive and pitching abilities, and were pitted against one another to determine superiority. These two teams were also compared against a team from each era of Major League Baseball. The season and career leaders challenged the 1918 Boston Red Sox, 1927 New York Yankees, 1955 Brooklyn Dodgers, 1961 New York Yankees, …


Scientifically Based Research In Quantitative Literacy: Guidelines For Building A Knowledge Base, Richard L. Scheaffer Dec 2007


Numeracy

Research in quantitative literacy (QL) is in its infancy, so now is the time to begin a regimen for healthy growth into adulthood. As a new discipline still defining itself, QL has the opportunity to build a sound infrastructure for accumulating a solid body of interconnected research that will serve the discipline well in years to come. To that end, much can be learned from recent studies of the weaknesses of mathematics education research and recommendations on how to overcome them. Mathematics education lacks a strong research foundation, one that is scientific, cumulative, interconnected, and intertwined with teaching practice. These …


Probability And Statistics For Third Through Fifth Grade Classrooms., Melissa Taylor Mckinnon Dec 2007


Electronic Theses and Dissertations

This document contains a variety of lesson plans that can be readily used by a teacher of intermediate students. The thesis contains two units in Probability and one unit in Statistics. Any educator can use this document to supplement a curriculum, with lessons ranging from vocabulary to concepts.


Biogeographical Distribution And Natural Groupings Among Five Sympatric Wild Cats In Tropical South Asia, Mohammed Ashraf Oct 2007


Mohammed Ashraf

Small to large carnivorous mammals in the tropical belt face extinction at an unprecedented rate. The vanishing of sympatric wild cats appears to be due to habitat fragmentation, human encroachment, and poaching. The focus of this study is on ecological and distributional parameters that influence the wild cat communities in tropical South Asia. The distributional data for five sympatric cats are analyzed with the aim of understanding the species-habitat association under a conceptually unified binary-matrix framework. The use of cluster analysis techniques in this ecological study has helped to reveal the natural groupings among felid guilds and their ecological resource …


Student Fact Book, Fall 2007, Thirty-First Annual Edition, Wright State University, Office Of Student Information Systems, Wright State University Oct 2007


Wright State University Student Fact Books

The student fact book has general demographic information on all students enrolled at Wright State University for Fall Quarter, 2007.


Sensitivity To Distributional Assumptions In Estimation Of The Odp Thresholding Function, Wendy Jill Bunn Jul 2007


Theses and Dissertations

Recent technological advances in fields like medicine and genomics have produced high-dimensional data sets and a challenge to correctly interpret experimental results. The Optimal Discovery Procedure (ODP) (Storey 2005) builds on the framework of Neyman-Pearson hypothesis testing to optimally test thousands of hypotheses simultaneously. The method relies on the assumption of normally distributed data; however, many applications of this method will violate this assumption. This thesis investigates the sensitivity of this method to detection of significant but nonnormal data. Overall, estimation of the ODP with the method described in this thesis is satisfactory, except when the nonnormal alternative distribution has …


Bremsstrahlung In Α Decay Reexamined, H. Boie, Heiko Scheit, Ulrich D. Jentschura, F. Kock, M. Lauer, A. I. Milstein, Ivan S. Terekhov, Dirk Schwalm Jul 2007


Physics Faculty Research & Creative Works

A high-statistics measurement of bremsstrahlung emitted in the α decay of 210Po has been performed, which allows us to follow the photon spectra up to energies of ~500 keV. The measured differential emission probability is in good agreement with our theoretical results obtained within the quasiclassical approximation as well as with the exact quantum mechanical calculation. It is shown that, due to the small effective electric dipole charge of the radiating system, a significant interference between the electric dipole and quadrupole contributions occurs, which substantially alters the angular correlation between the α particle and the emitted photon.


Minimo: A Search For Mini Proper Motion Stars In The Southern Sky, Charlie Thomas Finch May 2007


Physics and Astronomy Theses

I report 1684 new proper motion systems in the southern sky (declinations -90 degrees to -47 degrees) with 0.50 arcsec/yr > mu >= 0.18 arcsec/yr. This effort is a continuation of the SuperCOSMOS-RECONS (SCR) proper motion search to lower proper motions than reported in Hambly et al. (2004); Henry et al. (2004); Subasavage et al. (2005a,b). Distance estimates are presented for the new systems, assuming that all stars are on the main sequence. I find that 34 systems are within 25 pc, including three systems --- SCR 0838-5855, SCR 1826-6542, and SCR 0630-7643AB --- anticipated to be within 10 pc. These …


A Simulation-Based Approach For Evaluating Gene Expression Analyses, Carly Ruth Pendleton Mar 2007


Theses and Dissertations

Microarrays enable biologists to measure differences in gene expression in thousands of genes simultaneously. The data produced by microarrays present a statistical challenge, one which has been met both by new modifications of existing methods and by completely new approaches. One of the difficulties with a new approach to microarray analysis is validating the method's power and sensitivity. A simulation study could provide such validation by simulating gene expression data and investigating the method's response to changes in the data; however, due to the complex dependencies and interactions found in gene expression data, such a simulation would be complicated and …


Data Mining Of Misr Aerosol Product Using Spatial Statistics, Tao Shi, Noel A. Cressie Jan 2007


Faculty of Informatics - Papers (Archive)

In climate models, aerosol forcing is the major source of uncertainty in climate forcing over the industrial period. To reduce this uncertainty, instruments on satellites have been put in place to collect global data. However, missing and noisy observations impose considerable difficulties for scientists researching global aerosol distribution, aerosol transportation, and comparisons between satellite observations and global-climate-model outputs. In this paper, we propose a Spatial Mixed Effects (SME) statistical model to predict the missing values, denoise the observed values, and quantify the spatial-prediction uncertainties. The computations associated with the SME model scale linearly with the number of data points, …


A Comparison Of Microarray Analyses: A Mixed Models Approach Versus The Significance Analysis Of Microarrays, Nathan Wallace Stephens Nov 2006


Theses and Dissertations

DNA microarrays are a relatively new technology for assessing the expression levels of thousands of genes simultaneously. Researchers hope to find genes that are differentially expressed by hybridizing cDNA from known treatment sources with various genes spotted on the microarrays. The large number of tests involved in analyzing microarrays has raised new questions in multiple testing. Several approaches for identifying differentially expressed genes have been proposed. This paper considers two: (1) a mixed models approach, and (2) the Significance Analysis of Microarrays.


Dynamic Modeling And Statistical Analysis Of Event Times, Edsel A. Pena Nov 2006


Faculty Publications

This review article provides an overview of recent work in the modeling and analysis of recurrent events arising in engineering, reliability, public health, biomedicine and other areas. Recurrent event modeling possesses unique facets making it different and more difficult to handle than single event settings. For instance, the impact of an increasing number of event occurrences needs to be taken into account, the effects of covariates should be considered, potential association among the interevent times within a unit cannot be ignored, and the effects of performed interventions after each event occurrence need to be factored in. A recent general class …


A Logistic Regression Analysis Of Utah Colleges Exit Poll Response Rates Using Sas Software, Clint W. Stevenson Oct 2006


Theses and Dissertations

In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. …
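The modeling approach described — logistic regression on interview-level response outcomes — can be sketched on synthetic data. The covariates, coefficients, and generating model below are hypothetical stand-ins, not the actual exit-poll variables or SAS analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: each row is one voter contact; the outcome is
# response (1) vs. nonresponse (0).
rng = np.random.default_rng(7)
n = 7562
age = rng.integers(18, 90, n)
female = rng.integers(0, 2, n)

# Assumed generating model: response probability rises modestly with age.
logit = -0.5 + 0.02 * (age - 50) + 0.1 * female
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.binomial(1, p)

X = np.column_stack([age, female])
model = LogisticRegression().fit(X, y)
# model.coef_ recovers the direction of the generating effects.
```

With real data, interviewer characteristics and exogenous factors would enter as additional columns of `X`, and the fitted coefficients quantify each factor's association with response.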


Student Fact Book, Fall 2006, Twenty-Ninth Annual Edition, Wright State University, Office Of Student Information Systems, Wright State University Oct 2006


Wright State University Student Fact Books

The student fact book has general demographic information on all students enrolled at Wright State University for Fall Quarter, 2006.


Multiattribute Acceptance Sampling Plans., Anup Majumdar Dr. Sep 2006


Doctoral Theses

Irrespective of the type of product, evaluation of the conformity of its quality characteristics to specified requirements is an integral part of quality assurance. Although these activities, known as inspection, form a necessary set of verification steps at almost all stages of production, they do not add value to the product on their own and are to be kept to a minimum. Sampling inspection, in which a portion of a collection of product units is inspected on a set of characteristics with a view to making a decision about acceptance or otherwise, becomes relevant in this context. The number of elements of the …


Theory Of Effectiveness Measurement, Richard K. Bullock Sep 2006


Theses and Dissertations

Effectiveness measures provide decision makers feedback on the impact of deliberate actions and affect critical issues such as allocation of scarce resources, as well as whether to maintain or change existing strategy. Currently, however, there is no formal foundation for formulating effectiveness measures. This research presents a new framework for effectiveness measurement from both a theoretical and practical view. First, accepted effects-based principles, as well as fundamental measurement concepts are combined into a general, domain independent, effectiveness measurement methodology. This is accomplished by defining effectiveness measurement as the difference, or conceptual distance from a given system state to some reference …


Bayesian And Positive Matrix Factorization Approaches To Pollution Source Apportionment, Jeff William Lingwall May 2006


Theses and Dissertations

The use of Positive Matrix Factorization (PMF) in pollution source apportionment (PSA) is examined and illustrated. A study of its settings is conducted in order to optimize them in the context of PSA. The use of a priori information in PMF is examined, in the form of target factor profiles and pulling profile elements to zero. A Bayesian model using lognormal prior distributions for source profiles and source contributions is fit and examined.


Abstracts Of Papers, 84th Annual Meeting Of The Virginia Academy Of Science Apr 2006


Virginia Journal of Science

Full abstracts of papers for the 84th Annual Meeting of the Virginia Academy of Science, May 25-26, 2006, Virginia Polytechnic Institute and State University, Blacksburg, VA


Modeling And Simulation Of Value -At -Risk In The Financial Market Area, Xiangyin Zheng Apr 2006


Doctoral Dissertations

Value-at-Risk (VaR) is a statistical approach to measure market risk. It is widely used by banks, securities firms, commodity and energy merchants, and other trading organizations. The main focus of this research is measuring and analyzing market risk by modeling and simulation of Value-at-Risk for portfolios in the financial market area. The objectives are (1) predicting possible future loss for a financial portfolio from VaR measurement, and (2) identifying how the distributions of the risk factors affect the distribution of the portfolio. Results from (1) and (2) provide valuable information for portfolio optimization and risk management.

The model systems chosen …
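The core VaR computation described in objective (1) reduces to a quantile of a simulated profit-and-loss distribution. This is a minimal Monte Carlo sketch under an assumed normal return model, not the dissertation's actual model systems; the portfolio value, return parameters, and confidence level are all hypothetical:

```python
import numpy as np

# One-day 99% Value-at-Risk for a single-asset portfolio.
rng = np.random.default_rng(3)
portfolio_value = 1_000_000.0
mu, sigma = 0.0005, 0.02   # assumed daily return mean and volatility
returns = rng.normal(mu, sigma, 100_000)
pnl = portfolio_value * returns

# VaR is the loss threshold exceeded with 1% probability,
# i.e. the negated 1st percentile of simulated P&L.
var_99 = -np.quantile(pnl, 0.01)
```

Objective (2) corresponds to swapping the normal draw for other risk-factor distributions (e.g. heavy-tailed) and observing how the P&L quantile shifts.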


Analyzing Dna Microarrays With Undergraduate Statisticians, Johanna S. Hardin, Laura Hoopes, Ryan Murphy '06 Jan 2006


Pomona Faculty Publications and Research

With advances in technology, biologists have been saddled with high dimensional data that need modern statistical methodology for analysis. DNA microarrays are able to simultaneously measure thousands of genes (and the activity of those genes) in a single sample. Biologists use microarrays to trace connections between pathways or to identify all genes that respond to a signal. The statistical tools we usually teach our undergraduates are inadequate for analyzing thousands of measurements on tens of samples. The project materials include readings on microarrays as well as computer lab activities. The topics covered include image analysis, filtering and normalization techniques, and …