
Full-Text Articles in Physical Sciences and Mathematics

Parallel Error Tolerance Scheme Based On The Hill Climbing Nature Of Simulated Annealing, Bruce M. Mcmillin, Chul-Eui Hong Jan 1992

Computer Science Faculty Research & Creative Works

In parallelizing simulated annealing in a multicomputer, maintaining the global state S involves explicit message traffic and is a critical performance bottleneck. One way to mitigate this bottleneck is to amortize the overhead of these state updates over as many parallel state changes as possible. Using this technique introduces errors in the calculated cost C(S) of a particular state S used by the annealing process. Analytically derived bounds are placed on this error in order to assure convergence to the correct result. The resulting parallel simulated annealing algorithm dynamically changes the frequency of global updates as a function of the …
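
A minimal serial sketch of the amortization idea described above, assuming a hypothetical `cost`/`neighbor` interface and a made-up `sync_interval` parameter; it is not the authors' multicomputer algorithm, only an illustration of how accepting moves against a stale cost introduces the error whose bound must be respected:

```python
import math
import random

def amortized_annealing(cost, neighbor, state, temperature,
                        cooling=0.95, sync_interval=16, steps=10_000):
    """Serial toy of the amortization idea: the globally synchronized cost is
    refreshed only every sync_interval moves, so acceptance decisions between
    refreshes are made against a possibly stale value -- the source of the
    error in the calculated cost C(S)."""
    stale_cost = cost(state)                    # last synchronized global cost
    for step in range(1, steps + 1):
        candidate = neighbor(state)
        delta = cost(candidate) - stale_cost    # may be in error between syncs
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            state = candidate
        if step % sync_interval == 0:           # amortized "global update"
            stale_cost = cost(state)
        temperature *= cooling
    return state
```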


Software Issues And Performance Of A Parallel Model For Stock Option Pricing, Kim Mills, Gang Cheng, Michael Vinson, Sanjay Ranka Jan 1992

Northeast Parallel Architecture Center

The finance industry is beginning to adopt parallel computing for numerical computation, and will soon be in a position to use parallel supercomputers. This paper examines software issues and performance of a stock option pricing model running on the Connection Machine-2 and DECmpp-12000. Pricing models incorporating stochastic volatility with American call (early exercise) are computationally intensive and require substantial communication. Three parallel versions of a stock option pricing model were developed which varied in data distribution, load balancing, and communication. The performance of this set of increasingly refined models ranged over no improvement, 10 times, and 100 times faster than …


Spede: Simple Programming Environment For Distributed Execution, James Gochee Jan 1992

Dartmouth College Undergraduate Theses

One of the main goals for people who use computer systems, particularly computational scientists, is speed. In the quest for ways to make applications run faster, engineers have developed parallel computers, which use more than one CPU to solve a task. However, many institutions already possess significant computational power in networks of workstations. Through software, it is possible to glue together clusters of machines to simulate a parallel environment. SPEDE is one such system, designed to place the potential of local machines at the fingertips of the programmer. Through a simple interface, users design computational objects that can be linked …


Spede: A Simple Programming Environment For Distributed Execution (Users' Manual), James Gochee Jan 1992

Dartmouth College Undergraduate Theses

Traditional single-processor computers are quickly reaching their full computational potential. The quest for faster and faster chips has brought technology to the point where the laws of physics are hampering future gains. Significant gains in speed must therefore come from using multiple processors instead of a single processor. This technology usually takes the form of a parallel computer, such as the Connection Machine Model 5. Recently, however, much interest has been focused on software that organizes single-processor computers to behave like a parallel computer. This is desirable for sites which have large installations of workstations, since …


A Case For The Use Of Application Generators In The Creation Of Software For The Hotel Industry, Peter O'Connor, Ciaran Mcdonnell Jan 1992

Conference papers

The article makes the case for the use of Program Generators in producing software for the Hotel and Catering Industry.


Some Developments In Information Technology In The Irish Hotel And Catering Industry, Sean Connell, Elaine Sunderland, Ciaran Mcdonnell Jan 1992

Conference papers

This paper describes the current and potential future use of computers in the Hospitality Industry in Ireland. It briefly outlines two research projects which are being carried out in the Dublin College of Catering in the application of computers to the Industry.


Distributed Ray Casting For High-Speed Volume Rendering, Patricia L. Brightbill Jan 1992

Theses and Dissertations

The volume rendering technique known as ray casting or ray tracing is notoriously slow for large volume sizes, yet provides superior images. A technique is needed to accelerate ray tracing of volumes without depending on special-purpose or parallel computers. The realization of, and improvements in, distributed computing over the past two decades have motivated its use in this work. This thesis explores a technique to speed up ray casting through distributed programming. The work investigates the possibility of dividing the volume among general-purpose workstations and casting rays (using Levoy's front-to-back algorithm) through each subvolume independently. The final step being the composition …
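
A small sketch of Levoy-style front-to-back compositing, the per-ray step the subvolume approach relies on; the function names and the premultiplied-color convention are illustrative assumptions, not the thesis implementation:

```python
def composite_front_to_back(samples):
    """Composite (color, opacity) samples along one ray, nearest sample first.
    Color is accumulated in premultiplied form; the loop can stop early once
    the ray is effectively opaque."""
    color, alpha = 0.0, 0.0
    for sample_color, sample_alpha in samples:
        color += (1.0 - alpha) * sample_alpha * sample_color
        alpha += (1.0 - alpha) * sample_alpha
        if alpha > 0.99:                      # early ray termination
            break
    return color, alpha

def merge_segments(segments):
    """Merge per-subvolume results (each already composited as above), given
    in front-to-back order -- the associative 'over' operator, which is what
    lets subvolumes be rendered independently and composited at the end."""
    color, alpha = 0.0, 0.0
    for seg_color, seg_alpha in segments:
        color += (1.0 - alpha) * seg_color    # segment color is premultiplied
        alpha += (1.0 - alpha) * seg_alpha
    return color, alpha
```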


Scheduling Regular And Irregular Communication Patterns On The Cm-5, Ravi Ponnusamy, Rajeev Thakur, Alok Choudhary, Geoffrey C. Fox Jan 1992

Northeast Parallel Architecture Center

In this paper, we study the communication characteristics of the CM-5 and the performance effects of scheduling regular and irregular communication patterns on the CM-5. We consider the scheduling of regular communication patterns such as complete exchange and broadcast. We have implemented four algorithms for complete exchange and studied their performance on a 2D FFT algorithm. We have also implemented four algorithms for scheduling irregular communication patterns and studied their performance on the communication patterns of several synthetic as well as real problems, such as the conjugate gradient solver and the Euler solver.
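
One standard way to schedule a complete exchange is the pairwise (XOR) schedule sketched below; it is shown only to make the term concrete and is not claimed to be one of the paper's four algorithms. The power-of-two processor count and the helper name are assumptions:

```python
def pairwise_exchange_schedule(num_procs):
    """Pairwise schedule for complete exchange (all-to-all personalized
    communication): in step s each processor p exchanges one block with
    p XOR s, so after num_procs - 1 steps every processor has met every
    other exactly once and each step is a contention-free matching."""
    assert num_procs & (num_procs - 1) == 0, "power-of-two processor count assumed"
    return [[(p, p ^ step) for p in range(num_procs)]
            for step in range(1, num_procs)]

if __name__ == "__main__":
    for step, pairs in enumerate(pairwise_exchange_schedule(8), start=1):
        print(f"step {step}: {pairs}")
```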


Software Support For Irregular And Loosely Synchronous Problems, Alok Choudhary, Geoffrey C. Fox, Sanjay Ranka, Seema Hiranandani Jan 1992

Northeast Parallel Architecture Center

A large class of scientific and engineering applications may be classified as irregular and loosely synchronous from the perspective of parallel processing. We present a partial classification of such problems. This classification has motivated us to enhance Fortran D to provide language support for irregular, loosely synchronous problems. We present techniques for parallelization of such problems in the context of Fortran D.


Lessons From Massively Parallel Applications On Message Passing Computers, Geoffrey C. Fox Jan 1992

Northeast Parallel Architecture Center

We review a decade's work on message passing MIMD parallel computers in the areas of hardware, software and applications. We conclude that distributed memory parallel computing works, and describe the implications of this for future portable software systems.


A Large Scale Comparison Of Option Pricing Models With Historical Market Data, Kim Mills, Michael Vinson, Gang Cheng Jan 1992

Northeast Parallel Architecture Center

A set of stock option pricing models is implemented on the Connection Machine-2 and the DECmpp-12000 to compare model prices with historical market data. Improved models, which incorporate stochastic volatility with American call, generally have smaller pricing errors than simpler models based on constant volatility and European call. In a refinement of the comparison between model and market prices, a figure of merit based on the bid/ask spread in the market, and the use of optimization techniques for model parameter estimation, are evaluated. Optimization appears to hold great promise for improving the accuracy of existing pricing models, especially …
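
For concreteness, a sketch of the constant-volatility, European-call baseline that the abstract contrasts with the stochastic-volatility American models -- the standard Black-Scholes formula, with illustrative parameter names; the paper's own models are more elaborate:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, volatility, maturity):
    """European call under constant volatility (Black-Scholes)."""
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * maturity) / (
        volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# e.g. black_scholes_call(spot=100.0, strike=105.0, rate=0.05,
#                         volatility=0.20, maturity=0.5)
```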


Compiling Distribution Directives In A Fortran 90d Compiler, Zeki Bozkus, Alok Choudhary, Geoffrey C. Fox, Sanjay Ranka Jan 1992

Northeast Parallel Architecture Center

Data partitioning and mapping is one of the most important steps in writing a parallel program, especially a data-parallel one. Recently, Fortran D and, subsequently, High Performance Fortran (HPF) have been proposed to allow users to specify data distributions and alignments for arrays in programs. This paper presents the design of a Fortran 90D compiler that takes a Fortran 90D program as input and produces a node program plus message passing calls for distributed memory machines. Specifically, we present the design of the Data Partitioning Module that processes the alignment and distribution directives and illustrate what are the important …
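
The index arithmetic behind BLOCK and CYCLIC distributions is the kind of mapping such a Data Partitioning Module must generate; the sketch below is a generic illustration with hypothetical function names, not the compiler's actual code:

```python
def block_owner(global_index, array_size, num_procs):
    """BLOCK distribution: contiguous chunks of ceil(N/P) elements per
    processor.  Returns (owning processor, local index)."""
    block = -(-array_size // num_procs)       # ceil(N / P)
    return global_index // block, global_index % block

def cyclic_owner(global_index, num_procs):
    """CYCLIC distribution: element i lives on processor i mod P."""
    return global_index % num_procs, global_index // num_procs

# With N = 10 and P = 4, BLOCK places indices 0-2, 3-5, 6-8, 9 on
# processors 0..3, while CYCLIC deals the indices out round-robin.
```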


Which Applications Can Use High Performance Fortran And Fortran-D: Industry Standard Data Parallel Languages?, Alok Choudhary, Geoffrey C. Fox, Tomasz Haupt, S. Ranka Jan 1992

Northeast Parallel Architecture Center

In this paper, we present the first, preliminary results of HPF/Fortran-D language analysis based on compiling and running benchmark applications using a prototype implementation of an HPF/Fortran-D compiler. The analysis indicates that HPF is a very convenient tool for programming many applications on massively parallel and/or distributed systems. In addition, we accumulate experience on how to parallelize irregular problems in order to extend the scope of Fortran-D beyond HPF and suggest future extensions to the Fortran standard.


A Prototype Document Image Analysis System For Technical Journals, George Nagy, Sharad C. Seth, Mahesh Viswanathan Jan 1992

School of Computing: Faculty Publications

Intelligent document segmentation can bring electronic browsing within the reach of most users. The authors show how this is achieved through document processing, analysis, and parsing the graphic sentence.


Fractal (Reconstructive Analogue) Memory, David J. Stucki, Jordan B. Pollack Jan 1992

Mathematics Faculty Scholarship

This paper proposes a new approach to mental imagery that has the potential for resolving an old debate. We show that the methods by which fractals emerge from dynamical systems provide a natural computational framework for the relationship between the “deep” representations of long-term visual memory and the “surface” representations of the visual array, a distinction proposed by Kosslyn (1980). The concept of an iterated function system (IFS) as a highly compressed representation for a complex topological set of points in a metric space (Barnsley, 1988) is embedded in a connectionist model for mental imagery tasks. Two advantages …
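
To make the IFS idea concrete: a handful of contractive affine maps fully determines an intricate attractor, which is the sense in which an IFS is a highly compressed representation. The chaos-game sketch below (Sierpinski triangle, illustrative only) is unrelated to the paper's connectionist model:

```python
import random

# Three maps, each contracting the plane by 1/2 toward one vertex; their
# attractor is the Sierpinski triangle.  The whole "image" is encoded by
# just these three affine maps.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(iterations=50_000):
    x, y = 0.5, 0.5
    points = []
    for _ in range(iterations):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points
```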


A Comparison Of Text And Graphics As Effective User Interfaces In A Proposed Advanced Train Control System, Kenneth A. Organes Jan 1992

UNF Graduate Theses and Dissertations

This study tested the effectiveness of three screen interface formats for use in a proposed on-board computer to be used as part of an advanced train control system. One of the three formats contained only graphic depictions of the data, one contained only textual representations and the third had a mixture of graphic and textual data.

Twenty-eight subjects with varying railroad experience were recruited for the experiment. They were divided into three groups, each of which was presented with a simulation using one of the three interface types. The simulations depicted possible engine, track and train conditions that might be …


Fault-Tolerant Concurrent Branch And Bound Algorithms Derived From Program Verification, Hanan Lutfiyya, Aggie Sun, Bruce M. Mcmillin Jan 1992

Computer Science Faculty Research & Creative Works

An important aspect which is often overlooked in software design of distributed environments is that of fault tolerance. Many methodologies in the past have attempted to provide fault tolerance efficiently but have never been successful at eliminating explicit time and space redundancy. One approach for providing fault tolerance is through examining the behavior and properties of the application and deriving executable assertions that detect faults. Our work focuses on transforming the assertions of a verification proof of a program to executable assertions. These executable assertions may be embedded in the program to create a fault-tolerant program. It is also shown …
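
A toy illustration of an executable assertion embedded in a branch-and-bound step; the invariant and names are invented for illustration and are not derived from the authors' verification proof:

```python
def update_incumbent(incumbent_cost, candidate_cost, lower_bound):
    """Accept a candidate solution in a (minimizing) branch-and-bound search.
    The run-time assertions encode an invariant -- the incumbent never worsens
    and never falls below a provable lower bound -- so a fault that produces
    an impossible cost is detected rather than silently corrupting the search."""
    new_incumbent = min(incumbent_cost, candidate_cost)
    assert new_incumbent <= incumbent_cost, "incumbent worsened"
    assert new_incumbent >= lower_bound, "cost below provable lower bound"
    return new_incumbent
```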


Electronic Branching Ratio Of The Tau Lepton, R. Ammar, Manoj Thulasidas Jan 1992

Research Collection School Of Computing and Information Systems

Using data accumulated by the CLEO I detector operating at the Cornell Electron Storage Ring, we have measured the ratio R = Γ(τ → e ν̄_e ν_τ) / Γ_1, where Γ_1 is the τ decay rate to final states with one charged particle. We find R = 0.2231 ± 0.0044 ± 0.0073, where the first error is statistical and the second is systematic. Together with the measured topological one-charged-particle branching fraction, this yields the branching fraction of the τ lepton to electrons, B_e = 0.192 ± 0.004 ± 0.006.
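
A quick consistency check, assuming the relation B_e = R · B_1 implied by the last sentence (B_1, the one-prong topological branching fraction, is not quoted in the abstract):

```latex
R = \frac{\Gamma(\tau \to e\,\bar{\nu}_e\,\nu_\tau)}{\Gamma_1} = 0.2231,
\qquad
B_e = R \, B_1 = 0.192
\;\Longrightarrow\;
B_1 = \frac{B_e}{R} \approx \frac{0.192}{0.2231} \approx 0.861 .
```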


Measurement Of The Tau Lepton Electronic Branching Fraction, D. Akerib, M. Thulasidas Jan 1992

Research Collection School Of Computing and Information Systems

The tau lepton electronic branching fraction has been measured with the CLEO II detector at the Cornell Electron Storage Ring as B_e = 0.1749 ± 0.0014 ± 0.0022, with the first error statistical and the second systematic. The measurement involves counting electron-positron annihilation events in which both taus decay to electrons, and normalizing to the number of tau-pair decays expected from the measured luminosity. Detected photons in these events constitute a definitive observation of tau decay radiation.


Constrained Completion: Theory, Implementation, And Results, Daniel Patrick Murphy Jan 1992

Doctoral Dissertations

"The Knuth-Bendix completion procedure produces complete sets of reductions but can not handle certain rewrite rules such as commutativity. In order to handle such theories, completion procedure were created to find complete sets of reductions modulo an equational theory. The major problem with this method is that it requires a specialized unification algorithm for the equational theory. Although this method works well when such an algorithm exists, these algorithms are not always available and thus alternative methods are needed to attack problems. A way of doing this is to use a completion procedure which finds complete sets of constrained reductions. …


System Design Quality And Efficiency Of System Analysts: An Automated Case Tool Versus A Manual Method, Satomi H. Sugishita Jan 1992

UNF Graduate Theses and Dissertations

The purpose of the current research study is to find out if CASE tools help to increase the software design quality and efficiency of system analysts and designers when they modify a system design document. Results of the experimental data analysis show that only the experience level of subjects had an effect on the quality of their work. Results indicated that the design method, whether CASE tool or manual, has no significant effect on either the quality of the modification task or the efficiency of system analysts and designers.


The Group Spreadsheet, George Francis Morrissey Jr. Jan 1992

UNF Graduate Theses and Dissertations

Groupware is fast becoming an important part of the computing world. This thesis reviews past history and research in which a group oriented spreadsheet is shown to have a real purpose in today's business world. A group oriented spreadsheet was implemented using a public domain package called The Spreadsheet Calculator. Spreadsheet users and programmers tested the implementation. The results and conclusions of this implementation are also presented.


Towards The Integration Of Object-Oriented Constructs Within Structured Query Language (Sql), Paul Francis Rabuck Jan 1992

UNF Graduate Theses and Dissertations

This paper explores the possibility of coupling SQL with a semantic data model. For this study, the primary objective was to build a working prototype of a program that allows a database designer to define data objects and their respective interrelationships using the Object-oriented Semantic Association Model (OSAM*).

The prototype isolates the designer from the low-level commands (e.g., CREATE TABLE, CREATE INDEX) which comprise the SQL data definition language (DDL). Once the objects are defined by the designer, the prototype generates the relational database table definitions without the designer having to use the SQL DDL directly.
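
A minimal sketch of the kind of DDL generation described, assuming a made-up object description; it does not use OSAM* syntax or the prototype's actual interface:

```python
def generate_ddl(entity, attributes, key):
    """Emit the low-level SQL DDL (CREATE TABLE / CREATE INDEX) that the
    designer would otherwise write by hand.  `attributes` maps column names
    to SQL types; `key` names the column to index."""
    columns = ",\n  ".join(f"{name} {sql_type}"
                           for name, sql_type in attributes.items())
    return (f"CREATE TABLE {entity} (\n  {columns}\n);\n"
            f"CREATE INDEX {entity}_{key}_idx ON {entity} ({key});")

print(generate_ddl("guest", {"guest_id": "INTEGER", "name": "CHAR(40)"}, "guest_id"))
```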


Wright State University College Of Engineering And Computer Science Bits And Pcs Newsletter, Volume 8, Number 1, January 1992, College Of Engineering And Computer Science, Wright State University Jan 1992

BITs and PCs Newsletter

A ten page newsletter created by the Wright State University College of Engineering and Computer Science that addresses the current affairs of the college.


Applying Metrics To Rule-Based Systems, Paul Doyle, Renaat Verbruggen Jan 1992

Other

Since the introduction of software measurement theory in the early seventies, it has been accepted that in order to control software it must first be measured. Unambiguous and reproducible measurements are considered to be the most useful in controlling software productivity, costs and quality, and diverse sets of measurements are required to cover all aspects of software. This paper focuses on measures for rule-based language systems and also describes a process for developing measures for other non-standard 3GL development tools. This paper uses KEL as an example, and the method allows the re-use of existing measures and indicates if and …


Global Domination Of Factors Of A Graph, Julie R. Carrington Jan 1992

Retrospective Theses and Dissertations

A factoring of a graph G = (V, E) is a collection of spanning subgraphs F_1, F_2, ..., F_k, known as factors, into which the edge set E has been partitioned. A dominating set of a graph is a set of nodes such that every node in the graph is either contained in the set or has an edge to some node in the set. Each factor F_i is itself a graph and so has a dominating set. This set is called a local dominating set or LDS. An LDS of minimum …
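
The definitions quoted above translate directly into a small check; the adjacency-dictionary representation is an illustrative assumption:

```python
def dominates(adjacency, candidate):
    """True if every node is in `candidate` or adjacent to a member of it."""
    dom = set(candidate)
    return all(v in dom or dom & set(neighbors)
               for v, neighbors in adjacency.items())

def is_global_dominating_set(factors, candidate):
    """A global dominating set must dominate every factor of the factoring,
    where each factor is given as its own adjacency dictionary."""
    return all(dominates(factor, candidate) for factor in factors)
```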


On Hamiltonian Line Graphs, Zhi-Hong Chen Jan 1992

Scholarship and Professional Work - LAS

No abstract provided.


Porting The Chorus Supervisor And Related Low-Level Functions To The Pa-Risc, Ravi Konuru, Marion Hakanson, Jon Inouye, Jonathan Walpole Jan 1992

Computer Science Faculty Publications and Presentations

This document is part of a series of reports describing the design decisions made in porting the Chorus Operating System to the Hewlett-Packard 9000 Series 800 workstation.

The Supervisor is the name given by Chorus to a collection of low-level functions that are machine dependent and have to be implemented when Chorus is ported from one machine to another. The Supervisor is responsible for interrupt, trap and exception handling, managing low-level thread initialization, context switch, kernel initialization, managing simple devices (timer and console) and offering a low-level debugger. This document describes the port of the Supervisor and related low-level functions. …


[Introduction To] The Vax Book: An Introduction, John R. Hubbard Jan 1992

Bookshelf

This book is an expansion of the book, A Gentle Introduction to the Vax System. The purpose of the book is to guide the novice, step-by-step, through the initial stages of learning to use the Digital Equipment Corporation's Vax computers, running under the VMS operating system (Version 5.0 or later). As a tutorial for beginners, this book assumes no previous experience with computers.


Instructional Use Of Computers For Entry-Level Physical Therapy Education In The United States, Edmund M. Kosmahl Jan 1992

CCE Theses and Dissertations

Little was known about the value of computer-assisted instruction (CAI) for entry-level physical therapy (PT) education. Factors that affect implementation and use of CAI in entry-level PT education had not been identified. Because of this paucity of information, decision-making about the implementation and use of CAI in entry-level PT education had been hampered.

This study used mail questionnaire survey methods to find:

  1. The extent of use of computer-assisted instruction (CAI) in entry-level physical therapy (PT) education.
  2. The perceived value of CAI compared to more traditional instructional methods for entry-level PT education.
  3. What factors affect implementation and use of CAI …