Full-Text Articles in Physical Sciences and Mathematics

Runtime And Language Support For Compiling Adaptive Irregular Programs On Distributed Memory Machines, Yuan-Shin Hwang, Bongki Moon, Shamik D. Sharma, Ravi Ponnusamy Jan 1995

Northeast Parallel Architecture Center

In many scientific applications, arrays containing data are indirectly indexed through indirection arrays. Such scientific applications are called irregular programs and form a distinct class of applications that require special techniques for parallelization. This paper presents a library called CHAOS, which helps users implement irregular programs on distributed-memory message-passing machines such as the Paragon, Delta, CM-5 and SP-1. The CHAOS library provides efficient runtime primitives for distributing data and computation over processors; it supports efficient index translation mechanisms and provides users with high-level mechanisms for optimizing communication. CHAOS subsumes the previous PARTI library and supports a larger class of applications. In …
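
The CHAOS primitives themselves are not reproduced in this listing. As a rough illustration of the inspector/executor pattern the abstract alludes to, here is a minimal Python sketch (all names hypothetical) of an inspector that groups the accesses implied by an indirection array by owning processor:

    # Hypothetical inspector step: under a block distribution, determine
    # which remote elements must be fetched before the compute phase.
    def build_schedule(indirection, nprocs, block):
        owner = lambda g: g // block          # owner of global index g
        fetches = {p: [] for p in range(nprocs)}
        for g in indirection:
            fetches[owner(g)].append(g)       # group requests by owner
        return fetches

    # a processor that computes with x[ind[i]] for ind = [0, 5, 9, 2]:
    print(build_schedule([0, 5, 9, 2], nprocs=2, block=5))
    # -> {0: [0, 2], 1: [5, 9]}   (elements 5 and 9 are remote)

A real inspector would also translate each global index to a local buffer offset, which is the index translation the abstract mentions.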


Cluster Computing Review, Mark Baker, Geoffrey C. Fox, Hon W. Yau Jan 1995

Northeast Parallel Architecture Center

In the past decade there has been a dramatic shift from mainframe or ‘host-centric’ computing to a distributed ‘client-server’ approach. In the next few years this trend is likely to continue, with further shifts towards ‘network-centric’ computing becoming apparent. All these trends were set in motion by the invention of the mass-reproducible microprocessor by Ted Hoff of Intel some twenty-odd years ago. The present generation of RISC microprocessors is now more than a match for mainframes in terms of cost and performance. The long-foreseen day when collections of RISC microprocessors assembled together as a parallel computer could outperform the …


Parallel Remapping Algorithms For Adaptive Problems, Chao Wei Ou, Sanjay Ranka Jan 1995

Northeast Parallel Architecture Center

In this paper we present fast parallel algorithms for remapping a class of irregular and adaptive problems on coarse-grained distributed memory machines. We show that the remapping of these applications, using a simple index-based mapping algorithm, can be reduced to sorting a nearly sorted list of integers or merging an unsorted list of integers with a sorted list of integers. By using the algorithms we have developed, the remapping of these problems can be achieved at a fraction of the cost of mapping from scratch. Experimental results are presented on the CM-5.
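
As a concrete reading of the reduction described above, here is a small Python sketch (illustrative, not the paper's algorithm) of the second case: merging a short unsorted list of migrated indices into an already sorted list, which is far cheaper than re-sorting everything from scratch:

    import bisect

    def merge_unsorted_into_sorted(sorted_list, unsorted_list):
        out = list(sorted_list)
        for x in sorted(unsorted_list):   # sort only the small list
            bisect.insort(out, x)         # binary-search, then insert
        return out

    print(merge_unsorted_into_sorted([1, 4, 7, 9], [8, 2]))
    # -> [1, 2, 4, 7, 8, 9]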


High Performance Distributed Computing, Geoffrey C. Fox Jan 1995

Northeast Parallel Architecture Center

High Performance Distributed Computing (HPDC) is driven by the rapid advance of two related technologies -- those underlying computing and communications, respectively. These technology pushes are linked to application pulls, which vary from the use of a cluster of some 20 workstations simulating fluid flow around an aircraft, to the complex linkage of several hundred million advanced PCs around the globe to deliver and receive multimedia information. The review of base technologies and exemplar applications is followed by a brief discussion of software models for HPDC, which are illustrated by two extremes -- PVM and the conjectured future World Wide …


Software Tool Evaluation Methodology, Salim Hariri, Sung Yong Park, Rajashekar Reddy, Mahesh Subramanyan Jan 1995

Northeast Parallel Architecture Center

The recent development of parallel and distributed computing software has introduced a variety of software tools that support several programming paradigms and languages. This variety of tools makes the selection of the best tool to run a given class of applications on a parallel or distributed system a non-trivial task that requires some investigation. We expect tool evaluation to receive more attention as the deployment and usage of distributed systems increases. In this paper, we present a multi-level evaluation methodology for parallel/distributed tools in which tools are evaluated from different perspectives. We apply our evaluation methodology to three message passing …


Communication Strategies For Out-Of-Core Programs On Distributed Memory Machines, Rajesh Bordawekar, Alok Choudhary Jan 1995

Northeast Parallel Architecture Center

In this paper, we show that communication in out-of-core distributed memory problems requires both inter-processor communication and file I/O. Given that primary data structures reside in files, even communication requires I/O. Thus, it is important to optimize the I/O costs associated with a communication step. We present three methods for performing communication in out-of-core distributed memory problems. The first method, termed the “out-of-core” communication method, follows a loosely synchronous model; computation and communication phases in this case are clearly separated, and communication requires permutation of data in files. The second method, termed “demand-driven in-core communication”, considers only communication required of …
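
To make the first method concrete, here is a minimal Python sketch (names and record format are assumptions, not the paper's code) of a communication step realized as a permutation of records in a file:

    import struct

    RECORD = struct.Struct("d")   # one float64 per record

    def permute_file(src_path, dst_path, perm):
        # write dst so that dst[i] = src[perm[i]]
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            for g in perm:
                src.seek(g * RECORD.size)
                dst.write(src.read(RECORD.size))

A practical implementation would stage reads through large in-core buffers rather than seeking once per element; the point here is only that the “communication” is itself file I/O.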


High Performance Fortran And Possible Extensions To Support Conjugate Gradient Algorithms, K. Dincer, Ken Hawick, Alok Choudhary, Geoffrey C. Fox Jan 1995

Northeast Parallel Architecture Center

We evaluate the High-Performance Fortran (HPF) language for the compact expression and efficient implementation of conjugate gradient iterative matrix-solvers on High Performance Computing and Communications (HPCC) platforms. We discuss the use of intrinsic functions, data distribution directives and explicitly parallel constructs to optimize performance by minimizing communications requirements in a portable manner. We focus on implementations using the existing HPF definitions, but also discuss issues arising that may influence a revised definition for HPF-2. Some of the codes discussed are available on the World Wide Web at http://www.npac.syr.edu/hpfa/ along with other educational and discussion material related to applications in HPF.
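
For orientation, the kernel being expressed in HPF is the standard conjugate gradient iteration. A plain sequential NumPy version is sketched below (an illustration of the algorithm, not the paper's HPF code); the array operations and the matrix-vector product are exactly what HPF's distribution directives would parallelize:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                 # initial residual
        p = r.copy()                  # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                # the communication-heavy step
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]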


Exploiting High Performance Fortran For Computational Fluid Dynamics, Volume 919, Ken Hawick, Geoffrey C. Fox Jan 1995

Northeast Parallel Architecture Center

We discuss the High Performance Fortran data parallel programming language as an aid to software engineering and as a tool for exploiting High Performance Computing systems for computational fluid dynamics applications. We discuss the use of intrinsic functions, data distribution directives and explicitly parallel constructs to optimize performance by minimizing communications requirements in a portable manner. In particular we use an implicit method such as the ADI algorithm to illustrate the major issues. We focus on regular mesh problems, since these can be efficiently represented by the existing HPF definition, but also discuss issues arising from the use of irregular …
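
Each ADI half-step reduces to many independent tridiagonal solves along one mesh direction, which is why the algorithm is a good stress test for a data-parallel language. A minimal Thomas-algorithm solver in Python, for orientation only (the paper works in HPF):

    def thomas(a, b, c, d):
        # solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward sweep
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # x + y = 3;  x + 2y + z = 8;  y + 3z = 11  ->  [1, 2, 3]
    print(thomas([0, 1, 1], [1, 2, 3], [1, 1, 0], [3, 8, 11]))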


The Use Of The National Information Infrastructure And High Performance Computers In Industry, Geoffrey C. Fox, Wojtek Furmanski Jan 1995

Northeast Parallel Architecture Center

We divide potential NII (National Information Infrastructure) services into five broad areas: collaboration and televirtuality; InfoVISiON (Information, Video, Imagery, and Simulation on Demand) and digital libraries; commerce; metacomputing; and WebTop productivity services. The last denotes the broad suite of tools we expect to be offered on the Web in a general environment we term WebWindows. We review current and future World Wide Web technologies which could underlie these services. In particular, we suggest an integration framework, WebWork, for High Performance (parallel and distributed) computing and the NII. We point out that pervasive WebWork and WebWindows technologies will enable, facilitate and substantially …


Supporting Irregular Distributions Using Data-Parallel Languages, Ravi Ponnusamy, Yuan-Shin Hwang, Raja Das, Alok Choudhary, Geoffrey Fox Jan 1995

Northeast Parallel Architecture Center

Languages such as Fortran D provide irregular distribution schemes that can efficiently support irregular problems. Irregular distributions can also be emulated in HPF. Compilers can incorporate runtime procedures to automatically support these distributions.
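
A one-line picture of the idea (illustrative Python, not Fortran D or HPF syntax): an irregular distribution is just an explicit map array naming each element's owner, in contrast to a regular BLOCK or CYCLIC rule.

    # map_array[i] names the processor that owns element i
    map_array = [0, 0, 2, 1, 2, 0, 1, 1]

    def local_elements(map_array, proc):
        return [i for i, owner in enumerate(map_array) if owner == proc]

    print(local_elements(map_array, 2))   # -> [2, 4]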


Basic Issues And Current Status Of Parallel Computing -- 1995, Geoffrey C. Fox Jan 1995

Northeast Parallel Architecture Center

The best enterprises have both a compelling need pulling them forward and an innovative technological solution pushing them on. In high-performance computing, we have the need for increased computational power in many applications and the inevitable long-term solution is massive parallelism. In the short term, the relation between pull and push may seem unclear as novel algorithms and software are needed to support parallel computing. However, eventually parallelism will be present in all computers -- including those in your children's video game, your personal computer or workstation, and the central supercomputer.


Liability Issues In The Development Of Electronic Chart Display Information Systems, Daniel R. Martin Jan 1995

Marine Affairs Theses and Major Papers

The Electronic Chart Display Information System (ECDIS) is a new and evolving aid to navigation. Proponents claim ECDIS will help navigators to synthesize previously disparate information and result in safer navigation. The technology to implement ECDIS already exists; the major hurdle the maritime community faces is the legal uncertainty associated with ECDIS. This paper investigates the potential legal impact ECDIS would have on the government, shipowners, mariners, and manufacturers, and evaluates current international efforts to promote ECDIS. Through a detailed analysis of admiralty and aeronautical case history, it is evident that: 1) generation of electronic nautical charts can pose a …


Quantitative Object Motion Prediction By An Art2 And Madaline Combined Neural Network: Concepts And Experiments, Qiuming Zhu, Ahmed Y. Tawfik Jan 1995

Computer Science Faculty Publications

A combined ART2 and Madaline neural network is applied to predicting object motion in dynamic environments. The ART2 network extracts a set of coherent patterns of the object motion through its self-organizing, unsupervised learning; the identified patterns are then fed to the Madaline network to generate a quantitative prediction of future motion states. The method requires no presumed mathematical model of the motion and is applicable to a variety of situations.
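
Only the quantitative half lends itself to a few lines of illustration. Below is a minimal Python sketch of an adaline-style unit trained with the LMS (Widrow-Hoff) rule to extrapolate a motion state; the ART2 pattern-extraction stage is not reproduced, and the whole setup is an assumption for illustration:

    def lms_step(w, x, target, lr=0.01):
        y = sum(wi * xi for wi, xi in zip(w, x))   # linear prediction
        err = target - y
        return [wi + lr * err * xi for wi, xi in zip(w, x)], y

    # learn to extrapolate constant-velocity motion: next = 2*x1 - x0
    w = [0.0, 0.0]
    for t in range(20000):
        x0 = float(t % 7)
        w, _ = lms_step(w, [x0, x0 + 1.0], target=x0 + 2.0)
    print([round(wi, 2) for wi in w])   # converges toward [-1.0, 2.0]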


A Knowledge-Based Approach To Class Scheduling, Mara Zell Jan 1995

Honors Theses, 1963-2015

A class scheduling application was developed to assist department chairs in producing class schedules each semester. This was accomplished using a knowledge-based system, which exploits the many constraints involved in the class scheduling process to solve the problem. The application was developed and implemented in an object-oriented package called PowerBuilder, so it is Windows-based with point-and-click features. Three trial schedules were produced. These results demonstrate the ability of the application to schedule three types of classes: classes without labs, classes with one lab, and classes with two labs. The end result is that an …


Ensuring The Satisfaction Of A Temporal Specification At Run-Time, Grace Tsai, Matt Insall, Bruce M. Mcmillin Jan 1995

Mathematics and Statistics Faculty Research & Creative Works

A responsive computing system is a hybrid of real-time, distributed and fault-tolerant systems. In such a system, severe consequences can occur if the run-time behavior does not conform to the expected behavior or specifications. In this paper, we present a formal approach to ensuring satisfaction of the specifications in the operational environment, as follows. First, we specify the behavior of the system using Interval Temporal Logic (ITL). Next, we give algorithms for trace checking of programs in such systems. Finally, we present a fully distributed run-time evaluation system which causally orders the events of the system during its execution and checks …
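
As a toy illustration of run-time trace checking (not the paper's ITL algorithms), the following Python sketch evaluates simple temporal properties over a recorded trace of states:

    def always(pred, trace):
        return all(pred(s) for s in trace)

    def eventually(pred, trace):
        return any(pred(s) for s in trace)

    trace = [{"req": True, "ack": False}, {"req": True, "ack": True}]
    # property: every state with a request is followed by an acknowledgement
    ok = all(eventually(lambda s: s["ack"], trace[i:])
             for i, s in enumerate(trace) if s["req"])
    print(ok)   # -> True

The paper's contribution is doing this kind of checking in a fully distributed way, with the system's events causally ordered during execution.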


An Efficient Multicomputer Algorithm For The Solution Of Chemical Process Flowsheeting Equations, Fikret Ercal, Neil L. Book, S. Pait, J. J. Fielding Jan 1995

Computer Science Faculty Research & Creative Works

This paper presents a parallel method for solving the large sparse systems of linear equations that arise in a chemical process flowsheeting application on a message-passing multicomputer. To maximize performance, the algorithm uses a novel matrix decomposition and solution method, called parallel two-phased LU decomposition, which schedules the concurrent tasks in a maximally overlapping manner while trying to minimize interprocessor data dependencies and obtain optimal load balancing. The forward elimination step is performed concurrently with the parallel two-phased LU decomposition step, and backward substitution is parallelized in a piecewise manner. Implementation results …
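
For orientation only, the sequential kernel underneath is ordinary LU decomposition; the paper's contribution is the parallel two-phased variant that overlaps forward elimination with the decomposition itself. A minimal dense Doolittle sketch in Python (the real flowsheeting systems are large and sparse):

    import numpy as np

    def lu(A):
        n = len(A)
        L, U = np.eye(n), A.astype(float).copy()
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]    # elimination multiplier
                U[i, :] -= L[i, k] * U[k, :]   # eliminate row i below k
        return L, U

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    L, U = lu(A)
    print(np.allclose(L @ U, A))   # -> True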


Improved Tau Polarisation Measurement, D. Buskulic, M. Thulasidas Jan 1995

Research Collection School Of Computing and Information Systems

Using 22 pb⁻¹ of data collected at LEP in 1992 on the peak of the Z resonance, the ALEPH collaboration has measured the polarisation of tau leptons decaying into eνν̄, μνν̄, πν, ρν and a1ν from their individual decay product distributions. The measurement of the tau polarisation as a function of the production polar angle yields the two parameters A_τ and A_e, where, in terms of the axial and vector couplings g_Al and g_Vl, A_l = 2 g_Vl g_Al / (g_Vl² + g_Al²). This analysis follows to a large extent the methods devised for the 1990 and 1991 data, but with improvements which bring a better …


First Measurement Of The Quark-To-Photon Fragmentation Function, D. Buskulic, Manoj Thulasidas Jan 1995

Research Collection School Of Computing and Information Systems

Earlier measurements at LEP of isolated hard photons in hadronic Z decays, attributed to radiation from primary quark pairs, have been extended in the ALEPH experiment to include hard photon production inside hadron jets. Events are selected where all particles combine democratically to form hadron jets, one of which contains a photon with a fractional energy z > 0.7. After statistical subtraction of non-prompt photons, the quark-to-photon fragmentation function, D(z), is extracted directly from the measured 2-jet rate. By taking into account the perturbative contributions to D(z) obtained from an O(α_s) QCD calculation, the unknown non-perturbative component of D(z) is …


Inclusive Production Of Neutral Vector Mesons In Hadronic Z Decays, D. Buskulic, Manoj Thulasidas Jan 1995

Research Collection School Of Computing and Information Systems

Data on the inclusive production of the neutral vector mesons ρ0(770), ω(782), K*0(892), and φ(1020) in hadronic Z decays recorded with the ALEPH detector at LEP are presented and compared to Monte Carlo model predictions. Bose-Einstein effects are found to be important in extracting a reliable value for the ρ0 production rate. An average ρ0 multiplicity of 1.45±0.21 per event is obtained. The ω is detected via its three-pion decay mode ω → π⁺π⁻π⁰ and has a total rate of 1.07±0.14 per event. The multiplicity of the K*0 is 0.83±0.09, whilst that of the φ is 0.122±0.009, both measured using their …


Multiple Query Optimization With Depth-First Branch-And-Bound And Dynamic Query Ordering, Ee Peng Lim, Ahmet Cosar, Jaideep Srivastava Jan 1995

Research Collection School Of Computing and Information Systems

In certain database applications, such as deductive databases, batch query processing, and recursive query processing, a single query is usually transformed into a set of closely related database queries. Great benefits can be obtained by executing a group of related queries together in a single unified multi-plan instead of executing each query separately. To achieve this, Multiple Query Optimization (MQO) identifies common tasks (e.g., common subexpressions, joins) among a set of query plans and creates a single unified plan (multi-plan) which can be executed to obtain the required outputs for all queries at once. …


Experimenting With The Finite Element Method In The Calculation Of Radiosity Form Factors, Donna Marie Chesteen Jan 1995

Retrospective Theses and Dissertations

Radiosity has been used to create some of the most photorealistic computer-generated images to date. The problem, however, is that radiosity algorithms are so expensive in computation and memory that few applications can employ them successfully. Form factor calculation is the most costly part of the process. This report describes an algorithm that uses the finite element method to reduce the time spent in the form factor calculation portion of the radiosity algorithm. This technique for form factor calculation significantly reduces the number of projections done at each iteration by using shape functions to determine the distribution …
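
For context, the quantity being approximated is the form factor between surface patches. The differential form of the standard kernel is easy to state in code (this is the textbook radiosity geometry, not the thesis's shape-function method):

    import math

    def dform_factor(pi, ni, pj, nj, dAj):
        # fraction of energy leaving dA_i that arrives at dA_j
        d = [b - a for a, b in zip(pi, pj)]
        r2 = sum(c * c for c in d)
        r = math.sqrt(r2)
        cos_i = sum(n * c for n, c in zip(ni, d)) / r
        cos_j = -sum(n * c for n, c in zip(nj, d)) / r
        return max(cos_i, 0.0) * max(cos_j, 0.0) * dAj / (math.pi * r2)

    # two parallel patches facing each other, unit distance apart
    print(dform_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 0.01))
    # -> 0.01/pi ~ 0.00318

Evaluating this kernel over many patch pairs, with visibility tests, is what dominates the cost the thesis attacks.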


A Portable Computer System For Recording Heart Sounds And Data Modeling Using A Backpropagation Neural Network, Erik Mark Hudson Jan 1995

UNF Graduate Theses and Dissertations

Cardiac auscultation is the primary tool used by cardiologists to diagnose heart problems. Although effective, auscultation is limited by the acuity of human hearing. Digital sound technology and the pattern classification ability of neural networks may offer improvements in this area. Digital sound technology is now widely available on personal computers in the form of sound cards. A good deal of research over the last fifteen years has shown that neural networks can excel in diagnostic problem solving. To date, most research involving cardiology and neural networks has focused on ECG pattern classification. This thesis explores the prospects of recording …


Interaction And Interdependency Of Software Engineering Methods And Visual Programming, Robert A. Touchton Jan 1995

UNF Graduate Theses and Dissertations

Visual Programming Languages and Visual Programming Tools incorporate non-procedural coding mechanisms that may duplicate, or perhaps even conflict with, the analysis and design mechanisms promulgated by the mainstream Software Engineering methodologies. By better understanding such duplication and conflict, software engineers can take proactive measures to accommodate and, ideally, eliminate them. Better still, there may be opportunities for synergy that can be exploited if one is looking for them.

This research explored, documented and classified the interactions and interdependencies, both positive (synergies) and negative (conflicts), between two closely related and rapidly evolving Computer Science subdisciplines: software engineering and visual programming. A …


Patentability Of Computer Inventions, Robert C. F. Perez Jan 1995

Dissertations, Theses, and Masters Projects

No abstract provided.


A Taxonomy Of Workgroup Computing Applications, Warren Von Worley Jan 1995

CCE Theses and Dissertations

The goal of workgroup computing is to help individuals and groups efficiently perform a wide range of functions on networked computer systems (Ellis, Gibbs, & Rein, 1991). Early workgroup computing tools were designed for limited functionality and group interaction (Craighill, 1992). Current workgroup computing applications do not allow enough control of group processes and they provide little correlation between various workgroup computing application areas (Rodden and Blair, 1991). An integrated common architecture may produce more effective workgroup computing applications. Integrating common support functions into a common framework will avoid duplication of these functions for each workgroup computing application (Pastor & …


Hardware Assists For High Performance Computing Using A Mathematics Of Arrays, Hardy J. Pottinger, W. Eatherton, J. Kelly, T. Schiefelbein, Lenore Mullin, R. Ziegler Jan 1995

Electrical and Computer Engineering Faculty Research & Creative Works

Work in progress at the University of Missouri-Rolla on hardware assists for high performance computing is presented. This research consists of a novel field programmable gate array (FPGA) based reconfigurable coprocessor board (the Chameleon Coprocessor) being used to evaluate hardware architectures for speedup of array computation algorithms. These algorithms are developed using a Mathematics of Arrays (MOA). They provide a means to generate addresses for data transfers that require less data movement than more traditional algorithms. In this manner, the address generation algorithms are acting as an intelligent data prefetching mechanism or special purpose cache controller. Software implementations have been …


Shape Reconstruction From Shading Using Linear Approximation, Ping Sing Tsai Jan 1995

Retrospective Theses and Dissertations

Shape from shading (SFS) deals with the recovery of 3D shape from a single monocular image. This problem was formally introduced by Horn in the early 1970s. Since then it has received considerable attention, and several efforts have been made to improve the shape recovery. In this thesis, we present a fast SFS algorithm, which is a purely local method and is highly parallelizable. In our approach, we first discretize the surface gradients, p and q, using finite differences, then linearize the reflectance function in the depth Z(x, y) instead of in p and q. This …
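
A sketch of the linear-in-depth idea at a single pixel, assuming a Lambertian surface lit from the viewing direction (illustrative of the approach described above, not the thesis code): with finite differences p = Z(x,y) − Z(x−1,y) and q = Z(x,y) − Z(x,y−1), the image irradiance equation E = R(p, q) becomes an equation in Z alone, solved by Newton iteration.

    import math

    def newton_step(E, Z, Z_left, Z_down, R, dR_dZ, eps=1e-8):
        # one Newton update of f(Z) = E - R(Z - Z_left, Z - Z_down)
        p, q = Z - Z_left, Z - Z_down
        f = E - R(p, q)
        df = -dR_dZ(p, q)              # dp/dZ = dq/dZ = 1
        return Z - f / (df if abs(df) > eps else eps)

    # Lambertian reflectance for a light source at (0, 0, 1)
    R = lambda p, q: 1.0 / math.sqrt(1 + p * p + q * q)
    dR = lambda p, q: -(p + q) / (1 + p * p + q * q) ** 1.5
    print(newton_step(E=0.9, Z=0.5, Z_left=0.4, Z_down=0.4, R=R, dR_dZ=dR))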


Analysis And Extension Of Model-Based Software Executives, Keith E. Lewis Jan 1995

Theses and Dissertations

This research developed a comprehensive description of the simulation environment of Architect, a domain-oriented application composition system being developed at the Air Force Institute of Technology to explore new software engineering technologies. The description combines information from several previous research efforts and Architect's source code into a single, comprehensive document. A critical evaluation of the simulation environment was also performed, identifying improvements and modifications that enhance Architect's application execution capabilities by reducing complexity and execution time. The analysis was then taken one step further by presenting extensions to the current simulation environment. The extensions included investigating the feasibility of mixed-mode …


Visage: Improving The Ballistic Vulnerability Modeling And Analysis Process, Brett F. Grimes Jan 1995

Theses and Dissertations

The purpose of this thesis was to improve the process of modeling and analyzing ballistic vulnerability data. This was accomplished by addressing two of the more urgent needs of vulnerability analysts: the ability to display fault tree data and to edit target descriptions. A vulnerability data visualization program called VISAGE was modified to meet these needs. VISAGE was originally created to preview static shotline plots and subsequently grew into a full-featured visualization package for vulnerability target descriptions and analysis data. The next logical step in the program's evolution was to include the needed editing and fault tree display capabilities. The …


Rule Extraction: From Neural Architecture To Symbolic Representation, Gail A. Carpenter, Ah-Hwee Tan Jan 1995

Research Collection School Of Computing and Information Systems

This paper shows how knowledge, in the form of fuzzy rules, can be derived from a supervised learning neural network called fuzzy ARTMAP. Rule extraction proceeds in two stages: pruning, which simplifies the network structure by removing excessive recognition categories and weights; and quantization of continuous learned weights, which allows the final system state to be translated into a usable set of descriptive rules. Three benchmark studies illustrate the rule extraction methods: (1) Pima Indian diabetes diagnosis, (2) mushroom classification and (3) DNA promoter recognition. Fuzzy ARTMAP and ART-EMAP are compared with the ADAP algorithm, the k nearest neighbor system, …
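
As a taste of the quantization stage only (an illustrative sketch, not the paper's exact procedure; the feature names are made up), continuous learned weights can be binned into a few verbal levels so that each recognition category reads off as a rule antecedent:

    LEVELS = ["low", "medium", "high"]

    def quantize(weights, levels=LEVELS):
        # bin each weight in [0, 1] into one of len(levels) named levels
        n = len(levels)
        return [levels[min(int(w * n), n - 1)] for w in weights]

    category = {"glucose": 0.92, "bmi": 0.55, "age": 0.18}
    antecedent = dict(zip(category, quantize(category.values())))
    print(antecedent)
    # -> {'glucose': 'high', 'bmi': 'medium', 'age': 'low'}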