
Full-Text Articles in Physical Sciences and Mathematics

The Study Of Computer Science Concepts Through Game Play, Benjamin M. Weber Jan 1993

All Computer Science and Engineering Research

No abstract provided.


Neural Network Diagnosis Of Malignant Melanoma From Color Images, Fikret Erçal, C. Chawla, William V. Stoecker, Randy Hays Moss Jan 1993

Computer Science Technical Reports

Malignant melanoma is the deadliest form of all skin cancers. Approximately 32,000 new cases of malignant melanoma were diagnosed in 1991, with approximately 80 percent of patients expected to survive five years [1]. Fortunately, if detected early, even malignant melanoma may be treated successfully. Thus, in recent years, there has been a rising interest in the automated detection and diagnosis of skin cancer, particularly malignant melanoma [2]. In this paper, we present a novel neural network approach for the automated separation of melanoma from three other benign categories of tumors which exhibit melanoma-like characteristics. Our approach is based on devising …


Clock Partitioning For Testability, Kent L. Einspahr, Sharad C. Seth, Vishwani D. Agrawal Jan 1993

CSE Conference and Workshop Papers

An implementation of a design for testability model for sequential circuits is presented. The flip-flops in a sequential circuit are partitioned to reduce the number of cycles and the path lengths in each partition, thereby reducing the complexity of test generation. The implementation includes a Podem-based test generator. Preliminary results using the Contest sequential test generator are presented.


Planning And Certifying Software System Reliability, J. H. Poore, Harlan D. Mills, D. Mutchler Jan 1993

The Harlan D. Mills Collection

No abstract provided.


Syntactic Segmentation And Labeling Of Digitized Pages From Technical Journals, Mukkai Krishnamoorthy, George Nagy, Sharad C. Seth, Mahesh Viswanathan Jan 1993

School of Computing: Faculty Publications

Alternating horizontal and vertical projection profiles are extracted from nested sub-blocks of scanned page images of technical documents. The thresholded profile strings are parsed using the compiler utilities Lex and Yacc. The significant document components are demarcated and identified by the recursive application of block grammars. Backtracking for error recovery and branch and bound for maximum-area labeling are implemented with Unix Shell programs. Results of the segmentation and labeling process are stored in a labeled X-Y tree. It is shown that families of technical documents that share the same layout conventions can be readily analyzed. More than 20 types of …
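
As a rough illustration of the profile-extraction step this abstract describes, the sketch below projects a binary page block along one axis and thresholds the profile into a token string that a block grammar could then parse. The array layout, threshold value, and 'h'/'l' token alphabet are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of projection-profile extraction and thresholding.
# Assumptions (not from the paper): the page is a binary NumPy array
# (1 = black pixel); the threshold and token alphabet are illustrative.
import numpy as np

def profile_string(block, axis, threshold):
    """Project a binary block along one axis and threshold the profile
    into a string of 'h' (above threshold) and 'l' (at or below) tokens."""
    profile = block.sum(axis=axis)
    return "".join("h" if v > threshold else "l" for v in profile)

# Example: a tiny synthetic "page" with two text lines separated by white space.
page = np.zeros((10, 20), dtype=int)
page[1:3, 2:18] = 1   # first "text line"
page[6:8, 2:18] = 1   # second "text line"

horizontal = profile_string(page, axis=1, threshold=0)  # one token per row
print(horizontal)  # 'lhhlllhhll' -> runs of 'h' mark candidate text lines
```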


So Far (Schematically) Yet So Near (Semantically), Amit P. Sheth, Vipul Kashyap Jan 1993

Kno.e.sis Publications

In a multidatabase system, schematic conflicts between two objects are usually of interest only when the objects have some semantic affinity. In this paper we try to reconcile the two perspectives. We first define the concept of semantic proximity and provide a semantic taxonomy. We then enumerate and classify the schematic and data conflicts. We discuss possible semantic similarities between two objects that have various types of schematic and data conflicts. Issues of uncertain information and inconsistent information are also addressed.


The Extent Of Multimedia Computer Applications In The Business World, Mei Yee Lim Jan 1993

Honors Theses

The world, or at least I, was all agog when touch screens arrived. My initial contact with touch screens was at a science exhibition in Menara MPPJ, Petaling Jaya, Malaysia in July 1987. That particular exhibit, which encouraged visitors to participate and experiment, was a graphics program. Visitors could use their fingers to select the drawing tool or the color they wanted (much like the tools on a MacDraw screen on a Macintosh) and to move around the screen to draw anything they wanted. That added to my fascination with the world of …


Functional Representation And Reasoning About The F/A-18 Aircraft Fuel System, M. Pegah, J. Sticklen, William E. Bond Jan 1993

Computer Science Faculty Research & Creative Works

Functional reasoning, a subfield of model-based reasoning, is discussed. This approach uses abstractions of a device's purpose to index behaviors that achieve that purpose. Functional modeling, a variation on this method, also uses simulation as a core reasoning strategy. The complex causal knowledge of a device is decomposed along functional lines, and a causal story of how the device will operate in a particular situation, given stated boundary conditions, is then composed. The application of the functional approach to modeling the fuel system of an F/A-18 aircraft is described. The representation of the F/A-18 fuel system includes 89 component devices, 92 …


On Transactional Workflows, Amit P. Sheth, Marek Rusinkiewicz Jan 1993

Kno.e.sis Publications

The basic transaction model has evolved over time to incorporate more complex transaction structures and to take advantage of the semantics of higher-level operations that cannot be seen at the level of page reads and writes. Well known examples of such extended transaction models include nested and multi-level transactions. A number of relaxed transaction models have been defined in the last several years that permit a controlled relaxation of transaction isolation and atomicity to better match the requirements of various database applications. Correctness criteria other than global serializability have also been proposed. Several examples of extended/relaxed transaction models are …


The Washington University Multimedia System, William D. Richard, Jerome R. Cox Jr., Brian Gottlieb, Ken Krieger Jan 1993

All Computer Science and Engineering Research

The Washington University Multimedia System (MMS) is a complete multimedia system capable of transmitting and receiving video, audio, and radiological images, in addition to normal network traffic, over the Washington University broadband ATM network. The MMS consists of an ATMizer and three multimedia subsystems. The ATMizer implements the host interface, the interface to the ATM network, and the interface to the three multimedia subsystems. The video subsystem encodes and decodes JPEG-compressed video using two hardware compression engines. The audio subsystem encodes and decodes CD-quality stereo audio. The high-speed radiological image subsystem reformats radiological image data transmitted by a dedicated …


Research Proposal: Preference Acquisition Through Reconciliation Of Inconsistencies, Nilesh L. Jain Jan 1993

All Computer Science and Engineering Research

The quality of performance of a decision-support system (or an expert system) is determined to a large extent by its underlying preference model (or knowledge base). The difficulties in preference and knowledge acquisition make them a major focus of current research in decision-support and expert systems. Researchers have used various concepts to develop promising acquisition techniques. One of the concepts used is knowledge maintenance, where the knowledge base is changed in response to incorrect or inadequate performance by the expert system. This dissertation investigates a preference acquisition technique based on the reconciliation of inconsistencies between the preference model and the …


Objective Evaluation Of Radiation Treatment Plans, Nilesh L. Jain, Michael G. Kahn Jan 1993

All Computer Science and Engineering Research

The evaluation of radiation treatment plans involves making trade-offs among doses delivered to the tumor volumes and nearby normal tissues. Evaluating state-of-the-art three-dimensional (3D) plans is a difficult task because of the huge amount of planning data that needs to be deciphered. Multiattribute utility theory provides a methodology for specifying trade-offs and selecting the optimal plan from many competing plans. Using multiattribute utility theory, we are developing a clinically meaningful objective plan-evaluation model for 3D radiation treatment plans. Our model incorporates three of the factors involved in radiation treatment evaluation - treatment preferences of the radiation oncologist, clinical condition of …
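
For readers unfamiliar with multiattribute utility theory, the sketch below ranks two hypothetical plans with an additive model of the general form U = Σ wᵢ·uᵢ(xᵢ). The attributes, weights, and single-attribute utility functions are invented for illustration and are not the plan-evaluation model described in the paper.

```python
# Hedged sketch of an additive multiattribute utility model for ranking plans.
# The attributes, weights, and single-attribute utilities below are invented
# for illustration; the paper's actual model may differ (e.g., be non-additive).

def plan_utility(attributes, weights, utilities):
    """U(plan) = sum_i w_i * u_i(x_i), with the weights summing to 1."""
    return sum(weights[a] * utilities[a](x) for a, x in attributes.items())

weights = {"tumor_dose": 0.5, "cord_dose": 0.3, "lung_dose": 0.2}
utilities = {
    "tumor_dose": lambda d: min(d / 70.0, 1.0),       # more dose to tumor is better
    "cord_dose": lambda d: max(1.0 - d / 45.0, 0.0),  # less dose to spinal cord is better
    "lung_dose": lambda d: max(1.0 - d / 20.0, 0.0),  # less mean lung dose is better
}

plan_a = {"tumor_dose": 66.0, "cord_dose": 30.0, "lung_dose": 12.0}
plan_b = {"tumor_dose": 70.0, "cord_dose": 44.0, "lung_dose": 18.0}
best = max([plan_a, plan_b], key=lambda p: plan_utility(p, weights, utilities))
print(best)   # the plan with the higher aggregate utility
```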


Teaching A Smarter Learner, Sally A. Goldman, H. David Mathias Jan 1993

All Computer Science and Engineering Research

We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to …


Improving The Speed Of A Distributed Checkpointing Algorithm, Sachin Garg, Kenneth F. Wong Jan 1993

All Computer Science and Engineering Research

This paper shows how Koo and Toueg's distributed checkpointing algorithm can be modified so as to substantially reduce the average message volume. It attempts to avoid O(n²) messages by using dependency knowledge to reduce the number of checkpoint request messages. Lemmas on consistency and termination are also included.
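
A minimal sketch of the dependency idea the abstract refers to, under the assumption that each process records which processes it has received messages from since its last checkpoint and sends checkpoint requests only to those, rather than to all n-1 other processes. The class and method names are illustrative, not the actual protocol.

```python
# Minimal sketch: a process asks only the processes it has received messages
# from since its last checkpoint to also take a checkpoint, instead of
# contacting all n processes. Names and structure are illustrative only.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.dependents = set()   # senders we received from since last checkpoint

    def on_receive(self, sender_pid, message):
        self.dependents.add(sender_pid)

    def checkpoint_requests(self):
        """Return the set of processes that must be asked to checkpoint too."""
        requests, self.dependents = self.dependents, set()
        return requests

p = Process(pid=0)
p.on_receive(3, "m1")
p.on_receive(7, "m2")
print(p.checkpoint_requests())  # {3, 7}: two requests instead of n - 1
```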


Trainrec: A System For Training Feedforward & Simple Recurrent Networks Efficiently And Correctly, Barry L. Kalman, Stan C. Kwasny Jan 1993

All Computer Science and Engineering Research

TRAINREC is a system for training feedforward and recurrent neural networks that incorporates several ideas. It uses the conjugate-gradient method, which is demonstrably more efficient than traditional backward error propagation. We assume epoch-based training and derive a new error function having several desirable properties absent from the traditional sum-of-squared-error function. We argue for skip (shortcut) connections where appropriate and for a sigmoid yielding values over the [-1, 1] interval. The input feature space is often over-analyzed, but by using singular value decomposition, input patterns can be conditioned for better learning, often with a reduced number of input units. Recurrent …
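
A minimal sketch of the singular-value-decomposition conditioning step mentioned above, assuming the input patterns are stored as rows of a NumPy array; the rank cutoff is illustrative and not a value from the paper.

```python
# Minimal sketch of SVD-based input conditioning: project the input patterns
# onto the leading right-singular directions, reducing the number of input
# units the network needs. The rank cutoff below is illustrative.
import numpy as np

def condition_inputs(X, rank):
    """X: (n_patterns, n_features). Returns a (n_patterns, rank) input matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:rank].T          # coordinates in the top-`rank` directions

X = np.random.randn(200, 50)        # 200 patterns, 50 raw input features
X_small = condition_inputs(X, rank=10)
print(X_small.shape)                # (200, 10): fewer input units to train
```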


A Characterization Of The Computational Power Of Rule-Based Visualization, Kenneth C. Cox, Gruia-Catalin Roman Jan 1993

All Computer Science and Engineering Research

Declarative visualization is a paradigm in which the process of visualization is treated as a mapping from some domain (typically a program) to an image. One means of declaring such mappings is through the use of rules which specify the relationship between the domain and the image. This paper examines the computational power of such rule-based mappings. Computational power is measured using three separate criteria. The first of these uses the Chomsky hierarchy, in which computational power is treated as string-acceptance; with this criterion we are able to show that certain rule-based models are equivalent in power to Turing machines. …


Logical Inference In Symmetric Connectionist Networks, Gadi Pinkas Jan 1993

All Computer Science and Engineering Research

This work delineates the relation between logic and symmetric neural networks. The motivation is two-fold: 1) to study the capabilities and limitations of connectionist networks with respect to knowledge representation; and 2) to develop a new kind of inference engine that is expressive, massively parallel, capable of coping with nonmonotonic or noisy knowledge and capable of learning. The thesis shows that propositional logic can be implemented efficiently in networks where hidden units allow the representation of arbitrary constraints. An inference engine is constructed which can obtain its knowledge either by compiling symbolic rules or by learning them inductively from examples. …
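
As a rough illustration of the general idea (not necessarily the thesis's construction), the sketch below encodes a small propositional formula as a quadratic penalty whose zero-energy states are exactly its satisfying assignments, which is the kind of energy function a symmetric (Hopfield-style) network can minimize.

```python
# Textbook-style illustration: a propositional constraint as an energy function
# whose minima are the satisfying assignments. Not necessarily the encoding
# used in the thesis.
from itertools import product

def energy(a, b, c):
    """Penalty for (a OR b) AND (NOT b OR c), with a, b, c in {0, 1}.
    Each clause contributes 0 when satisfied and 1 when violated."""
    return (1 - a) * (1 - b) + b * (1 - c)

minima = [bits for bits in product([0, 1], repeat=3) if energy(*bits) == 0]
print(minima)   # exactly the satisfying assignments of the formula
```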


A Comparison Study Of The Pen And The Mouse In Editing Graphic Diagrams, Ajay Apte, Takayuki Dan Kimura Jan 1993

All Computer Science and Engineering Research

We report the results of an experiment comparing the merits of the pen and the mouse as drawing devices. For this study a pen-based graphic diagram editor equipped with a shape recognition algorithm was developed on GO's PenPoint operating system. A commercially available drawing program on NeXT was used for mouse-based editing. Twelve CS students were chosen as subjects and asked to draw four different diagrams of similar complexity: two with a pen and the other two with a mouse. The diagrams were chosen from the categories of dataflow visual languages, Petri nets, flowcharts, and state diagrams. The results indicate …


A Taxonomy Of Program Visualization Systems, Gruia-Catalin Roman, Kenneth C. Cox Jan 1993

All Computer Science and Engineering Research

Program visualization may be viewed as a mapping from programs to graphical representations. This simple idea provides a formal framework for a new taxonomy of program visualization systems. The taxonomy is compared briefly against previous attempts to organize the program visualization field. The taxonomic principles and their motivation are explained in detail with reference to a number of existing systems, especially Balsa, Tango, and Pavane.


Asking Questions To Minimize Errors, Nader H. Bshouty, Sally A. Goldman, Thomas R. Hancock, Sleiman Matar Jan 1993

All Computer Science and Engineering Research

A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation, such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation, the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes …


Fril - A Fractal Intermediate Language, Ron Cytron, David Shields Jan 1993

All Computer Science and Engineering Research

This document describes the motivation behind FrIL, an intermediate language for a compiler's "middle-end," along with a description of the language and experience using it. FrIL has successfully supported a two-semester compiler construction sequence, where the first semester included code generation from a C-like language and the second semester included advanced data flow analysis and program transformation.


The Pessimism Behind Optimistic Simulation, George Varghese, Roger D. Chamberlain, William E. Weihl Jan 1993

All Computer Science and Engineering Research

In this paper we make an analogy between the time that storage must be maintained in an optimistic simulation and the blocking time in a conservative simulation. By exploring this analogy, we design two new Global Virtual Time (GVT) protocols for Time Warp systems. The first simple protocol is based on the null message scheme proposed for clock advancement in some conservative approaches; this yields what we call Local Guaranteed Time. Our main contribution is a second new protocol that is inspired by Misra's circulating marker scheme for deadlock recovery in conservative simulations, and appears to have advantages over previous …
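
For context, the sketch below computes the quantity any GVT protocol estimates: the minimum over every process's local virtual time and the timestamps of messages still in transit. It is a snapshot-style illustration of the definition only, not either of the two protocols proposed in the paper.

```python
# Minimal sketch of the quantity a GVT protocol estimates. Events (and saved
# state) with timestamps below GVT can never be rolled back, so their storage
# may be reclaimed (fossil collection). Illustration only, not the paper's
# Local Guaranteed Time or marker-based protocol.

def global_virtual_time(local_clocks, in_transit_timestamps):
    """GVT = min over all local virtual times and all in-transit timestamps."""
    return min(min(local_clocks),
               min(in_transit_timestamps, default=float("inf")))

gvt = global_virtual_time(local_clocks=[120, 95, 143], in_transit_timestamps=[101])
print(gvt)   # 95: storage for events with timestamps below 95 may be reclaimed
```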


The Multicomputer Toolbox - First-Generation Scalable Libraries, Anthony Skjellum, Alvin Leung, Steven G. Smith, Robert D. Falgout Jan 1993

Northeast Parallel Architecture Center

"First-generation" scalable parallel libraries have been achieved, and are maturing, within the Multicomputer Toolbox. The Toolbox includes sparse, dense, and iterative linear algebra, a stiff ODE/DAE solver, and an open software technology for additional numerical algorithms, plus an inter-architecture Makefile mechanism for building applications. We have devised C-based strategies for useful classes of distributed data structures, including distributed matrices and vectors. The underlying Zipcode message-passing system has enabled process-grid abstractions of multicomputers, communication contexts, and process groups, all characteristics needed for building scalable libraries and scalable application software. We describe the data-distribution-independent approach to building scalable libraries, which is needed so …

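A minimal sketch of the data-distribution-independent idea mentioned above: library code asks a distribution object where a global index lives instead of hard-coding a particular layout. The toy block distribution and all names are invented for illustration.

```python
# Minimal sketch: library code queries a distribution object rather than
# assuming a fixed data layout. The class and the block distribution are
# invented for illustration; they are not the Toolbox's actual interfaces.

class BlockDistribution:
    def __init__(self, n_global, n_procs):
        self.block = -(-n_global // n_procs)     # ceiling division

    def owner(self, i):
        return i // self.block                   # process that stores element i

    def local_index(self, i):
        return i % self.block                    # offset within that process

dist = BlockDistribution(n_global=100, n_procs=4)   # blocks of 25 elements
print(dist.owner(63), dist.local_index(63))          # 2 13
```
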

A Message Passing Interface For Parallel And Distributed Computing, Salim Hariri, Jongbaek Park, Fang-Kuo Yu, Manish Parashar Jan 1993

Northeast Parallel Architecture Center

The proliferation of high performance workstations and the emergence of high speed networks have attracted a lot of interest in parallel and distributed computing (PDC). We envision that PDC environments with supercomputing capabilities will be available in the near future. However, a number of hardware and software issues have to be resolved before the full potential of these PDC environments can be exploited. The presented research has the following objectives: (1) to characterize the message-passing primitives used in parallel and distributed computing; (2) to develop a communication protocol that supports PDC; and (3) to develop an architectural support for PDC …


An Interpretive Framework For Application Performance Prediction, Manish Parashar, Salim Hariri, Tomasz Haupt, Geoffrey C. Fox Jan 1993

Northeast Parallel Architecture Center

Software development in a parallel/distributed environment is a non-trivial task and depends greatly on the availability of appropriate support in terms of development tools and environments. Performance prediction/evaluation tools form a critical part of any software development environment, as they enable the developer to visualize the effects of various design choices on the performance of the application. This paper presents an interpretive model for a source-driven performance prediction framework. A prototype framework based on the proposed model has been developed for the iPSC/860 system. Numerical results obtained on this system are presented. These results confirm the potential of interpretive …


A Compilation Approach For Fortran 90d/Hpf Compilers On Distributed Memory Mimd Computers, Zeki Bozkus, Alok Choudhary, Geoffrey C. Fox, Tomasz Haupt Jan 1993

Northeast Parallel Architecture Center

This paper describes a compilation approach for a Fortran 90D/HPF compiler, a source-to-source parallel compiler for distributed memory systems. Unlike Fortran 77 parallelizing compilers, a Fortran 90D/HPF compiler does not parallelize sequential constructs; only parallelism expressed by Fortran 90D/HPF parallel constructs is exploited. The methodology for parallelizing Fortran programs, including computation partitioning, communication detection and generation, and the run-time support for the compiler, is discussed. An example of Gaussian elimination is used to illustrate the compilation techniques with performance results.
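
As a rough illustration of the computation-partitioning step, the sketch below applies the owner-computes rule to a one-dimensional assignment under a block distribution: each processor executes only the iterations whose left-hand-side element it owns, and references to non-local right-hand-side elements become communication. The distribution, array names, and statement are assumptions for illustration, not the compiler's actual algorithm.

```python
# Minimal sketch of the owner-computes rule used by data-parallel compilers:
# a processor executes only the iterations whose left-hand-side element it
# owns; non-local right-hand-side references become communication.
# Block distribution and array names are illustrative only.

def my_iterations(n, me, block):
    """Iterations i of 'A(i) = B(i-1) + C(i)' assigned to processor `me`."""
    return [i for i in range(1, n) if (i // block) == me]

n, n_procs = 16, 4
block = n // n_procs                       # 4 elements per processor
for me in range(n_procs):
    iters = my_iterations(n, me, block)
    # B(i-1) is non-local exactly when i-1 falls in another processor's block
    recv = [i - 1 for i in iters if (i - 1) // block != me]
    print(me, iters, "receives B at", recv)   # nearest-neighbor communication
```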


Hierarchical Tree-Structures As Adaptive Meshes, David J. Edelsohn Jan 1993

Northeast Parallel Architecture Center

Introduction: Two basic types of simulations exist for modeling systems of many particles: grid-based (point particles indirectly interacting with one another through the potential calculated from equivalent particle densities on a mesh) and particle-based (point particles directly interacting with one another through potentials at their positions calculated from the other particles in the system). Grid-based solvers traditionally model continuum problems, such as fluid and gas systems, and mixed particle-continuum systems. Particle-based solvers find more use modeling discrete systems such as stars within galaxies or other rarefied gases. Many different physical systems, including electromagnetic interactions, gravitational interactions, and fluid vortex …


A Methodology For Developing High Performance Computing Models: Storm-Scale Weather Prediction, Nikos Chrisochoides, Kelvin Droegemeier, Geoffrey C. Fox, Kim Mills, Ming Xue Jan 1993

Northeast Parallel Architecture Center

A methodology for developing future generations of a storm-scale weather prediction model for Massively Parallel Processing is described. The forecast model is the Advanced Regional Prediction System (ARPS), a three-dimensional, fully compressible, non-hydrostatic predictive model. In the short term, the computational goals include developing a portable, scalable model for distributed memory SIMD and MIMD architectures, while preserving a high degree of modularity to support rapid design and validation, maintainability, educational goals and operational testing. Longer term computational goals include a parallel adaptive mesh refinement scheme. A Fortran D/High Performance Fortran version of the ARPS provides portability in the current version of …


Fortran 90d/Hpf Compiler For Distributed Memory Mimd Computers: Design, Implementation, And Performance Results, Zeki Bozkus, Alok Choudhary, Geoffrey C. Fox, Tomasz Haupt Jan 1993

Northeast Parallel Architecture Center

Fortran 90D/HPF is a data parallel language with special directives to enable users to specify data alignment and distributions. This paper describes the design and implementation of a Fortran 90D/HPF compiler. Techniques for data and computation partitioning, communication detection and generation, and the run-time support for the compiler are discussed. Finally, initial performance results for the compiler are presented which show that the code produced by the compiler is portable, yet efficient. We believe that the methodology for processing data distribution, computation partitioning, communication system design, and the overall compiler design can be used by the implementors of HPF compilers.


Runtime Compilation Techniques For Data Partitioning And Communication Schedule Reuse, Ravi Ponnusamy, Joel Saltz, Alok Choudhary Jan 1993

Northeast Parallel Architecture Center

In this paper, we describe two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user-specified mapping procedure via a set of compiler directives. The directives allow the user to use program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed results from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance …
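
A minimal sketch of the inspector/executor pattern behind the schedule reuse mentioned above: an inspector scans the indirection array once to build a gather schedule (which off-processor elements are needed), and the executor reuses that schedule on later iterations as long as the indirection array is unchanged. All names and the ownership function are invented for illustration.

```python
# Minimal sketch of the inspector/executor pattern for an irregular gather.
# The function names, ownership rule, and fetch callback are illustrative
# only; they are not the runtime system described in the paper.

def inspector(index_array, owner, me):
    """Return the off-processor global indices this processor must gather."""
    return sorted({g for g in index_array if owner(g) != me})

def executor(schedule, fetch, local_value, index_array, owner, me):
    """Perform the irregular gather using a previously built schedule."""
    ghost = {g: fetch(g) for g in schedule}          # one communication phase
    return [ghost[g] if owner(g) != me else local_value(g) for g in index_array]

# Toy setup: 2 processors, block ownership of 8 elements, processor 0's view.
owner = lambda g: g // 4
index_array = [0, 5, 2, 7, 5]                        # irregular accesses
schedule = inspector(index_array, owner, me=0)       # built once, then reused
values = executor(schedule, fetch=lambda g: 100 + g,
                  local_value=lambda g: g, index_array=index_array,
                  owner=owner, me=0)
print(schedule, values)   # [5, 7] [0, 105, 2, 107, 105]
```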