University of Texas at El Paso

Full-Text Articles in Physical Sciences and Mathematics

Space-Time Assumptions Behind NP-Hardness Of Propositional Satisfiability, Olga Kosheleva, Vladik Kreinovich Jun 2013

Departmental Technical Reports (CS)

For some problems, we know feasible algorithms for solving them. Other computational problems (such as propositional satisfiability) are known to be NP-hard, which means that, unless P=NP (which most computer scientists believe to be impossible), no feasible algorithm is possible for solving all possible instances of the corresponding problem. Most usual proofs of NP-hardness, however, use the Turing machine -- a very simplified version of a computer -- as a computation model. While the Turing machine has been convincingly shown to be adequate to describe what can be computed in principle, it is much less intuitive that these oversimplified machines are …


Beyond Traditional Chemical Kinetics Formulas: Group-Theoretic Approach, Vladik Kreinovich Jun 2013

Departmental Technical Reports (CS)

According to the traditional formulas of chemical kinetics, the rate is proportional to the product of the concentrations of the reagents. This formula leads to a reasonable description of interactions both in chemistry and in other disciplines (e.g., in ecology). However, in many cases, these formulas are only approximate. Several semi-empirical formulas have been designed to more accurately describe the interaction rate. The problem is that most of these formulas are purely empirical: they lack a convincing theoretical explanation. In this paper, we show that a group-theoretic approach -- taking into account natural symmetries of the systems -- leads to the desired …
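A minimal Python sketch of the traditional kinetics formula that serves as the starting point here, simulating a reaction A + B → C; the rate constant k, the step size, and the initial concentrations are illustrative assumptions:

    # Traditional chemical kinetics: the reaction rate is proportional to the
    # product of the concentrations of the reagents (rate = k * [A] * [B]).
    # The rate constant k, step size, and initial concentrations are illustrative.
    def simulate(a0, b0, k, dt=1e-3, steps=10_000):
        a, b, c = a0, b0, 0.0
        for _ in range(steps):
            rate = k * a * b          # the traditional rate formula
            a -= rate * dt
            b -= rate * dt
            c += rate * dt
        return a, b, c

    print(simulate(a0=1.0, b0=0.5, k=2.0))   # concentrations of A, B, C at t = 10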


√(x² + μ) Is The Most Computationally Efficient Smooth Approximation To |x|: A Proof, Carlos Ramirez, Reinaldo Sanchez, Vladik Kreinovich, Miguel Argaez Jun 2013

Departmental Technical Reports (CS)

In many practical situations, we need to minimize an expression of the type |c1| + ... + |cn|. The problem is that most efficient optimization techniques use the derivative of the objective function, but the function |x| is not differentiable at 0. To make optimization efficient, it is therefore reasonable to approximate |x| by a smooth function. We show that, in some reasonable sense, the most computationally efficient smooth approximation to |x| is the function √(x² + μ), a function which has indeed been successfully used in such optimization.
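A minimal Python sketch of this approximation; the value of μ below is an illustrative assumption (smaller μ gives a tighter fit to |x|):

    import math

    def smooth_abs(x, mu=1e-4):
        # sqrt(x^2 + mu): smooth everywhere, and -> |x| as mu -> 0
        return math.sqrt(x * x + mu)

    def smooth_abs_derivative(x, mu=1e-4):
        # the derivative x / sqrt(x^2 + mu) is defined even at x = 0,
        # which is what gradient-based optimization methods need
        return x / math.sqrt(x * x + mu)

    for x in (-1.0, -0.01, 0.0, 0.01, 1.0):
        print(x, abs(x), smooth_abs(x))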


Towards A Physically Meaningful Definition Of Computable Discontinuous And Multi-Valued Functions (Constraints), Martine Ceberio, Olga Kosheleva, Vladik Kreinovich May 2013

Departmental Technical Reports (CS)

In computable mathematics, there are known definitions of computable numbers, computable metric spaces, computable compact sets, and computable functions. A traditional definition of a computable function, however, covers only continuous functions. In many applications (e.g., in phase transitions), physical phenomena are described by discontinuous or multi-valued functions (a.k.a. constraints). In this paper, we provide a physics-motivated definition of computable discontinuous and multi-valued functions, and we analyze properties of this definition.


Peak-End Rule: A Utility-Based Explanation, Olga Kosheleva, Martine Ceberio, Vladik Kreinovich May 2013

Departmental Technical Reports (CS)

In many practical situations, people judge their overall experience by taking into account only the peak and the last levels of pleasantness or unpleasantness. While this peak-end rule is empirically supported by numerous psychological experiments, it seems to contradict our general theoretical ideas about people's preferences. In this paper, we show that, contrary to this impression, the peak-end rule can be justified based on the main ideas of the traditional utility-based decision theory.
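As a toy illustration of the rule itself (not of the paper's utility-based model), here is a Python sketch in which the remembered rating of an episode is the average of its peak and end levels; the specific combination rule and the sample data are illustrative assumptions:

    # Peak-end rule (toy version): judge an episode only by its most intense
    # moment and its final moment; here we average the two. The averaging and
    # the sample data are illustrative assumptions.
    def peak_end_rating(levels):
        peak = max(levels, key=abs)   # most intense moment (by magnitude)
        end = levels[-1]              # last moment
        return (peak + end) / 2

    # a longer episode with a milder ending is remembered as less unpleasant
    print(peak_end_rating([-1, -5, -2]))        # ends badly   -> -3.5
    print(peak_end_rating([-1, -5, -2, -1]))    # milder end   -> -3.0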


Using Symmetries (Beyond Geometric Symmetries) In Chemical Computations: Computing Parameters Of Multiple Binding Sites, Andres Ortiz, Vladik Kreinovich May 2013

Departmental Technical Reports (CS)

We show how group-theoretic ideas can be naturally used to generate efficient algorithms for scientific computations. The general group-theoretic approach is illustrated on the example of determining, from the experimental data, the dissociation constants related to multiple binding sites. We also explain how the general group-theoretic approach is related to the standard (backpropagation) neural networks; this relation justifies the potential universal applicability of the group-theoretic approach.


Processing Quantities With Heavy-Tailed Distribution Of Measurement Uncertainty: How To Estimate The Tails Of The Results Of Data Processing, Michal Holčapek, Vladik Kreinovich May 2013

Departmental Technical Reports (CS)

Measurements are never absolutely accurate, so it is important to estimate how the measurement uncertainty affects the result of data processing. Traditionally, this problem is solved under the assumption that the probability distributions of measurement errors are normal -- or at least are concentrated, with high certainty, on a reasonably small interval. In practice, the distribution of measurement errors is sometimes heavy-tailed, meaning that very large values have a non-negligible probability. In this paper, we analyze the corresponding problem of estimating the tail of the result of data processing in such situations.


Necessary And Sufficient Conditions For Generalized Uniform Fuzzy Partitions, Michal Holčapek, Irina Perfilieva, Vilém Novák, Vladik Kreinovich May 2013

Departmental Technical Reports (CS)

The fundamental concept in the theory of the fuzzy transform (F-transform) is that of a fuzzy partition. The original definition assumes that every two neighboring fuzzy subsets overlap in such a way that the sum of membership degrees at each point is equal to 1. This condition can be generalized to obtain a denser fuzzy partition, which leads to improved approximation properties of the F-transform. However, a problem arises: how can one effectively construct fuzzy partitions of this type? We use a generating function with special properties, and it is not immediately clear whether it really defines a general uniform fuzzy partition. In …
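For concreteness, here is a Python sketch of the classical case: a uniform fuzzy partition built from triangular membership functions on equidistant nodes, together with a numerical check of the condition that membership degrees sum to 1 at each point (the Ruspini condition); the node placement and the triangular generating function are illustrative choices:

    # Uniform fuzzy partition from triangular membership functions; nodes are
    # equidistant with spacing h. Node placement is an illustrative choice.
    def triangle(x, center, h):
        return max(0.0, 1.0 - abs(x - center) / h)

    h = 1.0
    nodes = [0.0, 1.0, 2.0, 3.0, 4.0]

    # Ruspini condition: at every point between the first and last node,
    # the membership degrees of the fuzzy subsets sum to 1
    for k in range(1, 16):
        x = 0.25 * k
        total = sum(triangle(x, c, h) for c in nodes)
        assert abs(total - 1.0) < 1e-12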


CJC: An Extensible Checker For The CleanJava Annotation Language, Cesar Yeep May 2013

Departmental Technical Reports (CS)

CleanJava is a formal annotation language for the Java programming language to support a Cleanroom-style functional program verification technique that views programs as mathematical functions. It needs a suite of support tools, including a checker that can parse annotations and check them for syntactic and static semantic correctness. The two key requirements of the checker are flexibility and extensibility. Since the language is still under development and refinement, the checker should be flexible, to facilitate language experimentation and accommodate language changes. It should also be extensible, to provide base code for developing more advanced support tools like an automated theorem prover. …


Towards A Better Understanding Of Space-Time Causality: Kolmogorov Complexity And Causality As A Matter Of Degree, Vladik Kreinovich, Andres Ortiz Apr 2013

Departmental Technical Reports (CS)

Space-time causality is one of the fundamental notions of modern physics; however, it is difficult to define in observational physical terms. Intuitively, the fact that a space-time event e=(t,x) can causally influence an event e'=(t',x') means that what we do in the vicinity of e changes what we observe at e'. If we had two copies of the Universe, we could perform some action at e in one copy but not in the other; if we then observed a difference at e', this would be an indication of causality. However, we only observe one Universe, in which we either perform …


Imprecise Probabilities In Engineering Analyses, Michael Beer, Scott Ferson, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification are complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements, and subjective information. This raises the question of whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems, which …


Full Superposition Principle Is Inconsistent With Non-Deterministic Versions Of Quantum Physics, Andres Ortiz, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Many practical systems are non-deterministic, in the sense that the available information about the initial states and control values does not uniquely determine the future states. For some such systems, it is important to take quantum effects into account. For that, we need to develop non-deterministic versions of quantum physics. In this paper, we show that for non-deterministic versions of quantum physics, we cannot require the superposition principle -- one of the main fundamental principles of modern quantum mechanics. Specifically, while we can consider superpositions of states corresponding to the same version of the future dynamics, it is not consistently possible to …


An Ontological Approach To Capture Data Provenance Across Multiple Platforms, Leonardo Salayandia Apr 2013

Departmental Technical Reports (CS)

The process of collecting and transforming data can extend across different platforms, both physical and digital. Capturing provenance that reflects the actions involved in such a process in a consistent manner can be difficult and involve the use of multiple tools. An approach based on formal ontologies and software engineering practices is presented to capture data provenance. The approach starts by creating ontologies about data collection and transformation processes. These ontologies, referred to as Workflow-Driven Ontologies, establish a consistent view of the process that is independent of the platform used to carry out the process. Next, software modules are generated, …


A New Analog Optical Processing Scheme For Solving NP-Hard Problems, Michael Zakharevich, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Many real-life problems are, in general, NP-hard, i.e., informally speaking, difficult to solve. To be more precise, saying that a problem p is NP-hard means that every problem from the class NP can be reduced to this problem p. Thus, if we have an efficient algorithm for solving one NP-hard problem, we can use this reduction to get a more efficient way of solving all the problems from the class NP. To speed up computations, it is reasonable to base them on the fastest possible physical process -- i.e., on light. It is known that analog optical processing indeed speeds up …


Likert-Scale Fuzzy Uncertainty From A Traditional Decision Making Viewpoint: It Incorporates Both Subjective Probabilities And Utility Information, Joe Lorkowski, Vladik Kreinovich Mar 2013

Departmental Technical Reports (CS)

One of the main methods for eliciting the values of the membership function μ(x) is to use Likert scales, i.e., to ask the user to mark his or her degree of certainty by an appropriate mark k on a scale from 0 to n and take μ(x) = k/n. In this paper, we show how to describe this process in terms of traditional decision making. Our conclusion is that the resulting membership degrees incorporate both probability and utility information. It is therefore not surprising that fuzzy techniques often work better than probabilistic techniques -- which only take into account the …
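A minimal Python sketch of this elicitation step; the scale size n and the sample marks are illustrative:

    # Likert-scale elicitation of membership degrees: a mark k on a scale
    # from 0 to n is turned into the degree mu(x) = k / n.
    def membership_degree(mark, n):
        if not 0 <= mark <= n:
            raise ValueError("mark must be between 0 and n")
        return mark / n

    n = 5
    for mark in range(n + 1):
        print(mark, membership_degree(mark, n))   # 0.0, 0.2, ..., 1.0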


Towards Model Fusion In Geophysics: How To Estimate Accuracy Of Different Models, Omar Ochoa, Aaron A. Velasco, Christian Servin Mar 2013

Departmental Technical Reports (CS)

In geophysics, we usually have several Earth models based on different types of data: seismic, gravity, etc. Each of these models captures some aspects of the Earth's structure. To get a more complete description of the Earth, it is desirable to "fuse" these models into a single one. To appropriately fuse the models, we need to know the accuracy of the different models. In this paper, we show that the traditional methods cannot be directly used to estimate these accuracies, and we propose a new method for such estimation.


Why ℓ1 Is A Good Approximation To ℓ0: A Geometric Explanation, Carlos Ramirez, Vladik Kreinovich, Miguel Argaez Mar 2013

Departmental Technical Reports (CS)

In practice, we usually have only partial information; as a result, several different possibilities are consistent with the given measurements and the given knowledge. For example, in geosciences, several possible density distributions are consistent with the measurement results. It is reasonable to select the simplest among such distributions. A general solution can be described, e.g., as a linear combination of basis functions. A natural way to define the simplest solution is to select the one for which the number of non-zero coefficients ci is the smallest. The corresponding "ℓ0-optimization" problem is non-convex and, therefore, difficult to …


For Describing Uncertainty, Ellipsoids Are Better Than Generic Polyhedra And Probably Better Than Boxes: A Remark, Olga Kosheleva, Vladik Kreinovich Mar 2013

Departmental Technical Reports (CS)

For a single quantity, the set of all possible values is usually an interval. An interval is easy to represent in a computer: e.g., we can store its two endpoints. For several quantities, the set of possible values may have an arbitrary shape. An exact description of this shape requires infinitely many parameters, so in a computer, we have to use a finite-parametric approximation family of sets. One of the widely used methods for selecting such a family is to pick a symmetric convex set and to use its images under all linear transformations. If we pick a unit ball, …
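A small Python sketch of this construction: an ellipsoid represented as the image of the unit ball under a linear transformation, E = {Ax + c : ||x|| ≤ 1}; the matrix A and the center c below are illustrative:

    import numpy as np

    # Ellipsoid as the image of the unit ball under a linear map:
    # E = { A @ x + c : ||x|| <= 1 }. A and c are illustrative.
    A = np.array([[2.0, 0.5],
                  [0.0, 1.0]])        # any nonsingular matrix
    c = np.array([1.0, -1.0])         # center

    def in_ellipsoid(p):
        # invert the map and test whether the preimage lies in the unit ball
        return np.linalg.norm(np.linalg.solve(A, p - c)) <= 1.0

    print(in_ellipsoid(np.array([1.0, -1.0])))   # the center: True
    print(in_ellipsoid(np.array([5.0,  5.0])))   # a far-away point: False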


Towards Fuzzy Method For Estimating Prediction Accuracy For Discrete Inputs, With Application To Predicting At-Risk Students, Xiaojing Wang, Martine Ceberio, Angel F. Garcia Contreras Mar 2013

Departmental Technical Reports (CS)

In many practical situations, we need, given the values of the observed quantities x1, ..., xn, to predict the value of a desired quantity y. To estimate the accuracy of a prediction algorithm f(x1, ..., xn), we need to compare the results of this algorithm's prediction with the actually observed values.

The value y usually depends not only on the values x1, ..., xn, but also on values of other quantities which we do not measure. As a result, even when we have the exact same values of the quantities x1, ..., xn, we may get somewhat different values of …


Estimating Third Central Moment C3 For Privacy Case Under Interval And Fuzzy Uncertainty, Ali Jalal-Kamali, Vladik Kreinovich Mar 2013

Departmental Technical Reports (CS)

Some probability distributions (e.g., Gaussian) are symmetric, while others (e.g., lognormal) are non-symmetric (skewed). How can we gauge the skewness? For symmetric distributions, the third central moment C3 = E[(x − E(x))³] is equal to 0; thus, this moment is used to characterize skewness. This moment is usually estimated, based on the observed (sample) values x1, ..., xn, as C3 = (1/n) * ((x1 − E)³ + ... + (xn − E)³), where E = (1/n) * (x1 + ... + xn). In many …
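A minimal Python sketch of this estimate; the sample data are illustrative:

    # Sample third central moment C3 = (1/n) * sum((x_i - E)^3),
    # where E is the sample mean; C3 near 0 suggests a symmetric sample.
    def third_central_moment(xs):
        n = len(xs)
        e = sum(xs) / n
        return sum((x - e) ** 3 for x in xs) / n

    print(third_central_moment([1, 2, 2, 3]))     # symmetric sample: 0.0
    print(third_central_moment([1, 1, 1, 10]))    # right-skewed sample: positive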


Data Anonymization That Leads To The Most Accurate Estimates Of Statistical Characteristics: Fuzzy-Motivated Approach, G. Xiang, S. Ferson, L. Ginzburg, L. Longpre, E. Mayorga, O. Kosheleva Mar 2013

Departmental Technical Reports (CS)

To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights.


How To Generate Worst-Case Scenarios When Testing Already Deployed Systems Against Unexpected Situations, Francisco Zapata, Ricardo Pineda, Martine Ceberio Mar 2013

Departmental Technical Reports (CS)

Before a complex system is deployed, it is tested -- but it is tested against a known operational mission, under several known operational scenarios. Once the system is deployed, new, possibly unexpected and/or uncertain operational scenarios emerge. It is desirable to develop methodologies to test the system against such scenarios. A possible methodology would be to generate the worst-case scenario that we can think of -- to understand, in principle, the behavior of the system. So, we face the question of generating such worst-case scenarios. In this paper, we provide some guidance on how to generate …


Security Games With Interval Uncertainty, Christopher Kiekintveld, Towhidul Islam, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

Security games provide a framework for allocating limited security resources in adversarial domains, and are currently used in applications including security at the LAX airport, scheduling for the Federal Air Marshals, and patrolling strategies for the U.S. Coast Guard. One of the major challenges in security games is finding solutions that are robust to uncertainty about the game model. Bayesian game models have been developed to model uncertainty, but algorithms for these games do not scale well enough for many applications, and the problem is NP-hard.

We take an alternative approach based on using intervals to model uncertainty in security …


F-Transform In View Of Aggregation Functions, Irina Perfilieva, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

The relationship between the discrete F-transform and aggregation functions is analyzed. We show that the discrete F-transform (direct or inverse) can be associated with a set of linear aggregation functions that respect a fuzzy partition of a universe. On the other hand, we discover conditions that should be added to a set of linear aggregation functions in order to obtain the discrete F-transform. Last but not least, the relationship between the two analyzed notions is based on a new (generalized) definition of a fuzzy partition without the Ruspini condition.


Brans-Dicke Scalar-Tensor Theory Of Gravitation May Explain Time Asymmetry Of Physical Processes, Olga Kosheleva, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

Most fundamental physical equations remain valid if we reverse the time order. Thus, if we start with a physical process (which satisfies these equations) and reverse the time order, the resulting process also satisfies all the equations and thus should also be physically reasonable. In practice, however, many physical processes are not reversible: e.g., a cup can break into pieces, but the pieces cannot magically come together and become a whole cup. In this paper, we show that the Brans-Dicke scalar-tensor theory of gravitation, one of the most widely used generalizations of Einstein's general relativity, is, in effect, time-asymmetric. This time-asymmetry …


Why Complex-Valued Fuzzy? Why Complex Values In General? A Computational Explanation, Olga Kosheleva, Vladik Kreinovich, Thavatchai Ngamsantivong Feb 2013

Departmental Technical Reports (CS)

In traditional fuzzy logic, we take as truth values all real numbers from the interval [0,1]. In some situations, this set is not fully adequate for describing expert uncertainty, so a more general set is needed. From the mathematical viewpoint, a natural extension of the real numbers is the set of complex numbers. Complex-valued fuzzy sets have indeed been successfully used in applications of fuzzy techniques. This practical success leaves us with a puzzling question: why have complex-valued degrees of belief, degrees which do not seem to have a direct intuitive meaning, been so successful? In this paper, we use …


Checking Monotonicity Is NP-Hard Even For Cubic Polynomials, Andrzej Pownuk, Luc Longpre, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

One of the main problems of interval computations is to compute the range of a given function over given intervals. In general, this problem is computationally intractable (NP-hard) -- that is why we usually compute an enclosure and not the exact range. However, there are cases when it is possible to feasibly compute the exact range; one of these cases is when the function is monotonic with respect to each of its variables. The monotonicity assumption holds when the derivatives at a midpoint are different from 0 and the intervals are sufficiently narrow; because of this, monotonicity-based estimates are often …
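A minimal Python sketch of the monotone case mentioned here: when a function is known to be (say) increasing in each variable over the box, its exact range is obtained from just two corner evaluations; the function f and the box below are illustrative:

    # Exact range of a function that is increasing in each variable over a box:
    # evaluate at the corner of all lower endpoints and at the corner of all
    # upper endpoints. The function f and the box below are illustrative.
    def monotone_range(f, box):
        lower = f(*(a for a, _ in box))
        upper = f(*(b for _, b in box))
        return lower, upper

    f = lambda x, y: x ** 3 + y               # increasing in x and in y
    print(monotone_range(f, [(0.0, 1.0), (2.0, 3.0)]))   # (2.0, 4.0)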


Filtering Out High Frequencies In Time Series Using F-Transform, Vilém Novák, Irina Perfilieva, Michal Holčapek, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

In this paper, we focus on the application of the fuzzy transform (F-transform) to the analysis of time series. We assume that a time series can be decomposed into three constituent components: the trend-cycle, the seasonal component, and random noise. We demonstrate that, by using the F-transform, we can approximate the trend-cycle of a given time series with high accuracy.
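A small Python sketch of this kind of smoothing: a discrete F-transform with triangular basis functions on equidistant nodes, whose inverse approximates the trend-cycle; the signal, the node spacing, and the boundary handling are illustrative simplifications:

    import math

    def triangle(t, center, h):
        # triangular basis function of width 2h centered at a node
        return max(0.0, 1.0 - abs(t - center) / h)

    def direct_ft(ys, nodes, h):
        # k-th component: weighted average of the signal around node k
        comps = []
        for c in nodes:
            w = [triangle(i, c, h) for i in range(len(ys))]
            comps.append(sum(wi * yi for wi, yi in zip(w, ys)) / sum(w))
        return comps

    def inverse_ft(comps, nodes, h, n):
        # inverse F-transform: a smooth reconstruction (the trend-cycle)
        return [sum(F * triangle(i, c, h) for F, c in zip(comps, nodes))
                for i in range(n)]

    # a slow trend plus a fast oscillation playing the role of noise
    ys = [math.sin(0.05 * i) + 0.3 * math.sin(2.5 * i) for i in range(100)]
    h = 10
    nodes = list(range(0, 100, h))
    trend = inverse_ft(direct_ft(ys, nodes, h), nodes, h, len(ys))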


Constructing Verifiably Correct Java Programs Using OCL And CleanJava, Yoonsik Cheon, Carmen Avila Feb 2013

Departmental Technical Reports (CS)

A recent trend in software development is to build a precise model that can be used as a basis for software development. Such a model may enable automatic generation of working code, and, more importantly, it provides a foundation for reasoning about the correctness of code. In this paper, we propose a practical approach for constructing a verifiably correct program from such a model. The key idea of our approach is (a) to systematically translate formally-specified design constraints, such as class invariants and operation pre- and postconditions, to code-level annotations and (b) to use the annotations for the correctness proof of …


Comparing Intervals And Moments For The Quantification Of Coarse Information, Michael Beer, Vladik Kreinovich Feb 2013

Departmental Technical Reports (CS)

In this paper, the problem of the most appropriate modeling of scarce information for an engineering analysis is investigated. This investigation focuses on a comparison between a rough probabilistic modeling based on the first two moments and an interval modeling. In many practical cases, the available information is limited to such an extent that a more thorough modeling cannot be pursued. The engineer has to make a decision regarding the modeling of this limited and coarse information so that the results of the analysis provide the most suitable basis for conclusions. We approach this problem from the angle of information …