Clinical trials, Judgment (Psychology), Medical research, Health outcome assessment, Uncertainty, Proportional hazards models, and Statistical models
Abstract
Background: Uncertain ascertainment of events in clinical trials has been noted for decades. To correct possible bias, Clinical Endpoint Committees (CECs) have been employed as a critical element of trials to ensure consistent, high-quality endpoint evaluation, especially for cardiovascular endpoints. However, the efficiency and usefulness of adjudication have been debated. Methods: A multiple imputation (MI) method is proposed to incorporate endpoint event uncertainty. In a simulation conducted to illustrate the methodology, the dichotomous outcome was imputed in each imputation using subject-specific event probabilities, and the desired analysis was then conducted across all imputed datasets. The proposed method was further applied to real trial data from PARADIGM-HF. Results: Compared with the conventional Cox model using adjudicated events only, the Cox MI method had higher power, even with a small number of uncertain events. It yielded more robust inferences about treatment effects and required a smaller sample size to achieve the same power. Conclusions: Instead of relying on dichotomous endpoint data, the MI method incorporates event uncertainty and eliminates the need to categorize endpoint events. In future trials, assigning a probability of event occurrence for each event may be preferable to having a CEC assign a dichotomous outcome. Considerable resources could be saved if endpoint events could be identified more simply and in a manner that maintains study power. [ABSTRACT FROM AUTHOR]
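As a rough illustration of the approach this abstract describes, the sketch below imputes the dichotomous endpoint several times from assumed subject-specific event probabilities, fits a Cox model to each completed dataset with lifelines, and pools the treatment coefficient with Rubin's rules. The column names (`event_prob`, `adjudicated`, `treatment`) and the toy data are assumptions made for illustration only; this is not the PARADIGM-HF data or the authors' implementation.

```python
# Minimal sketch of a Cox-MI analysis under assumed column names.
# Requires: numpy, pandas, lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2024)

def cox_mi(df, m=20):
    """Fit a Cox model m times; in each imputation, uncertain cases have
    their event indicator drawn from the subject-specific probability
    ('event_prob'), while adjudicated cases keep their observed status.
    Estimates are pooled with Rubin's rules."""
    betas, variances = [], []
    for _ in range(m):
        imp = df.copy()
        uncertain = imp["adjudicated"] == 0
        imp.loc[uncertain, "event"] = rng.binomial(
            1, imp.loc[uncertain, "event_prob"])
        cph = CoxPHFitter()
        cph.fit(imp[["time", "event", "treatment"]],
                duration_col="time", event_col="event")
        betas.append(cph.params_["treatment"])
        variances.append(cph.standard_errors_["treatment"] ** 2)
    betas, variances = np.array(betas), np.array(variances)
    # Rubin's rules: within-imputation variance W, between-imputation B.
    W, B = variances.mean(), betas.var(ddof=1)
    total_var = W + (1 + 1 / m) * B
    return betas.mean(), np.sqrt(total_var)

# Toy data: 'event_prob' plays the role of the per-subject probability
# that a suspected event is a true endpoint event.
n = 500
toy = pd.DataFrame({
    "time": rng.exponential(5, n),
    "event": rng.binomial(1, 0.4, n),
    "adjudicated": rng.binomial(1, 0.8, n),
    "event_prob": rng.uniform(0.2, 0.9, n),
    "treatment": rng.binomial(1, 0.5, n),
})
log_hr, se = cox_mi(toy)
print(f"pooled log hazard ratio {log_hr:.3f} (SE {se:.3f})")
```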
Invariant measures, Uncertainty, Confidence intervals, and Statistical bootstrapping
Abstract
Journal rankings often show significant changes compared with previous rankings, which raises the question of how well estimated the rank of a journal is. In this contribution, we consider uncertainty in a ranking of economics journals. We use the invariant method of Pinski and Narin to rank the journals and propose an uncertainty measure based on a bootstrap approach: the average absolute change in rank, which we regard as a reasonable measure of ranking uncertainty. Based on the bootstrap method, we further calculate 95% confidence intervals for the observed values of the invariant method. We show that the ranks of the highest- as well as the lowest-ranked journals are well estimated, while there is a high degree of uncertainty regarding the rank of many mid-ranked journals. The distribution of the underlying measure is useful for identifying groups of journals that are of roughly the same quality (from the point of view of the invariant measure). The journal with the highest observed value of the invariant measure, the Journal of Political Economy, has the best performance and constitutes a singleton, whereas the Quarterly Journal of Economics and Econometrica form the next group (there is a slight overlap between the two with respect to confidence intervals). The journals ranked between about 190 and 230 form another group in which there are no major quality differences between the journals, as the confidence intervals overlap. [ABSTRACT FROM AUTHOR]
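The sketch below illustrates, under explicit assumptions, how an invariant (Pinski and Narin) weight can be computed as the leading eigenvector of a reference-normalised citation matrix, and how a bootstrap can yield both an average absolute change in rank and 95% percentile intervals for the weights. The citation matrix `C`, the multinomial resampling of each journal's outgoing citations, and the toy numbers are illustrative assumptions, not the paper's data or its exact resampling scheme.

```python
# Illustrative sketch (not the authors' code) of invariant weights and a
# bootstrap rank-uncertainty measure. C[i, j] is assumed to count
# citations from journal j to journal i.
import numpy as np

rng = np.random.default_rng(7)

def invariant_weights(C):
    """Leading eigenvector of the citation matrix with each column
    normalised by the citing journal's total references."""
    col_refs = C.sum(axis=0)
    M = C / col_refs                      # column-stochastic normalisation
    vals, vecs = np.linalg.eig(M)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

def ranks(w):
    # Rank 1 = highest weight.
    return np.argsort(np.argsort(-w)) + 1

def bootstrap_rank_uncertainty(C, b=1000):
    """Resample each journal's outgoing citations multinomially (an assumed
    resampling unit) and return the average absolute change in rank plus
    95% percentile intervals for the invariant weights."""
    base_rank = ranks(invariant_weights(C))
    n = C.shape[0]
    weight_draws, rank_shifts = [], []
    for _ in range(b):
        Cb = np.column_stack([
            rng.multinomial(int(C[:, j].sum()), C[:, j] / C[:, j].sum())
            for j in range(n)])
        wb = invariant_weights(Cb)
        weight_draws.append(wb)
        rank_shifts.append(np.abs(ranks(wb) - base_rank).mean())
    ci = np.percentile(np.array(weight_draws), [2.5, 97.5], axis=0)
    return np.mean(rank_shifts), ci

# Toy citation matrix for five journals.
C = rng.integers(1, 200, size=(5, 5)).astype(float)
avg_shift, ci = bootstrap_rank_uncertainty(C, b=200)
print("average absolute rank change:", round(avg_shift, 3))
print("95% intervals of invariant weights:\n", ci.T.round(4))
```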
Classification, Experimental design, Epistemic uncertainty, Uncertainty, and Forecasting
Abstract
Deep neural networks have recently achieved impressive performance on multilabel text classification. However, the uncertainty in multilabel text classification tasks and its use in the model are often overlooked. To better understand and evaluate this uncertainty, we propose a general framework called Uncertainty Quantification for Multilabel Text Classification. Based on the prediction results produced by traditional neural networks, the framework further yields the aleatory uncertainty of each classification label and the epistemic uncertainty of the overall prediction. We design experiments to characterize the properties of aleatory and epistemic uncertainty in terms of data characteristics and model features, and the experimental results show that the framework is reasonable. Furthermore, we demonstrate how the framework allows us to define a model optimization criterion that identifies policies balancing expected training cost, model performance, and uncertainty sensitivity. This article is categorized under: Algorithmic Development > Bayesian Models [ABSTRACT FROM AUTHOR]
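The abstract does not spell out how the two kinds of uncertainty are computed. A common, generic recipe is to run repeated stochastic forward passes (e.g. MC dropout or an ensemble) and split the per-label predictive entropy into an expected (aleatory) part and a mutual-information (epistemic) part. The sketch below follows that recipe with made-up probabilities and should be read as an assumption-laden illustration, not the framework itself.

```python
# Generic entropy-based decomposition of uncertainty for one multilabel
# prediction, from repeated stochastic forward passes. Not necessarily
# the exact construction used by the framework in the abstract.
import numpy as np

def binary_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def multilabel_uncertainty(probs):
    """probs: array of shape (passes, labels) with per-label sigmoid
    outputs from repeated stochastic forward passes for one document.
    Returns per-label aleatory uncertainty and a single epistemic
    score for the whole prediction."""
    mean_p = probs.mean(axis=0)
    total = binary_entropy(mean_p)                 # predictive entropy per label
    aleatory = binary_entropy(probs).mean(axis=0)  # expected data noise per label
    epistemic_per_label = total - aleatory         # mutual information per label
    return aleatory, epistemic_per_label.sum()     # aggregate epistemic score

# Toy example: 10 stochastic passes over a 4-label document.
rng = np.random.default_rng(0)
probs = np.clip(rng.normal([0.9, 0.1, 0.5, 0.6], 0.05, size=(10, 4)), 0, 1)
aleatory, epistemic = multilabel_uncertainty(probs)
print("per-label aleatory uncertainty:", aleatory.round(3))
print("epistemic uncertainty (summed over labels):", round(epistemic, 3))
```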
Contracts, Common law, Uncertainty, Property law reform, and Property rights
Abstract
The perpetual script of a smart contract, which executes an agreement machine-to-machine without prejudice, guarantees performance of 'contractual terms' enabling the exchange or transaction of cryptoassets and other forms of property. Yet whether smart contracts are recognisable or valid legal instruments within the boundaries of contract or property law remains uncertain and contentious. Contrary to perceptions of contractual streamlining and efficiency, the key to understanding the uncertainty smart contracts produce lies in the technology's failure to meet many of the fundamental principles of contract law and theory concerning, for example, breach of promise and remedy for breach. Smart contracts appear to reduce contracting to a form and standard well below that developed by contract law and theory over many centuries in both civil and common law jurisdictions. Drawing on elements of the law of restitution, this article's remedial analysis will examine smart contracts in light of 'traditional' contract law to understand and, where possible, test the legal legitimacy of this post-human technology, and will explore the potential of smart contracts to supplement or, in time, supersede traditional contract law. [ABSTRACT FROM AUTHOR]