RUDN Journal of Language Studies, Semiotics and Semantics, Vol 11, Iss 1, Pp 36-47 (2020)
arabic language, marking, uncertainty, nomination, semantic field, system, Language. Linguistic theory. Comparative grammar, P101-410, Semantics, and P325-325.5
The article studies the peculiarities of verbalizing the category of ambiguity on the material of English literary texts translated into Arabic. Seven texts by postmodernist writers (J. Barnes, T. McCarthy, I. McEwan, D. Lodge, D. Mitchell) were chosen for analysis. The category of ambiguity is examined both as a logical and philosophical subject and as a linguistic one. Lexemes denoting ambiguity are described in terms of their belonging to semantic (thematic) fields, including their contrastive and stylistic properties. The analysis covers both lexical units proper that denote ambiguity and contextual, occasional means whose dictionary definitions do not reveal the semes of ambiguity. The study deals with the role and functions of nominative units reflecting the ambiguity and uncertainty of the world in contemporary literary discourse as rendered in Arabic translation. The methodology is based on the functional interaction of lexis and grammar as one of the principles of systemic linguistics. The conclusions proceed from the premise that the Arabic language possesses a system of various lexical means for expressing the category of ambiguity, and that their determinant is implied in the paradigmatic relations of the language system and the syntagmatic relations between textual semantic units, which together explain both the grammatical structure of the language and the nature of semantic correlations in its lexical subsystem. The authors conclude that English and Arabic share the universal extralinguistic meaning of nominating ambiguity, while the nucleus of the semantic field fulfils the crucial function of selecting the appropriate means and units for realizing the category of ambiguity in texts. The differentiation of ambiguity nominations according to their application is not homogeneous, since lexical nominations constitute the main means of realizing the ambiguity principle as both a semantic and a grammatical category.
In the course of studying this issue, it seems appropriate to examine the conceptualization of ambiguity in languages of different structure and to systematize the means of verbalizing the concept of ambiguity using the method of systemic comparison.
Abstract This research analyzes the association between real earnings manipulation and stock price crash risk, as well as the spillover effect of crashes that results from applying real earnings management. It is hypothesized that there is a positive and statistically significant association between real activities manipulation and crash risk, and it is further assumed that this spillover effect is more noticeable during periods of uncertainty. Using data on family firms for the period 2005–2018, the empirical results provide evidence that real manipulation has a significant impact on stock crashes among family-based companies in a developing economy such as Pakistan. The results also show that the spillover effect is more notable for firms facing uncertainty. The statistical estimations support the hypotheses of the study, which has significant practical implications for academic researchers, standard setters, and investors.
Peng Wang, Dan Wang, Chengliang Zhu, Yan Yang, Heba M. Abdullah, and Mohamed A. Mohamed
Energy Reports, Vol 6, Iss , Pp 1338-1352 (2020)
Hybrid AC/DC microgrids, Uncertainty, Electric vehicles, Optimization, Charging patterns, Flower pollination algorithm, Electrical engineering. Electronics. Nuclear engineering, and TK1-9971
The high growth of the automotive industry reveals the bright future of electric vehicle technology and the effects of its high penetration on society. The random and volatile charging demand of these vehicles will affect the optimal operation and scheduling of the power grid, which may be regarded as a new challenge. Therefore, this paper investigates the stochastic scheduling of hybrid AC/DC microgrids considering the charging demands of plug-in hybrid electric vehicles distributed all over the grid. Three different charging patterns, namely coordinated, uncoordinated and smart charging models, with different characteristics for charger type, capacity and market share, are proposed. Moreover, different types of renewable energy sources, including wind turbines, solar panels and fuel cells, are modeled and considered in the scheduling process of the hybrid microgrid. In order to mitigate the charging effects of electric vehicles on hybrid AC/DC microgrid operation, remotely controlled switches are included in the system, making it possible to change the topology and the direction of power flow. To model the uncertainty effects, a data-driven framework based on the point estimate method and support vector machines is developed. This makes it possible to extract the standard deviation of the uncertain parameters and reflect their impact on the microgrid operation problem through a limited number of concentration points. A novel evolutionary solution based on the flower pollination algorithm is also proposed to solve the problem optimally. An IEEE standard test system is used as the hybrid AC/DC microgrid case study to assess the performance of the proposed model.
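The point estimate method mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it shows the classical 2m two-point scheme for symmetric (zero-skewness) uncertain inputs, in which each uncertain parameter is evaluated at two concentration points around its mean while the others are held at their means:

```python
import math

def two_point_estimate(f, means, stds):
    """2m point estimate method for zero-skewness inputs: each uncertain
    parameter k is evaluated at mu_k +/- sqrt(m)*sigma_k (the others held
    at their means); each of the 2m runs receives weight 1/(2m)."""
    m = len(means)
    xi = math.sqrt(m)  # standard location for zero skewness
    ey, ey2 = 0.0, 0.0
    for k in range(m):
        for sign in (+1, -1):
            x = list(means)
            x[k] = means[k] + sign * xi * stds[k]
            y = f(x)
            ey += y / (2 * m)        # accumulate E[Y]
            ey2 += y * y / (2 * m)   # accumulate E[Y^2]
    return ey, math.sqrt(max(ey2 - ey * ey, 0.0))
```

For a linear output function the scheme reproduces the exact mean and standard deviation; for the nonlinear microgrid cost function it yields an approximation at only 2m evaluations.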
Guo Wang, Qinqin Wang, Zhi Qiao, Jiuhui Wang, and Simon Anderson
Energy Reports, Vol 6, Iss , Pp 1233-1249 (2020)
Micro-grid, Smart distribution grids, Distributed generators, Uncertainty, Probabilistic modeling, Electrical engineering. Electronics. Nuclear engineering, and TK1-9971
The traditional electrical grid, the largest and most complex industry in the world, is currently undergoing fundamental development. According to IEEE Std 1547.4, large distribution systems can be clustered into a number of micro-grids (MGs) to facilitate the control and operation infrastructure of future distribution systems. A distribution network with distributed energy resources serving a group of loads can be operated as an MG, in grid-connected or islanded mode. Most recently, the concept of MGs has become an important subject in the smart grid area, demanding a systematic method for their optimal planning. The prosperous development of the micro-grid concept implies the definition of appropriate regulation for its integration into distribution systems. MG operators are responsible for a reliable energy supply to their consumers, and MG formation represents a suitable solution for the uninterrupted supply of critical loads in the absence of the network. In this study, a systematic approach is presented for the optimal construction of micro-grids. The problem is solved in two stages. In the first stage, the network is designed as an integrated and active distribution network, considering the importance of reliability, using backward–forward load flow. Next, the distribution network is divided into several MGs in order to minimize the cost of electricity generation and to improve reliability and the voltage profile. Stochastic models are used to represent generation availability related to wind and solar energy. The generated power of distributed generation resources such as wind turbines and photovoltaics is modeled per hour of the day, and the values of important network parameters are obtained from the results of probabilistic load flow.
Practical and important factors such as reduction of the problem solving space, load controllability, load priority, bus voltage limits, line capacity limits and the possibility of forming larger micro-grids by connecting switches between the initial MGs obtained from the first stage are considered in the proposed method. The proposed methodology is implemented for a standard 69-bus distribution system by applying the Imperialist Competitive Algorithm (ICA) in MATLAB. The results verify the usefulness of the proposed approach in transforming an existing radial distribution network into several autonomous MGs.
Catarina Pires, Marília Barandas, Letícia Fernandes, Duarte Folgado, and Hugo Gamboa
Machine Learning and Knowledge Extraction, Vol 2, Iss 28, Pp 505-532 (2020)
uncertainty, machine learning, open set recognition, entropy, out-of-distribution, Computer engineering. Computer hardware, and TK7885-7895
Uncertainty is ubiquitous and present in every single prediction of Machine Learning models. The ability to estimate and quantify the uncertainty of individual predictions is highly relevant, all the more so in safety-critical applications. Real-world recognition poses multiple challenges, since a model's knowledge of a physical phenomenon is never complete and observations are incomplete by definition. Nevertheless, Machine Learning algorithms often assume that the train and test data distributions are the same and that all test classes are present during training. A more realistic scenario is Open Set Recognition, where unknown classes can be submitted to an algorithm during testing. In this paper, we propose a Knowledge Uncertainty Estimation (KUE) method to quantify knowledge uncertainty and reject out-of-distribution inputs. Additionally, we quantify and distinguish aleatoric and epistemic uncertainty with the classical information-theoretic measures of entropy by means of ensemble techniques. We performed experiments on four datasets with different data modalities and compared our results with distance-based classifiers, SVM-based approaches and ensemble techniques using entropy measures. Overall, KUE was more effective at distinguishing in-distribution and out-of-distribution inputs in most cases and at least comparable in the others. Furthermore, classification with a rejection option, based on a proposed strategy for combining different measures of uncertainty, is presented as a practical application of uncertainty estimation.
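The entropy-based split of aleatoric and epistemic uncertainty over an ensemble, as described above, can be sketched in a few lines. This is a generic illustration, not the authors' KUE code: total predictive entropy minus the ensemble-averaged member entropy gives the mutual information, i.e. the knowledge (epistemic) part.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability vector (in nats)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: array (n_members, n_classes) holding one input's class
    probabilities from each ensemble member."""
    member_probs = np.asarray(member_probs, dtype=float)
    total = entropy(member_probs.mean(axis=0))         # H[E[p]]: total uncertainty
    aleatoric = entropy(member_probs, axis=-1).mean()  # E[H[p]]: data uncertainty
    epistemic = total - aleatoric                      # mutual information
    return total, aleatoric, epistemic

# Members that agree on a confident answer give low epistemic uncertainty;
# members that disagree give high epistemic uncertainty.
agree = [[0.9, 0.1], [0.88, 0.12], [0.92, 0.08]]
disagree = [[0.95, 0.05], [0.5, 0.5], [0.05, 0.95]]
```

An out-of-distribution input typically produces the second pattern, which is what makes the epistemic term a useful rejection signal.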
A network of pointwise available height anomalies, derived from levelling and GPS observations, can be densified by adjusting a gravimetric quasigeoid using least-squares collocation. The resulting type of Corrector Surface Model (CSM) is applied by Norwegian surveyors to convert ellipsoidal heights to normal heights expressed in the official height system NN2000. In this work, the uncertainty related to the use of a CSM to predict differences in height anomaly was investigated. As in previous work, the application of variograms to determine the local statistical properties of the adopted collocation model led to predictions that were consistent with their computed uncertainties. For the purpose of predicting height anomaly differences, the effect of collocation was seen to be moderate in general for the small spatial separations considered (< 10 km). However, the relative impact of collocation could be appreciable, and increasing with distance, near the network. Finally, it was argued that conservative uncertainties of height anomaly differences may be obtained by rescaling the output of a grid interpolation by √Δ, where Δ is the spatial separation of the two locations for which the difference is sought.
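The conservative rescaling rule above amounts to a one-line computation; the sketch below is purely illustrative (the per-point uncertainty value and units are hypothetical, not taken from the paper):

```python
import math

def conservative_uncertainty(sigma_grid, separation_km):
    """Conservative uncertainty of a height-anomaly difference: the
    grid-interpolation uncertainty rescaled by sqrt(Delta), where Delta
    is the spatial separation (here in km) of the two locations."""
    return sigma_grid * math.sqrt(separation_km)
```

The square-root growth reflects that nearby points share most of the corrector-surface error, so the difference uncertainty starts small and increases with separation.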
The paper presents the results of research into the performance measurement of a selected aircraft type in the take-off phase under extreme temperature conditions. For this purpose, a flight simulator of the Cessna 172 RG aircraft from the ELITE Company was used. For the purpose of verifying the take-off run length, the article provides a measurement methodology developed using information obtained during experimental take-offs. The aim was to obtain a procedure that allows repeated take-off runs in the same conditions with the possibility of changing individual influencing factors. Considering the whole measurement chain, the article analyses the influencing factors and quantifies their impact on the uncertainty of the measurement result. The experimentally obtained data were compared with the data in the Flight Manual, and finally an assessment was carried out of the impact of global warming on the take-off run of the Cessna 172 RG and, more generally, on the safety of take-off and on air transport.
The nitrogen content of a nitrogen-doped titanium dioxide antibacterial agent was determined with an elemental analyzer, and the measurement uncertainty was evaluated. The sources of uncertainty in the procedure were analyzed, each uncertainty component was evaluated, and the combined and expanded uncertainties were given. The results showed that the measurement result can be expressed as (1.21 ± 0.16)% (k = 2), and that the calibration curve has the greatest influence on the uncertainty.
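The combined and expanded uncertainties mentioned above follow the standard GUM recipe: independent standard-uncertainty components are combined in quadrature, then multiplied by a coverage factor k. The sketch below is generic, and the component values are hypothetical (chosen only so the expanded uncertainty lands near the reported ±0.16%), not the paper's actual budget:

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares of independent standard uncertainties u_i."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2):
    """Expanded uncertainty U = k * u_c; k = 2 gives ~95% coverage."""
    return k * combined_uncertainty(components)

# Hypothetical components (in %): calibration curve, weighing, repeatability.
u_components = [0.075, 0.02, 0.015]
```

Note how the largest component (here the calibration curve, as in the paper) dominates the quadrature sum, which is why it has the greatest influence on the result.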
Stefania Manetti, Giuseppe Turchetti, and Francesco Fusco
BMC Health Services Research, Vol 20, Iss 1, Pp 1-11 (2020)
Cost-effectiveness analysis, Early health technology assessment, Value of information, Uncertainty, Elicitation, Public aspects of medicine, and RA1-1270
Abstract Background Falls may lead to hip fractures, which have a detrimental effect on patients' prognosis as well as a considerable impact on healthcare expenditure. Since a secondary hip fracture (SHF) may lead to even higher costs than a primary fracture, the development of innovative services is crucial to limit falls and curb costs in high-risk patients. An early economic evaluation assessed which patients with a second hip fracture could benefit most from a fall-preventing exoskeleton and whether its development is feasible. Methods The life-course of hip-fractured patients presenting with dementia or cardiovascular disease was simulated using a Markov model relying on United Kingdom administrative data complemented by the published literature. A group of experts provided the exoskeleton parameters. Secondary analyses included a threshold analysis to identify the exoskeleton requirements (e.g. the minimum impact of the exoskeleton on patients' quality of life) leading to a reimbursable incremental cost-effectiveness ratio. Similarly, the uncertainty around these requirements was modelled by varying their standard errors and represented alongside the population Expected Value of Perfect Information (EVPI). Results Our base case found the exoskeleton cost-effective when it provides a statistically significant reduction in SHF risk. The secondary analyses identified 286 cost-effective combinations of the exoskeleton requirements. Exploring the uncertainty around these requirements produced a further 22,880 scenarios, which showed that a significant reduction in SHF risk was not necessary to support the adoption of the exoskeleton in clinical practice. Conversely, a significant improvement in women's quality of life was crucial to obtaining an acceptable population EVPI regardless of the cost of the exoskeleton. Conclusions Our study identified the requisites for the exoskeleton to be cost-effective and the value of future research.
Decision-makers could use our analyses to assess not only whether the exoskeleton could be cost-effective but also how much further research and development of the exoskeleton is worth pursuing.
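The EVPI used above has a compact definition that a short sketch can make concrete. This is a generic illustration with hypothetical numbers, not the paper's model: EVPI is the expected gain from choosing the best strategy after uncertainty is resolved, versus committing to one strategy up front.

```python
import numpy as np

def evpi(net_benefit_samples):
    """Per-decision EVPI from probabilistic sensitivity analysis output.
    net_benefit_samples: array (n_simulations, n_strategies) of net
    monetary benefit drawn from the decision model."""
    nb = np.asarray(net_benefit_samples, dtype=float)
    with_perfect_info = nb.max(axis=1).mean()   # pick best strategy per draw
    without_info = nb.mean(axis=0).max()        # pick one strategy overall
    return with_perfect_info - without_info

# Two strategies that each win in half of the draws: knowing the true
# parameters in advance would recover the foregone benefit.
samples = [[1.0, 0.0], [0.0, 1.0]]
```

Multiplying this per-decision quantity by the size of the affected population gives the population EVPI that the study compares against research costs.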
Вестник университета, Vol 0, Iss 9, Pp 49-53 (2020)
efficiency, evaluation methods, finished products, forecast, indicator system, logistic strategy, sales management, uncertainty, Sociology (General), HM401-1281, Economics as a science, and HB71-74
The main approaches to evaluating the efficiency of an organization's logistic strategy for finished product sales management have been reviewed. The features of traditional and innovative evaluation methods have been revealed: assessment based on investment efficiency; evaluation of the strategy based on a system of balanced indicators; the method of expert assessments; assessment based on simulation modeling; multi-criteria evaluation methods; and the combined forecast method. On this basis, the main shortcomings in the formation of the assessment methods and their actual implementation have been highlighted: a low degree of formalization of the model; subjectivity of the initial data; the lack of indicators taking into consideration the influence of the external environment; the lack of an associated risk assessment; and the difficulty of constructing evaluation models in which only some key parameters are taken into account.
Bishwajit Dey, Biplab Bhattacharyya, Saurav Raj, and Rohit Babu
Journal of Electrical Systems and Information Technology, Vol 7, Iss 1, Pp 1-26 (2020)
Microgrid, Uncertainty, Grey wolf optimizer, Sine–cosine algorithm, Crow search algorithm, Electrical engineering. Electronics. Nuclear engineering, TK1-9971, Information technology, and T58.5-58.64
Abstract The economic emission dispatch (EED) of a three-unit stand-alone microgrid system supported by a wind farm is investigated in this paper. The adverse effects of the stochastic and uncertain nature of wind energy in raising the generation cost of the microgrid system are studied. Unit commitment (UC) of the generating units is taken into account, which helps reduce the generation cost and provides relaxation time to the generating units. Three cases are considered for the study. In the first two cases, the generation cost of the test system is minimized without and with the involvement of wind power, respectively. The third case considers the involvement of wind power along with the UC of the conventional generating units. A novel hybrid of recently developed optimization algorithms, viz. the grey wolf optimizer (GWO), the sine–cosine algorithm (SCA) and the crow search algorithm (CSA), is implemented to perform EED, and the results are compared with the basic GWO and other hybrid algorithms. The results are then analysed to compare and contrast the cases and identify the most reliable and profitable one. Statistical analysis confirms the superiority of the proposed hybrid MGWOSCACSA over the other hybrids and GWO.
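The baseline GWO that the hybrid builds on can be sketched compactly. This is an illustrative toy, not the paper's MGWOSCACSA: wolves move toward the three best solutions (alpha, beta, delta) under an exploration coefficient a that decays linearly from 2 to 0, and the dispatch objective is replaced here by a simple sphere function.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal grey wolf optimizer minimizing `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 * (1 - t / n_iter)  # decays 2 -> 0: exploration -> exploitation
        candidates = []
        for leader in (alpha, beta, delta):
            A = a * (2 * rng.random((n_wolves, dim)) - 1)
            C = 2 * rng.random((n_wolves, dim))
            D = np.abs(C * leader - X)          # distance to this leader
            candidates.append(leader - A * D)   # candidate guided by leader
        # each wolf moves to the average of the three leader-guided positions
        X = np.clip(np.mean(candidates, axis=0), lo, hi)
    fitness = np.apply_along_axis(objective, 1, X)
    best = X[np.argmin(fitness)]
    return best, float(objective(best))
```

In the EED setting the objective would instead be the fuel-plus-emission cost of the committed units, with the same leader-following update.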
Abstract In the past decades, the rapid growth of computer and database technologies has led to the rapid growth of large-scale datasets. At the same time, data mining applications with high-dimensional datasets that require high speed and accuracy are rapidly increasing. Semi-supervised learning is a class of machine learning in which unlabeled and labeled data are used simultaneously to improve feature selection. The goal of feature selection over partially labeled data (semi-supervised feature selection) is to choose a subset of the available features with the lowest redundancy with each other and the highest relevancy to the target class, the same objective as feature selection over entirely labeled data. The proposed method uses classification to reduce ambiguity in the range of values. First, the similarity values of each pair are collected; these values are then divided into intervals, and the average of each interval is determined. In the next step, the number of pairs falling in each interval is counted. Finally, using the strength and similarity matrices, a new constrained feature selection ranking is proposed. The performance of the presented method was compared to that of state-of-the-art, well-known semi-supervised feature selection approaches on eight datasets. The results indicate that the proposed approach improves on previous related approaches with respect to the accuracy of the constrained score. In particular, the numerical results showed that the presented approach improved classification accuracy by about 3% and reduced the number of selected features by 1%. Consequently, the proposed method reduces the computational complexity of the machine learning algorithm despite increasing classification accuracy.
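The interval step described above (collect pairwise similarities, split them into intervals, and record each interval's average and pair count) can be sketched as follows. Names and details are assumed for illustration; this is not the paper's code.

```python
import numpy as np

def bin_similarities(similarities, n_bins=5):
    """Split pairwise similarity values into equal-width intervals and
    return the interval edges, per-interval averages and pair counts."""
    sims = np.asarray(similarities, dtype=float)
    edges = np.linspace(sims.min(), sims.max(), n_bins + 1)
    # assign each value to an interval; last right edge is inclusive
    idx = np.clip(np.digitize(sims, edges[1:-1]), 0, n_bins - 1)
    averages = [sims[idx == b].mean() if np.any(idx == b) else float("nan")
                for b in range(n_bins)]
    counts = [int(np.sum(idx == b)) for b in range(n_bins)]
    return edges, averages, counts
```

The per-interval averages and counts are the ingredients from which a strength matrix, and hence the constrained ranking, could then be built.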