Online 1. Abstraction-based deductive-algorithmic verification of reactive systems [electronic resource] [2001]
 Uribe Restrepo, Tomás E.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"This thesis presents a framework that combines deductive and algorithmic methods for verifying temporal properties of reactive systems, to allow more automatic verification of general infinite-state systems and the verification of larger finite-state ones. Underlying these methods is the theory of property-preserving assertion-based abstractions, where a finite-state abstraction of the system is deductively justified and algorithmically model checked. After presenting an abstraction framework that accounts for fairness, we describe a method to automatically generate finite-state abstractions. We then show how a number of other verification methods, including deductive rules, (Generalized) Verification Diagrams, and Deductive Model Checking, can also be understood as constructing finite-state abstractions that are model checked. Our analysis leads to a better classification and understanding of these verification methods. Furthermore, it shows how the different abstractions that they construct can be combined. For this, we present an algorithmic Extended Model Checking procedure, which uses all the information that these methods produce, in a finite-state format that can be easily and incrementally combined. Besides a standard safety component, the combined abstractions include extra bounds on fair transitions, well-founded orders, and constrained transition relations for the generation of counterexamples. Thus, our approach minimizes the need for user interaction and maximizes the impact of the available automated deduction and model checking tools. Once proved, verification conditions are reused as much as possible, leaving the temporal and combinatorial reasoning to automatic tools."--Abstract.
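The core loop the abstract describes, build a finite-state abstraction and then model check it, can be illustrated with a minimal explicit-state safety check. This is a generic sketch, not Uribe's actual procedure; the toy abstraction (a counter collapsed modulo 4) and the predicate names are hypothetical:

```python
from collections import deque

def reachable(initial, successors):
    """Breadth-first exploration of a finite abstract state space."""
    seen = set(initial)
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def check_safety(initial, successors, is_bad):
    """A safety property holds iff no bad abstract state is reachable."""
    return not any(is_bad(s) for s in reachable(initial, successors))

# Toy abstraction: an unbounded counter abstracted to its value mod 4.
safe = check_safety([0], lambda s: [(s + 1) % 4], lambda s: s == 5)  # True
```

Deductive reasoning justifies that the abstraction is sound (every concrete run maps to an abstract run); the algorithmic part is then just this finite search.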
 Collection
 Stanford University, Department of Computer Science, Technical Reports
 Also online at
 Labio, Wilbert Juan.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"Data warehouses collect data from multiple remote sources and integrate the information as materialized views in a local database. The materialized views are used to answer queries that analyze the collected data for patterns and trends. This type of query processing is often called online analytical processing (OLAP). The warehouse views must be updated when changes are made to the remote information sources. Otherwise, the answers to OLAP queries are based on stale data. Answering OLAP queries based on stale data is clearly a problem, especially if OLAP queries are used to support critical decisions made by the organization that owns the data warehouse. Because the primary purpose of the data warehouse is to answer OLAP queries, only a limited amount of time and/or resources can be devoted to the warehouse update. Hence, we have developed new techniques to ensure that the warehouse update can be done efficiently. Also, the warehouse update is not devoid of failures. Since only a limited amount of time and/or resources are devoted to the warehouse update, it is most likely infeasible to restart the warehouse update from scratch. Thus, we have developed new techniques for resuming failed warehouse updates. Finally, warehouse updates typically transfer gigabytes of data into the warehouse. Although the price of disk storage is decreasing, there will be a point in the "lifetime" of a data warehouse when keeping and administering all of the collected data is unreasonable. Thus, we have investigated techniques for reducing the storage cost of a data warehouse by selectively "expiring" information that is not needed."--Abstract.
Online 3. Finding color and shape patterns in images [electronic resource] [2001]
 Cohen, Scott.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"This thesis is devoted to the Earth Mover's Distance (EMD), an edit distance between distributions, and its use within content-based image retrieval (CBIR). The major CBIR problem discussed is the pattern problem: Given an image and a query pattern, determine if the image contains a region which is visually similar to the pattern; if so, find at least one such image region. An important problem that arises in applying the EMD to CBIR is the EMD under transformation (EMD_G) problem: find a transformation of one distribution which minimizes its EMD to another, where the set of allowable transformations G is given. The problem of estimating the size/scale at which a pattern occurs in an image is phrased and efficiently solved as an EMD_G problem. For a large class of transformation sets, we also present a monotonically convergent iteration to find at least a locally optimal transformation. Our pattern problem solution is the SEDL (Scale Estimation for Directed Location) image retrieval system. Three important contributions of SEDL are (1) a general framework for finding both color and shape patterns, (2) the previously mentioned scale estimation algorithm using the EMD, and (3) a directed (as opposed to exhaustive) search strategy."--Abstract.
 Collection
 Stanford University, Department of Computer Science, Technical Reports
 Also online at
Online 4. Intelligent alarms [electronic resource] : allocating attention among concurrent processes [2001]
 Huang, Cecil.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"I have developed and evaluated a computable, normative framework for intelligent alarms: automated agents that allocate scarce attention resources to concurrent processes in a globally optimal manner. My approach is decision-theoretic, and relies on Markov decision processes to model time-varying, stochastic systems that respond to externally applied actions. Given a collection of continuing processes and a specified time horizon, my framework computes, for each process: (1) an attention allocation, which reflects how much attention the process is awarded, and (2) an activation price, which reflects the process's priority in receiving the allocated attention amount. I have developed a prototype, Simon, that computes these alarm signals for a simulated ICU. My validity experiments investigate whether sensible input results in sensible output. The results show that Simon produces alarm signals that are consistent with sound clinical judgment. To assess computability, I used Simon to generate alarm signals for an ICU that contained 144 simulated patients; the entire computation took about 2 seconds on a machine with only moderate processing capabilities. I thus conclude that my alarm framework is valid and computable, and therefore is potentially useful in a real-world ICU setting."--Abstract.
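The framework above rests on Markov decision processes. As a generic illustration of the underlying machinery (not Huang's actual model), value iteration computes the optimal long-run value of each process state, from which quantities like allocations and prices could in principle be derived; all state and action names here are made up:

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a discounted MDP.
    P[s][a] is a list of (probability, next_state) pairs;
    R[s][a] is the immediate reward for taking action a in state s."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# One-state toy: always earn reward 1 and stay put; V = 1/(1-gamma).
V = value_iteration(['wait'], ['attend'],
                    {'wait': {'attend': [(1.0, 'wait')]}},
                    {'wait': {'attend': 1.0}}, gamma=0.5)  # V['wait'] ~ 2.0
```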
 Collection
 Stanford University, Department of Computer Science, Technical Reports
 Also online at
5. Multicommodity and generalized flow algorithms [electronic resource] : theory and practice [2001]
 Oldham, Jeffrey David.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"We present several simple, practical, and fast algorithms for linear programs, concentrating on network flow problems. Since the late 1980s, researchers have developed different combinatorial approximation algorithms for fractional packing problems, obtaining the fastest theoretical running times to solve multicommodity minimum-cost and concurrent flow problems. A direct implementation of these multicommodity flow algorithms was several orders of magnitude slower than solving these problems using a commercial linear programming solver. Through experimentation, we determined which theoretically equivalent constructs are experimentally efficient. Guided by theory, we designed and implemented practical improvements while maintaining the same worst-case complexity bounds. The resulting algorithms solve problems orders of magnitude faster than commercial linear programming solvers, and problems an order of magnitude larger. We also present simple, combinatorial algorithms for generalized flow problems. These problems generalize ordinary network flow problems by specifying a flow multiplier \mu(a) for each arc a. Using multipliers permits a flow problem to model the transformation of one type into another, e.g., currency exchange, and the modification of the amount of flow, e.g., water evaporation from canals or accrual of interest in bank accounts. First, we show that the generalized shortest paths problem can be solved using existing network flow ideas, i.e., by combining the Bellman-Ford-Moore shortest path framework and Megiddo's parametric search. Second, we combine this algorithm with fractional packing frameworks to yield the first polynomial-time combinatorial approximation algorithms for the generalized versions of the nonnegative-cost minimum-cost flow, concurrent flow, multicommodity maximum flow, and multicommodity nonnegative-cost minimum-cost flow problems. These algorithms show that generalized concurrent flow and multicommodity maximum flow have strongly polynomial approximation algorithms."--Abstract.
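The ordinary Bellman-Ford-Moore framework the abstract builds on can be sketched as follows; this is the plain shortest-path routine only, and the generalized version with arc multipliers and parametric search is well beyond this sketch:

```python
def bellman_ford(n, arcs, source):
    """Bellman-Ford-Moore shortest paths from `source` over `arcs`,
    given as (tail, head, cost) triples on nodes 0..n-1.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):          # at most n-1 rounds of relaxation
        updated = False
        for u, v, c in arcs:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                updated = True
        if not updated:
            break
    for u, v, c in arcs:            # one extra pass detects negative cycles
        if dist[u] + c < dist[v]:
            return None
    return dist

# 0 -> 1 costs 1, 1 -> 2 costs 2, direct 0 -> 2 costs 5.
dist = bellman_ford(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)], 0)  # [0.0, 1.0, 3.0]
```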
Online 6. Nonblocking synchronization and system design [electronic resource] [2001]
 Greenwald, Michael Barry.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"Nonblocking synchronization (NBS) has significant advantages over blocking synchronization in areas of fault-tolerance, system structure, portability, and performance. These advantages gain importance with the increased use of parallelism and multiprocessors, and as delays increase relative to processor speed. This thesis demonstrates that nonblocking synchronization is practical as the sole coordination mechanism in systems by showing that careful OS design eases implementation of efficient NBS, by demonstrating that DCAS (Double-Compare-and-Swap) is the necessary and sufficient primitive for implementing NBS, and by demonstrating that efficient hardware DCAS is practical for RISC processors. This thesis presents high-performance nonblocking implementations of common data structures sufficient to implement an operating system kernel. I also present more general algorithms: nonblocking implementations of CASn and software transactional memory. Both have overhead proportional to the number of writes, support multi-objects, and use a DCAS-based contention-reduction technique that is fault-tolerant and OS-independent yet performs as well as the best previously published techniques. I demonstrate that proposed OS implementations of DCAS are inefficient, and propose a design for efficient hardware DCAS specific to the R4000 but generalizable to other RISC processors."--Abstract.
 Collection
 Stanford University, Department of Computer Science, Technical Reports
 Also online at
 Rubner, Yossi, 1964-
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"The increasing amount of information available in today's world raises the need to retrieve relevant data efficiently. Unlike text-based retrieval, where keywords are successfully used to index into documents, content-based image retrieval poses up front the fundamental questions of how to extract useful image features and how to use them for intuitive retrieval. We present a novel approach to the problem of navigating through a collection of images for the purpose of image retrieval, which leads to a new paradigm for image database search. We summarize the appearance of images by distributions of color or texture features, and we define a metric between any two such distributions. This metric, which we call the "Earth Mover's Distance" (EMD), represents the least amount of work that is needed to rearrange the mass in one distribution in order to obtain the other. We show that the EMD matches perceptual dissimilarity better than other dissimilarity measures, and argue that it has many desirable properties for image retrieval. Using this metric, we employ Multi-Dimensional Scaling techniques to embed a group of images as points in a two- or three-dimensional Euclidean space so that their distances reflect image dissimilarities as well as possible. Such geometric embeddings exhibit the structure in the image set at hand, allowing the user to better understand the result of a database query and to refine the query in a perceptually intuitive way. By iterating this process, the user can quickly zoom in to the portion of the image space of interest. We also apply these techniques to other modalities such as mugshot retrieval."--Abstract.
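In one dimension the EMD has a simple closed form that makes the "least work to rearrange mass" reading concrete: for two equal-mass histograms on unit-spaced bins, the minimum work equals the summed absolute difference of their running totals. This tiny sketch is illustrative only; the thesis's EMD handles general feature signatures via transportation-problem solvers:

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass, with adjacent bins one unit of ground distance apart. The mass
    carried past each bin boundary is exactly the running-sum difference,
    and each carried unit costs one unit of work."""
    assert len(p) == len(q) and abs(sum(p) - sum(q)) < 1e-9
    work = 0.0
    carried = 0.0   # signed mass carried forward past the current bin
    for pi, qi in zip(p, q):
        carried += pi - qi
        work += abs(carried)
    return work

# Moving one unit of mass two bins to the right costs 2 units of work.
d = emd_1d([1, 0, 0], [0, 0, 1])  # 2.0
```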
8. Segmentation of medical image volumes using intrinsic shape information [electronic resource] [2001]
 Shiffman, Smadar.
 [Stanford, Calif. : Stanford University, 2001]
 Description
 Book
 Summary

"I propose a novel approach to segmentation of image volumes that requires only a small amount of user intervention and that does not rely on prior global shape models. The approach, intrinsic shape for volume segmentation (IVSeg), comprises two methods. The first method analyzes isolabel-contour maps to identify salient regions that correspond to major objects. The method detects transitions from within objects into the background by matching isolabel contours that form along the boundaries of objects as a result of multilevel thresholding with a fine partition of the intensity range. The second method searches the entire sequence for regions that belong to an object that the user selects from one or a few sections. The method uses local overlap criteria to determine whether regions that overlap in a given direction (coronal, sagittal, or axial) belong to the same object. For extraction of blood vessels, the method derives the criteria dynamically by fitting cylinders to regions in consecutive sections and computing the expected overlap of slices of these cylinders. In a formal evaluation study with CTA data, I showed that IVSeg reduced user editing time by a factor of 5 without affecting the results in any significant way."--Abstract.
 Shiffman, Smadar, author.
 Stanford, Calif. : Stanford University, Dept. of Computer Science, January 1999.
 Description
 Book — xxii, 218 pages : illustrations ; 28 cm.
 Collection
 Stanford University, Department of Computer Science, Technical Reports
 Also online at
Special Collections
University Archives: 025617, in-library use (on-site access by request)
10. Measurement of the form factor ratios in semileptonic decays of charm mesons [electronic resource] [1998]
 Washington, D.C. : United States. Dept. of Energy. Office of Energy Research ; Oak Ridge, Tenn. : distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy, 1998
 Description
 Book — 1 online resource (118 p.) : digital, PDF file.
 Summary

I have measured the form factor ratios r<sub>2</sub> = A<sub>2</sub>(0)/A<sub>1</sub>(0) and r<sub>V</sub> = V(0)/A<sub>1</sub>(0) in the semileptonic charm meson decay D<sup>+</sup> → $\bar{K}$<sup>*0</sup> e<sup>+</sup> ν<sub>e</sub> from data collected by the Fermilab E791 collaboration. Form factors are introduced in the calculation of the hadronic current in semileptonic decays of strange, charm, or bottom mesons, such as D<sup>+</sup> → $\bar{K}$<sup>*0</sup> e<sup>+</sup> ν<sub>e</sub>. Semileptonic decays provide insight into quark coupling to the W boson since the leptonic and hadronic amplitudes in the Feynman diagram for the decay are completely separate. There are no strong interactions between the final-state leptons and quarks. A number of theoretical models predict the values of the form factors for D<sup>+</sup> → $\bar{K}$<sup>*0</sup> e<sup>+</sup> ν<sub>e</sub>, though there is a large range of predictions. E791 is a hadroproduction experiment that recorded over 20 billion interactions with a 500 GeV π<sup>−</sup> beam incident on five thin targets during the 1991-92 Fermilab fixed-target run. Approximately 3000 D<sup>+</sup> → $\bar{K}$<sup>*0</sup> e<sup>+</sup> ν<sub>e</sub> decays are fully reconstructed. In order to extract the form factor ratios from the data, I implement a multidimensional unbinned maximum likelihood fit with a large sample of simulated (Monte Carlo) D<sup>+</sup> → $\bar{K}$<sup>*0</sup> e<sup>+</sup> ν<sub>e</sub> events. The large E791 data sample provides the most precise measurement of the form factor ratios to date. The measured values for the form factor ratios are r<sub>2</sub> = 0.71 ± 0.08 ± 0.09 and r<sub>V</sub> = 1.84 ± 0.11 ± 0.08. These results are in good agreement with some Lattice Gauge calculations. However, the agreement with quark model predictions is not as good.
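The unbinned maximum-likelihood idea used above can be illustrated on a much simpler one-parameter model, an exponential decay time rather than the form-factor fit itself; the grid search is a crude stand-in for a real minimizer, and all names are illustrative:

```python
import math

def nll(tau, data):
    """Unbinned negative log-likelihood for an exponential decay model
    p(t) = (1/tau) * exp(-t/tau): each event enters individually,
    with no histogram binning."""
    return sum(math.log(tau) + t / tau for t in data)

def fit_tau(data, lo=0.1, hi=10.0, steps=2000):
    """Minimize the NLL over a grid of tau values. For the exponential,
    the analytic MLE is the sample mean, which gives us a cross-check."""
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(grid, key=lambda tau: nll(tau, data))

# Sample mean is 2.0, so the fitted lifetime lands near 2.0.
tau_hat = fit_tau([1.0, 2.0, 3.0])
```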
 Online
 Veach, Eric.
 Stanford, Calif. : Stanford University, Dept. of Computer Science, [1998]
 Description
 Book — xxv, 406 pages : ill. ; 28 cm.
 Online
SAL3 (off-campus storage)
For use in Special Collections Reading Room: 025610, in-library use (on-site access by request)
Stacks: 025610, available
 Campbell, Keith Eugene.
 Stanford, Calif. : Stanford University, Dept. of Computer Science, [1997]
 Description
 Book — xxiv, 265 pages : illustrations ; 28 cm.
 Online
Media & Microtext Center, SAL3 (off-campus storage)
Media & Microtext Center, Microfiche collection C: 025596, in-library use
SAL3 (off-campus storage), For use in Special Collections Reading Room: 025596, in-library use (on-site access by request)
SAL3 (off-campus storage), Stacks: 025596, available
 Sim, Ida.
 Stanford, Calif. : Dept. of Computer Science, Stanford University, 1997.
 Description
 Book — xvi, 121 pages : illustrations ; 28 cm.
Media & Microtext Center, SAL3 (off-campus storage)
Media & Microtext Center, Microfiche collection C: 025599, in-library use
SAL3 (off-campus storage), For use in Special Collections Reading Room: 025599, in-library use (on-site access by request)
SAL3 (off-campus storage), Stacks: 025599, available
 Washington, D.C. : United States. Dept. of Energy ; Oak Ridge, Tenn. : distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy, 1994
 Description
 Book — 1 online resource (246 p.) : digital, PDF file.
 Summary

The performance of Japanese products in the marketplace points to the dominant role of quality in product competition. Our focus is motivated by the tremendous pressure to improve conformance quality by reducing defects to previously unimaginable limits in the range of 1 to 10 parts per million. Toward this end, we have developed a new model of conformance quality that addresses each of the three principal defect sources: (1) Variation, (2) Human Error, and (3) Complexity. Although the role of variation in conformance quality is well documented, errors occur so infrequently that their significance is not well known. We have shown that statistical methods are not useful in characterizing and controlling errors, the most common source of defects. Excessive complexity is also a root source of defects, since it increases errors and variation defects. A missing link in defining a global model has been the lack of a sound correlation between complexity and defects. We have used Design for Assembly (DFA) methods to quantify assembly complexity and have shown that assembly times can be described in terms of the Pareto distribution, in a clear exception to the Central Limit Theorem. Within individual companies we have found defects to be highly correlated with DFA measures of complexity in broad studies covering tens of millions of assembly operations. Applying the global concepts, we predicted that Motorola's Six Sigma method would only reduce defects by roughly a factor of two rather than orders of magnitude, a prediction confirmed by Motorola's data. We have also shown that the potential defect rates of product concepts can be compared in the earliest stages of development. The global Conformance Quality Model has demonstrated that the best strategy for improvement depends upon the quality control strengths and weaknesses.
 Online
Online 15. Consistency results in multiple changepoint problems [1992]
 Venkatraman, E. S.
 March 1992.
 Description
 Book — 1 online resource (138 pages) Digital: text file.
 Also online at

Online 16. On the estimation of fault slip in space and time [1991]
 Matthews, M. V.
 September 1991.
 Description
 Book — 1 online resource (229 pages) Digital: text file.
 Also online at

Online 17. Admissibility results in loss estimation [1990]
 Lele, Chitra.
 August 1990.
 Description
 Book — 1 online resource (83 pages) Digital: text file.
 Also online at

Online 18. Locally smooth density estimation [1985]
 Barron, Andrew R.
 September 1985.
 Description
 Book — 1 online resource (97 pages) Digital: text file.
 Also online at

Online 19. Moment expansions for robust statistics [1982]
 Anderson, K. M.
 1982.
 Description
 Book — 1 online resource (72 pages) Digital: text file.
 Also online at

Online 20. Sequential stochastic construction of random polygons [1982]
 George, E. I.
 1982.
 Description
 Book — 1 online resource (134 pages) Digital: text file.
 Also online at
