1. Haptic exploration of unknown objects [2000]
- Okamura, Allison Mariko.
- 2000.
- Description
- Book — xiv, 128 leaves, bound.
- Online
-
- Search ProQuest Dissertations & Theses. Not all titles available.
- Google Books (Full view)
SAL3 (off-campus storage), Special Collections
| SAL3 (off-campus storage) | Status |
| --- | --- |
| Stacks | Request |
| 3781 2000 O | Available |

| Special Collections | Status |
| --- | --- |
| University Archives | Request via Aeon |
| 3781 2000 O | In-library use |
Online 2. Autonomous navigation of a flexible surgical robot in the lungs [2019]
- Sganga, Jake Anthony, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
Lung cancer is the leading cause of cancer-related death worldwide, and early diagnosis is critical to improving patient outcomes. To diagnose cancer, a highly trained pulmonologist must navigate a flexible bronchoscope deep into the branched structure of the lung for biopsy. The biopsy fails to sample the target tissue in 26-33% of cases, largely because of poor registration with the preoperative CT map. If the localization were sufficiently precise, a closed-loop control system could drive the bronchoscope without human intervention. Automation may de-skill standard bronchoscopies, potentially reducing the cost of the procedure with a single pulmonologist monitoring multiple simultaneous procedures. We sought to enable autonomous navigation of the airways by advancing the intraoperative registration methods and the control of the flexible surgical robots. To improve intraoperative registration, we develop three deep learning approaches to localize the bronchoscope in the preoperative CT map based on the bronchoscopic video in real-time, called OffsetNet, AirwayNet, and BifurcationNet. The networks are trained entirely on simulated images derived from the patient-specific CT. The networks are evaluated on recorded bronchoscopy videos in a phantom lung and recorded videos in human cadaver lungs. AirwayNet outperforms other deep learning localization algorithms with an area under the precision-recall curve of 0.97 in the phantom lung, and areas ranging from 0.82 to 0.997 in the human cadaver lungs. To improve the control of flexible surgical robots, we develop a state estimation algorithm that adapts to the unknown contacts of the robot with the lung's environment. Using AirwayNet and the motion controller, we demonstrate autonomous driving in the phantom lung based on video feedback alone. The robot reaches four targets in the left and right lungs in 95% of the trials.
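The evaluation metric named above, area under the precision-recall curve, can be computed for any localization network from per-airway confidence scores and ground-truth labels. A minimal sketch follows; the scores and labels are illustrative placeholders, not data from the thesis.

```python
# Hedged sketch: area under a precision-recall curve, the metric used to compare
# localization networks above. Labels/scores below are made-up placeholders.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])                      # 1 = correct airway
y_score = np.array([0.9, 0.4, 0.8, 0.65, 0.3, 0.7, 0.55, 0.85])  # network confidence

precision, recall, _ = precision_recall_curve(y_true, y_score)
print(f"area under precision-recall curve: {auc(recall, precision):.3f}")
```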
- Also online at
-
- Ong, Carmichael Filbert, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
Human movement requires complex coordination between the muscular, skeletal, and neural systems. When these systems are impaired, gait pathologies can occur. Previous research has studied how neuromuscular and skeletal deficits result in abnormal, inefficient gait patterns. However, while these studies have suggested relationships between musculoskeletal parameters and observed gait, they have been limited in understanding the cause-effect relationship between these variables. Other previous work has focused on developing devices to augment human performance, both for individuals with and without impairments, but results have been mixed, likely due to the complex human dynamics and human-device interactions. This thesis describes two musculoskeletal simulation and optimization frameworks that were developed to explore whether simulations can be used to 1) probe the cause-effect relationship between muscle deficits and commonly observed pathological gait patterns and 2) help design assistive devices. For each study, the frameworks generated predictive simulations, or simulations in which movement trajectories were created without tracking any experimental data. In the first study, the framework could generate realistic simulations of walking. Plantarflexor muscle weakness or contracture, commonly observed in individuals with stroke or cerebral palsy, was then added to the model, and the model adopted gait patterns that are seen in pathologic gait. In the second study, the framework could generate realistic simulations of a standing long jump. Potential active and passive assistive devices were then added to the model, and the framework tuned the devices to increase jump distance. This work shows how simulation frameworks can be used to predict how movement would change under various conditions, leading to a deeper understanding of mechanisms behind gait pathologies and a framework to aid in designing devices to augment human performance.
- Also online at
-
Online 4. Effects of Latency and Refresh Rate on Force Perception via Sensory Substitution by Force-Controlled Skin Deformation Feedback [2018]
- Zook, Zane Anthony (Author)
- May 14, 2018
- Description
- Book
- Summary
-
Latency and refresh rate are known to adversely affect human force perception in bilateral teleoperators and virtual environments using kinesthetic force feedback, motivating the use of sensory substitution of force. The purpose of this study is to quantify the effects of latency and refresh rate on force perception using sensory substitution by skin deformation feedback. A force-controlled skin deformation feedback device was attached to a 3-degree-of-freedom kinesthetic force feedback device used for position tracking and gravity support. A human participant study was conducted to determine the effects of latency and refresh rate on perceived stiffness and damping with skin deformation feedback. Participants compared two virtual objects: a comparison object with stiffness or damping that could be tuned by the participant, and a reference object with either added latency or reduced refresh rate. Participants modified the stiffness or damping of the tunable object until it resembled the stiffness or damping of the reference object. We found that added latency and reduced refresh rate both increased perceived stiffness but had no effect on perceived damping. Specifically, participants felt significantly different stiffness when the latency exceeded 300 ms and the refresh rate dropped below 16.6 Hz. The impact of latency and refresh rate on force perception via skin deformation feedback was significantly less than what has been previously shown for kinesthetic force feedback.
- Collection
- Undergraduate Theses, School of Engineering
Online 5. Model-less control of continuum manipulators for robot-assisted cardiac ablation [electronic resource] [2015]
- Yip, Michael Chak Luen.
- 2015.
- Description
- Book — 1 online resource.
- Summary
-
Continuum manipulators are designed to operate in constrained environments that are often unknown or unsensed, relying on body compliance to conform to obstacles. The interaction mechanics between the compliant body and unknown environment present significant challenges for traditional robot control techniques based on modeling these interactions exactly. This thesis describes a novel model-less approach (i.e., no knowledge of robot mechanics or kinematics) to control continuum manipulators in unknown and constrained environments. In this approach, the controller learns the continuum manipulator Jacobians in real-time and adapts to constraints in the environment autonomously and in a safe manner. Also described is a hybrid position/force scheme, which is useful when interacting with the environment using the end-effector, where tip-constraints can cause the manipulator Jacobian estimates to become ill-conditioned. Under these control strategies, continuum manipulators can safely and effectively interact with the environment, even when these interactions present themselves as arbitrary and unknown constraints. Finally, the model-less control scheme is adapted for operating in a noisy, dynamically disturbed, beating environment. A cardiac ablation is automated to show a proof-of-concept autonomous implementation of a cardiac catheterization procedure.
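A common way to realize the idea of learning the manipulator Jacobian in real time and servoing with it is a Broyden-style rank-one update paired with a damped least-squares step. The sketch below is a generic stand-in under that reading, with hypothetical names; it is not the thesis's specific optimization-based estimator.

```python
# Hedged sketch of model-less resolved-rate control: estimate the actuator-to-tip
# Jacobian online (Broyden rank-one update) and use it to servo the tip toward a
# target. Generic stand-in, not the thesis's estimator or controller.
import numpy as np

def broyden_update(J, dx, dq, eps=1e-9):
    """Rank-one Jacobian update from observed tip motion dx and actuator motion dq."""
    return J + np.outer(dx - J @ dq, dq) / (float(dq @ dq) + eps)

def control_step(J, x, x_target, damping=0.1, gain=0.5):
    """Damped least-squares actuator step toward the target tip position."""
    err = gain * (x_target - x)
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(len(x)), err)

# Toy usage: 3 actuators driving a 2-D tip; the plant Jacobian is hidden from the controller.
J_est = np.eye(2, 3)                                       # rough initial guess
x, x_target = np.zeros(2), np.array([0.5, -0.2])
true_J = np.array([[0.8, 0.1, -0.3], [0.0, 0.6, 0.4]])     # hidden "plant"
for _ in range(50):
    dq = control_step(J_est, x, x_target)
    dx = true_J @ dq                                       # measured tip motion in practice
    J_est = broyden_update(J_est, dx, dq)
    x = x + dx
print("final tip error:", np.linalg.norm(x_target - x))
```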
- Also online at
-
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request via Aeon |
| 3781 2015 Y | In-library use |
- Howell, Taylor, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
This dissertation takes an optimization-first approach to the development of tools for simulation, planning, and control for robotic systems
- Also online at
-
- Gonzalez, Eric Jordan, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Virtual reality (VR) systems are inherently limited by their inability to replicate physical reality. Even as technology advances, VR experiences will always be subject to the constraints of the user's hardware setup and external environment. However, the immersive nature of VR systems allows them to create convincing illusions that take advantage of our bias towards believing what we see. Reach redirection is one such illusory technique that influences where users believe their hand is in space. This is done by gradually offsetting the virtual representation of the user's hand during reach. Researchers have used this to alter the perceived properties of real-world objects and enable more physically ergonomic layouts of virtual environments. While there has been considerable research studying the usefulness and perceptibility of redirection, very little focus has been placed on how it works from a sensorimotor perspective. In this thesis, I apply a sensorimotor lens to the study of redirected interactions -- particularly through computational modeling -- to enable more robust and diverse redirection techniques that better handle the complexities of real-world interactions. First, I illustrate how modeling movement duration improves interactions with dynamic encountered-type haptic devices during redirected reaching. Next, I introduce a more adaptable, user-aware approach to redirection using real-time model predictive control. Finally, I present a stochastic sensorimotor simulation of redirected reaching and demonstrate how it can be used to gain insights about the effects of visual attention. Throughout this thesis, I highlight how incorporating sensorimotor control principles can improve the study of redirected reaching and further extend users' experience in VR beyond the physical limitations of reality
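The core manipulation described above, gradually offsetting the virtual hand as the reach progresses, can be written as a warp whose offset ramps with normalized progress toward the target. The sketch below uses a simple linear ramp and hypothetical names; it illustrates the idea only and is not the thesis's specific redirection technique.

```python
# Hedged sketch of reach redirection: show the virtual hand offset from the real
# hand by an amount that grows with reach progress, so the user unknowingly
# steers toward a displaced physical location. Ramp shape is illustrative.
import numpy as np

def redirected_hand(real_hand, reach_start, virtual_target, full_offset):
    """Virtual hand position for the current real hand position (linear ramp)."""
    total = np.linalg.norm(virtual_target - reach_start)
    progress = np.clip(np.linalg.norm(real_hand - reach_start) / total, 0.0, 1.0)
    return real_hand + progress * full_offset

# Toy usage: redirect a 30 cm forward reach by 5 cm to the right.
start = np.array([0.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.3])
offset = np.array([0.05, 0.0, 0.0])
print(redirected_hand(np.array([0.0, 0.0, 0.15]), start, target, offset))
```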
- Also online at
-
Online 8. Design and perception of wearable multi-contact haptic devices for social communication [2021]
- Nunez, Cara Mae, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
During social interactions, people use auditory, visual, and haptic (touch) cues to convey their thoughts, emotions, and intentions. Current technology allows humans to convey high-quality visual and auditory information but has limited ability to convey haptic expressions remotely. As people interact more through digital means rather than in person, it becomes important to be able to effectively communicate emotions remotely through touch as well. Systems that convey haptic signals could allow for improved distant socializing and empathetic remote human-human interaction. Due to hardware constraints and limitations in our knowledge regarding human haptic perception, it is difficult to create haptic devices that completely capture the complexity of human touch. This dissertation presents novel methods for the design and control of wearable multi-contact haptic devices, explores human haptic perception with these devices, and highlights how these devices can be used for various forms of social communication. First, we present the design, modeling, and control of two devices that use actuation of discrete contact points to create the illusion of a continuous and pleasant stroking sensation on the arm similar to what is felt during calming and comforting touch between humans. User studies validated that the most continuous and pleasant tactile stroking sensations are created by control parameters that produce apparent motion at speeds stimulating the touch receptors in the skin that selectively respond to stroking. We also present two user studies: one which confirms the realism of the sensation applied and its similarity to human-human social touch and another that explores the effect of spacing between discrete contact points on the illusion of a continuous, pleasant stroking sensation. Second, we explore human haptic perception of touch cues on the forearm. We present a user study in which we directly compare the continuity and pleasantness of the stroking sensation generated via the devices described in the previous chapter and investigate how many contact points are necessary to create the illusion of tactile stroking. We find that the illusion of a continuous and pleasant stroking sensation can be created with as few as four discrete contact points. We also introduce a data-driven method for generating haptic signals that can communicate emotions. We present a user study in which we collected human-human touch data from couples and close friends communicating emotions through touch to the forearm in order to create a naturalistic social touch dataset. We use the touch data to produce haptic signals and validate that the signals can successfully communicate emotions with a multi-contact wearable haptic device. Last, we present the design and control of novel haptic devices that display normal skin deformation to the human back (dorsum) using arrays of soft pneumatic actuators. We targeted our haptic feedback to the back as it is a contact location during hugging interactions that is socially acceptable to test. We introduce the concept of macro-mini pneumatic actuation and how it can be used in wearable devices to help conform the device to the user's body and provide more salient haptic stimuli. A user study validated that participants have lower detection thresholds and can better localize the provided stimuli from macro-mini pouch actuators compared to a more traditional single pneumatic actuator.
We discuss how the soft haptic vest containing the array of macro-mini actuators and the results from the user study can be used to replicate human-human hugging interactions. This dissertation presents new designs and control schemes for wearable multi-contact haptic devices for social communication and key insights regarding human haptic perception. The findings and technologies presented in this thesis could serve a variety of applications including social haptic devices for touch therapy, mediated social touch, and teleoperated social-physical human-robot interaction
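The apparent-motion control described above reduces, at its simplest, to staggering the onset of discrete contact points so the stimulation sweeps along the arm at a chosen speed. The timing sketch below illustrates only that relationship; the durations, overlaps, and force profiles used by the thesis's devices are not reproduced here.

```python
# Hedged sketch: onset times for discrete contact points that create apparent
# tactile motion at a chosen stroking speed along the arm. Values illustrative.
def stroke_onsets(n_contacts, spacing_m, speed_m_per_s):
    """Onset time (s) of each contact point for a stroke at the given speed."""
    delay = spacing_m / speed_m_per_s   # time for the stroke to travel one gap
    return [i * delay for i in range(n_contacts)]

# Toy usage: 4 contact points spaced 25 mm apart, stroking at 3 cm/s.
print(stroke_onsets(4, 0.025, 0.03))    # [0.0, 0.833..., 1.666..., 2.5]
```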
- Also online at
-
Online 9. Design and control of soft shape-changing robots [2020]
- Usevitch, Nathan Scot, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
For robots to be useful in the real world, they must be human-safe, adaptable to different tasks, robust to uncertainty, and untethered from external power. Enabling these capabilities requires the codesign of both the physical robot and its controller. In this talk, I will illustrate this approach through the development of a large-scale, shape-changing, inflated soft robot. First, I will describe the control of robotic trusses, or robots that consist of many linear actuators connected at universal joints to form a shape-changing structure. Using techniques from rigidity theory, I present a kinematic model and control methodology that enables control of arbitrary structures. Then, I will demonstrate these controllers in practice for a new type of inflated truss robot called an "Isoperimetric" Truss Robot, or a truss in which the total edge length is conserved. In this case, the vertices of the truss-structure consist of robotic rollers that pinch inflated fabric tubes that comprise the robot's structure, inducing effective joints. The robot changes its shape by driving the rollers along the tube, lengthening one edge and shortening another by moving the joint. As the overall edge length, and hence inflated volume, remains constant, no tether to external air is required. The resultant robot can locomote with a punctuated rolling gait, grasp objects using its inherent compliance, operate safely around humans, and can be manually reconfigured based on its task. Last, I will present a distributed control framework for truss-like robots. In this framework, each individual actuator utilizes local computation and communication to determine how to act individually to allow the overall robot to accomplish a task. Together, these contributions allow for modular, human-safe robots that are capable of controllably changing their shape
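The defining constraint of an isoperimetric truss, that the rollers only redistribute tube between edges so the total edge length stays constant, can be expressed as a null-space projection of commanded edge-length rates. The sketch below illustrates only that constraint, under that reading; it is not the thesis's controller.

```python
# Hedged sketch: enforce the isoperimetric constraint sum(edge-length rates) = 0
# by projecting a desired edge-rate command onto the constraint's null space.
import numpy as np

def project_isoperimetric(desired_rates):
    """Remove the mean rate so the total edge length is conserved."""
    rates = np.asarray(desired_rates, dtype=float)
    return rates - rates.mean()

dl = project_isoperimetric([0.02, -0.005, 0.0, 0.01])   # m/s per edge
print(dl, "sum =", dl.sum())                            # sum is ~0: length conserved
```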
- Also online at
-
Online 10. Model-based design and control of deformable robots and haptic devices [2020]
- Koehler, Margaret Irene Schatz, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Haptic interfaces are used in training, guidance, and teleoperation to provide force and tactile feedback to a user from a virtual or remote environment. In contrast to the rigid components typically comprising haptic devices, compliant materials could enable new haptic devices that move via deformation rather than via joints. However, the design and control of devices consisting of soft or deformable components is challenging, due to the complex coupling between actuation and end-effector motion and due to a lack of sensing. This thesis considers the potential for computational model-based methods to address challenges in the design and control of deformable haptic devices. In the first half of this thesis, we introduce a new haptic shape display made of soft materials for organ simulation for medical training. The device is a continuous, fully 3D shape-changing surface that a user can touch and hold. We develop a mass-spring model of the device that allows us to understand how different pneumatic actuators affect the shape. Further, we develop an automated design algorithm based on a heuristic controller and a simulation of the device to determine how to arrange a limited number of actuators to reproduce target shapes. This device and its associated algorithms show how a model can be used to understand complex deformations due to pneumatic actuation of a compliant system. The second half of this thesis focuses on the precise control of deformable haptic devices. Two deformable kinesthetic haptic devices are introduced: a 2-DOF planar device and a 5-DOF device. In these devices, forces are transmitted from the actuators to the user via deformable transmissions which allow for easy fabrication and reduced mass and friction in the device. Using these devices, we demonstrate methods for design, sensing, and control. We derive a general formula for the mapping of actuator stiffness to end-effector stiffness and verify it using the planar device. For sensing, we present results in model-based sensing using only actuator information or using model-calibrated embedded sensors. With these techniques, we can use the deformation of the device to measure the force applied to the device without an external force sensor. We extend techniques from rigid robot workspace analysis to deformable robots. Finally, we develop control techniques for haptic rendering which compensate for device mechanics and characterize the resulting forces. Together, these results in design and control show that model-based methods can overcome some of the challenges of deformable devices to enable new haptic interfaces
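For orientation on the actuator-to-end-effector stiffness mapping mentioned above, the textbook small-displacement stiffness-reflection relation through a transmission with Jacobian J (end-effector displacement per unit actuator displacement) is given below. This is a standard rigid-robot relation, not necessarily the general formula derived and verified in the thesis for deformable transmissions.

```latex
% Textbook stiffness reflection through a transmission (orientation only; the
% thesis derives and verifies its own general mapping for deformable devices).
% x = f(q): end-effector pose as a function of actuator displacements
% J = \partial x / \partial q, \quad K_q: actuator stiffness matrix
K_x = J^{-T} K_q J^{-1}
\quad \text{(neglecting the configuration-dependent term involving } \partial J / \partial q \text{)}
```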
- Also online at
-
Online 11. Proactive communication for human-robot interaction [2018]
- Che, Yuhang, author.
- [Stanford, California] : [Stanford University], 2018.
- Description
- Book — 1 online resource.
- Summary
-
As robots move from isolated industrial settings into everyday human environments, we need to consider how they should interact with people. Emerging applications range from delivery and support in warehouses, to home and social services. Compared to isolated workspaces, human environments impose more challenges. For example, in social navigation, the robot needs to reach its destination while avoiding people and navigating in a socially compliant way. However, human behaviors are usually complex, stochastic, and hard to predict, which creates difficulties for the robot to plan appropriate actions. To address these challenges, this thesis focuses on the use of simple, proactive haptic communication to facilitate interactions with humans in various scenarios. The core research idea is that, instead of reacting to humans in the environment, the robot should proactively use communication to exchange information and affect human behaviors. This thesis addresses the following topics: (1) the effects of communication on human behavior, (2) mathematical models that predict these effects and human actions, and (3) algorithms to plan for communications that improve efficiency and performance of the human-robot system. We study these topics in the context of three applications. We first present the design and application of a bidirectional communication scheme for a person-following robot. Then we discuss the use of implicit and explicit communication in a mobile robot social navigation scenario. Finally, we present methods for communicating directional information for human navigation guidance.
- Also online at
-
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request via Aeon |
| 3781 2018 C | In-library use |
Online 12. Real-time visualization of ionizing radiation for improving quality in radiotherapy [electronic resource] [2017]
- Jenkins, Cesare Hardi.
- 2017.
- Description
- Book — 1 online resource.
- Summary
-
Radiation therapy is an integral part of the current treatment regimen for nearly all cancer patients. The success of radiation therapy depends heavily upon the radiation being delivered in the correct amounts to the correct locations at the correct time. Unfortunately, technological limitations currently require that nearly all radiation therapy treatments are performed without an effective feedback mechanism and typically without any in vivo verification. Rather, the field relies heavily upon careful, but separate, control of radiation delivery and patient positioning. As delivery systems and techniques advance in speed and complexity, it is vital that new methods for verifying system performance and, if possible, in vivo verification be developed. This dissertation sets forth a method by which high-energy x-rays may be visualized using a simple digital camera. Utilizing such a camera enables simultaneous capture of information about x-rays and the subject with which they are interacting. This concurrent capture of information enables a transformative view of radiation therapy beams and enables several critical applications including treatment monitoring and autonomous quality assurance measurements of both mechanical and dosimetric aspects of machine performance. The approach is first demonstrated in a real-time treatment monitoring application. Here, a flexible scintillating sheet is placed on and allowed to conform to the patient. As the radiation beam traverses the sheet, visible light photons are emitted and captured by a nearby digital camera. Ambient room light reflected from the surface of the patient is also captured by the camera. This concurrent capture of information allows a real-time view of both the patient and the impinging radiation beam. This provides a powerful, in context, view of radiation therapy not previously possible. The system is characterized and shown to operate successfully across a variety of room lighting conditions. The second application for which the approach is used is autonomous mechanical quality assurance measurements for linear accelerators. For this application, a system is designed to autonomously perform several measurements used to evaluate the mechanical accuracy of the linear accelerator such as the agreement between the intended and delivered field size and its alignment with various patient alignment systems. The system consists of a radioluminescent phantom and a collimator-mounted digital camera. The system includes a self-calibration routine that automatically compensates for variation in phantom setup. System-generated measurements are compared to measurements from existing gold-standard techniques and found to be consistent. Measurement uncertainty is shown to be less than 1 mm and independent of phantom setup. The system is able to perform a set of tests normally taking over 1 hour in approximately 10 minutes. Finally, the approach is evaluated for its potential to perform dose measurements. A new phantom is designed that includes additional optical calibration fiducials. An image processing workflow is developed to compensate for several potential sources of uncertainty in the measurements. The system is then characterized as a detector and shown to have uncertainties of approximately 1% for output, or single-point, measurements and 3% for profile, or spatial, measurements. The results show that the system is invariant to room lighting and camera-to-phantom pose.
The detector exhibits a moderate over-response to field size changes, a characteristic common to phosphor-based planar imaging devices. The system also exhibits a dependence on dose rate, though early investigation indicates this may be an artifact of the image processing algorithm. The applications presented in this dissertation demonstrate a radioluminescent-phosphor-based approach for visualizing, monitoring, and evaluating radiation therapy beams. This capability may lead to improved quality in radiotherapy by enabling advanced quality assurance measurements as well as increasing the precision and uniformity of quality assurance measurements while decreasing the time and complexity of performing such measurements. By so doing, it may be possible to enable safe, high-quality radiation therapy delivery in areas of the world where a lack of trained personnel limits the current quality and availability of care. The technique may also lead to new methods for treatment monitoring, in vivo verification and closed-loop delivery.
- Also online at
-
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request via Aeon |
| 3781 2017 J | In-library use |
- Salvato, Millie Aila, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
The COVID-19 pandemic highlighted the value of remote social communication between individuals. The fundamentals of such communication are actively studied for virtual reality interaction, remote video calls, and social networking, but research on these methods infrequently focuses on encoding an explicit social use case. In addition, systems that do incorporate modalities for explicit social information, such as touch, are often hand-tuned or manually generated. In this work, we seek to leverage data-driven methods to improve social interaction between individuals in virtual environments. In order to socialize effectively in shared virtual environments, we seek to learn how a person physically interacts in such environments. For the first contribution of this thesis, we develop the interaction-expectation model to improve hand tracking and interaction. The purpose of this model is to predict hand-object interaction before such interaction occurs. This allows for smooth interaction in virtual environments, which can be used to improve haptic feedback in shared-object settings. We find that we are able to predict human-object interaction before it occurs over short timescales (approximately 100 ms). In order to improve social interaction, we must also understand the emotional intent of actions between individuals. In the second contribution of this thesis, we collected a dataset of pairs of individuals interacting to convey affective information through touch. We record the data using a soft pressure sensor on one participant's arm. This dataset was collected in a more natural environment than existing ones, and utilized scenario prompts rather than single-word prompts. For the third contribution, we then develop a system to automatically convey social touch information using our dataset. We develop an algorithm that leverages computer vision to map from the recorded data to an actuator sleeve with an array of actuators which indent the skin. We find that humans interpret the affective intent of our system with accuracy comparable to human touch interaction. Finally, we consider the visual mode of social experience by improving affective facial expressions for 2D virtual avatars. In recent years, socialization via virtual avatars has dramatically increased, with the growing use of 2D drawn, rigged virtual avatars with face tracking. As a last contribution of this thesis, we create a novel dataset of 2D avatar expressions, with higher quality and richer data than previous datasets. We then propose use cases for this dataset to automate the creation of 2D avatars. Through the collection of contributions in this thesis, we seek to push the field of virtual social interaction forward with multi-modal interaction. This will allow people to interact in virtual environments, connect with remote loved ones, and represent a version of themselves online more easily and effectively
- Also online at
-
Online 14. Understanding and learning robotic manipulation skills from humans [2022]
- Galbally Herrero, Elena, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Humans are constantly learning new skills and improving upon their existing abilities. In particular, when it comes to manipulating objects, humans are extremely effective at generalizing to new scenarios and using physical compliance to our advantage. Compliance is key to generating robust behaviors by reducing the need to rely on precise trajectories. Inspired by humans, we propose to program robots at a higher level of abstraction by using primitives that leverage contact information and compliant strategies. Compliance increases robustness to uncertainty in the environment and primitives provide us with atomic actions that can be reused to avoid coding new tasks from scratch. We have developed a framework that allows us to: (i) collect and segment human data from multiple contact-rich tasks through direct or haptic demonstrations, (ii) analyze this data and extract the human's compliant strategy, and (iii) encode the strategy into robot primitives using task-level controllers. During autonomous task execution, haptic interfaces enable human real-time intervention and additional data collection for recovery from failures. The framework was extensively validated through simulation and hardware experiments, including five real-world construction tasks
- Also online at
-
Online 15. Visual force estimation in robot-assisted minimally invasive surgery [2022]
- Chua, Zonghe, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Tissue handling and force sensitivity are important skills for surgeons to possess to conduct safe and effective surgery. In robot-assisted minimally invasive surgery (RMIS), where a surgeon teleoperates a multi-arm robot equipped with endoscopic tools without haptic feedback, surgeons rely heavily on visual feedback to estimate the amount of force they are applying to tissue. Thus, good tissue handling and force sensitivity in RMIS are difficult to achieve. To develop general surgical skill, researchers and surgical educators have provided objective performance measurement and multi-sensory training. However, attempts to do this for tissue handling have proven challenging. This dissertation presents work towards teaching and evaluating surgeon tissue handling ability in RMIS by studying how humans and machines can learn to estimate force visually. First, I present an experiment to understand how different forms of prior haptic experience inform a teleoperator's ability to perform visual force estimation for a previously learned task and an unseen task. The results of the experiment show that, for a retraction task on silicone samples, human teleoperators relied on a proprioception heuristic as opposed to a visually informed learned representation of tissue stiffness to perform the task. However, when performing visual force estimation during an unseen silicone palpation task, teleoperators who previously performed the silicone retraction task manually had the best speed-accuracy performance, suggesting that because they learned the visual force estimation task under the same motion scaling as other manipulations encountered in daily living, they successfully used that prior experience to improve their performance of the unseen task. Having shown that human teleoperators are able to learn how to estimate forces visually after training with haptic and visual feedback, I investigate whether neural network systems can perform a similar form of visual force estimation. I present a multimodal neural network and a mock tissue manipulation dataset for performing visual force estimation. I evaluate the multimodal network and its unimodal variants on their generalization performance over different viewpoints, unseen tools and unseen materials, as well as the contribution of different state inputs to the performance of the networks. I found that vision-based force estimation neural networks can generalize over changes in viewpoints and robot configurations, as well as unseen tools, while having faster performance than existing recurrent neural networks. As expected, kinematic information was less useful for estimating force than joint torque or force information in networks that relied on robot state inputs. Including both types of inputs resulted in the best performance. Following this, I show how the neural network-based force estimates perform when used for real-time kinesthetic haptic feedback on an RMIS robot. I present a novel approach which models the teleoperation dynamics and measures stability using a passivity-based metric. Networks that used robot state inputs were more transparent but displayed lower stability compared to a network that only used visual inputs. Due to the inaccessibility of accurate end-effector force sensing for RMIS tools, the studies above are limited to one-handed teleoperated manipulation. To address this hardware limitation, I present an open-source three-degree-of-freedom force sensor for bimanual RMIS research applications.
I describe the theoretical principles behind the sensor design, as well as the manufacturing approaches that enable the sensors to be easily built and modified by other researchers in the field. I characterize the performance of the force sensor both as a standalone sensor and in a dual-jaw setup mounted on an existing RMIS tool. I found that the sensor achieved an accuracy that was below what is detectable by human haptic perception in the range of forces typical of tissue manipulation. The design of the sensor was shown to be robust to manufacturing variations, maintaining desired accuracy over two separate builds of the sensor. When used in a dual-jaw configuration, the sensor was also capable of measuring grip force in the range used for delicate tissue manipulation
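The abstract does not spell out the passivity-based stability metric. A textbook passivity observer, which requires the net energy injected at the haptic port to remain non-negative, is a common instance of such a metric and is sketched below as an illustration only; it is not necessarily the exact metric used in the dissertation.

```python
# Hedged sketch of a passivity observer: accumulate the net energy injected at a
# teleoperation/haptic port; a negative running sum flags potentially unstable,
# active behavior. Generic textbook metric, not necessarily the thesis's own.
import numpy as np

def passivity_observer(forces, velocities, dt):
    """Running net energy E[k] = sum_j f[j] . v[j] * dt; passivity requires E >= 0."""
    power = np.einsum("ij,ij->i", np.atleast_2d(forces), np.atleast_2d(velocities))
    return np.cumsum(power) * dt

# Toy usage with made-up 1-D force/velocity samples at 1 kHz.
f = np.array([[0.5], [0.6], [0.4], [-0.2]])
v = np.array([[0.01], [0.02], [0.015], [0.01]])
E = passivity_observer(f, v, dt=1e-3)
print("passive so far:", bool((E >= 0).all()))
```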
- Also online at
-
Online 16. Design and control of soft truss robots [2021]
- Hammond, Zachary Michael, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
The shape-changing ability of truss robots has promising implications for the adaptability of robotic systems to a range of tasks and environments. Truss robots are a class of robots with truss-like structures in which members can change in length to effect global shape change of the robot. Typically, truss robots are networks of interconnected linear actuators joined at universal joints. Simulations of these systems have imagined interesting behaviors such as locomotion through unstructured environments, shrinking in volume to fit in tight spaces, shoring rubble, or conveying information to a human through their form. A number of researchers have been compelled by this truss robotic concept to build physical embodiments. However, these systems have proven difficult to realize. One of the most significant challenges is designing custom linear actuators that have high extension ratios. This is an important characteristic because the extension ratio of the actuators, together with the structural topology, defines the range of shapes that can be assumed. A common approach to high-extension actuation is to cascade multiple low-extension actuators (like a lead screw actuator) together. This has worked at the cost of added complexity but exacerbates the second challenge: resiliency. Due to the rigidity and complexity of many of these actuation solutions, they are often prone to failure. Many of them lack a mechanism to absorb and dissipate energy and will break or jam after exposure to high impact forces. Furthermore, this rigidity can limit truss robots from certain behaviors like safe operation in the presence of people, leveraging mechanical intelligence to reduce control complexity, and direct interaction with objects in the environment. Compliance in truss robots may enable new modes of interaction with the outside world that rigid truss robots are not well suited for. One compelling interaction that can leverage the compliance of a truss robot is the grasping and manipulation of objects. A compliant truss robot can change shape to engulf an object, grasp it between two or more members, and manipulate it within the robot's environment. Similar to soft grippers, the compliance of the structure affords large contact areas with even force distribution, allowing for successful grasping with imprecise open-loop control. In this dissertation, we present methods for high-extension and compliant actuation in truss robots and explore how the compliance can be utilized for unique behaviors. First, we develop a high-extension and compliant pneumatic linear actuator, called the pneumatic reel actuator, that is designed to be used within a truss robot. The actuator consists of an inflated tube that is stored within a reel. As the tubing inflates, the flexible, but mostly inextensible, tubing forms into a cylindrical beam with significantly increased stiffness. As the volume of air inside the actuator rises, more of the tubing is pulled out of the reel to form the beam---lengthening the actuator and storing energy in the springs. Next, the insight gained from the development of that actuator facilitates a shift in our design strategy that enables a full large-scale truss robot with compliant elements capable of operating untethered from a source of compressed air or energy. This robot is composed of inflatable, constant-length tubes that are manipulated by a collective of simple robotic roller modules to form a truss-like structure that can change shape without a pressure source. 
This shape change can be applied to locomotion, physical interaction with the environment, and the engulfing, grasping, and manipulation of objects. Finally, we explore in greater depth the grasping and manipulation of objects with these compliant truss robots. The compliance of the members affords large contact areas with even force distribution, allowing for successful grasping even with imprecise open-loop control. We present methods of analyzing and controlling isoperimetric truss robots in the context of grasping and manipulating objects
- Also online at
-
Online 17. Design, modeling, and control of vine robots for exploration of unknown environments [2021]
- Coad, Margaret Mary, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
Robots have great potential to be our eyes and hands in spaces too small or dangerous for humans to enter. Minimally invasive surgery, urban search and rescue, and water pipeline inspection are examples of applications where such robots could improve human health, safety, or productivity. However, most of today's robots are unsuitable for practical use in these applications, in large part because their physical bodies lack the necessary properties to navigate and manipulate the environment in these spaces. This dissertation focuses on a relatively unexplored robotic paradigm--robotic movement via plant-like tip-growth--and its application for exploration of unknown environments. In particular, we study soft growing "vine robots," which lengthen from the tip by turning their body material inside out using internal fluid pressure, and are well suited for exploration of small spaces. First, we present research on design and human-in-the-loop control of vine robots, which allowed us to successfully use them in the field to explore small tunnels in an archeological site in Chavin, Peru. Then, we present work on design, modeling, and control to expand the capabilities of these robots in three areas: enabling controlled reversal of growth, transmitting pulling forces to the environment from the robot tip, and sensing both the environment around the robot and the robot's own shape
- Also online at
-
Online 18. Enhancing electromagnetic performance of metal devices : from nanoscale materials studies to device optimization and macroscale reconfiguration [2021]
- Gan, Lucia Tang, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
The interaction of electromagnetic waves with metals results in a rich diversity of physical phenomena across the electromagnetic spectrum. Metals are ubiquitous in many applications ranging from antennas for radio frequency communication to plasmonic waveguides designed for subwavelength light confinement, and there exist many varying strategies for improving the performance of these metallic electromagnetic devices ranging from microwave to optical frequencies. This thesis presents five main studies that are linked to the enhancement of electromagnetic performance of metal devices, from investigations of nanoscale metal crystal structure all the way to large-scale metallic antenna reconfiguration. First, we introduce a fabrication technique termed metal-on-insulator rapid melt growth. We demonstrate that rapid melt growth is a high-throughput, high-yield crystal growth process that can be harnessed to produce gold bicrystals on a silicon dioxide substrate to enable fundamental grain boundary studies. Next, we investigate the mechanical properties of gold grain boundaries with tailored misorientation angles produced via rapid melt growth. Through in situ transmission electron microscope tensile testing, we found that grain boundaries are inherently strong and failure occurs transgranularly via twin formation or plastic collapse. Next, we report that inhomogeneous distributions of platinum impurities in nanoscale structured gold have a large impact on the thermoelectric properties of gold. As well, in the absence of platinum, the photothermoelectric response of gold is sensitive to subtle changes in crystal structure, indicating that certain lattice defects should be carefully considered in thermoelectric devices. Next, we demonstrate that inverse design methods can be used to improve the coupling efficiency of plasmonic fiber-to-slot grating couplers through a boundary optimization algorithm. Additionally, we design grayscale, multilayer dielectric metamaterials operating in the radio frequency regime with improved functionality using topology optimization and additive manufacturing techniques. Lastly, we demonstrate the use of pneumatically-driven soft continuum robots to construct deployable monopole antennas that are capable of controlling operating frequency. As well, we show that soft robotic actuation strategies can be exploited to actively control the polarization state of a helical antenna
- Also online at
-
Online 19. Sensorimotor control of lower-limb assistive devices [2021]
- Welker, Cara Gonzalez, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
Human sensorimotor control is extremely complex. Because of this, it remains an open question how best to control lower-limb devices, such as exoskeletons and robotic prostheses, that seek to assist in human movement. A common approach is to attempt to replicate average biological joint torques, but this does not incorporate individualized differences in movement or the sensorimotor control loop that allows humans to adjust motor commands in response to sensory feedback. My dissertation comprises three different approaches to device control, with each subsequent approach providing greater incorporation of user input. First, I discuss a passive assistive device for running that is successful in reducing human metabolic cost due to gait adaptation to the device and the complex and interconnected biomechanics of running. Second, I present an approach for customizing the control parameters of robotic prostheses for individuals with lower-limb amputation. Finally, I present a system that closes the sensorimotor control loop of robotic prostheses, in which an individual with amputation teleoperates their own prosthetic ankle and receives sensory feedback regarding its behavior via a wearable wrist exoskeleton. This work contributes to the development of future lower-limb assistive devices not only through novel device control strategies but also through insights regarding human interaction with assistive devices
- Also online at
-
Online 20. Electrosoft : soft electrostatic technologies for cutaneous interaction [2020]
- Han, Kyung Won, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Traditional robots use rigid materials that allow for precise fabrication and controls. However, soft robots have drawn considerable attention in recent years because they are inherently safe for human interaction. Although it is more difficult for soft robots to achieve large forces or precise motion control, they outperform traditional robots in certain criteria like being lightweight, human-safe, and suitable for handheld or wearable devices. Therefore, many researchers have focused on these aspects and have developed soft, small, and lightweight actuators. However, this field still faces challenges in providing enough displacement and force in a compact package and in reducing power consumption. Electrostatic actuation is a promising solution for achieving compactness and low power consumption in soft robots. This combination of electrostatic actuation and soft robotic structures underlies the work presented in this thesis, which has applications in cutaneous interaction for humans and robots, namely haptics and gripping. For haptic display, the main challenges include miniaturization and obtaining adequate bandwidth to display stimuli of interest. I present two different applications that take advantage of the unique capabilities of soft electrostatic actuators. Cutaneous haptic feedback devices were developed to impart sensations to the fingertip skin in a teleoperated or virtual environment. First, I present a stackable electroactive polymer (EAP) actuator designed to actuate a small handheld haptic device. This device communicates the needle tip forces to physicians during teleoperated image-guided needle interventions. Tests confirm that the device is magnetic resonance (MR) compatible. Tests with human subjects explored robotic and teleoperated paradigms to detect when the needle punctured a silicone membrane. In each paradigm, users detected membranes embedded in tissue phantoms with 98.9% and 98.1% success rates, respectively. Second, I designed a miniature dielectric fluid transducer (DFT) that has a large strain and fast response. The actuators can be packed closely and controlled individually to create dynamic texture displays, suitable for active surface exploration with fingertips. The simulation results show that the width of the actuator can be reduced without affecting the performance, which is useful for miniaturizing the device. Tests with human subjects show that users differentiated simple bump patterns with a 98.8% success rate. In soft gripping, a challenge is to find ways to enhance adhesion, which depends on the area of contact between gripping surfaces. Again, I present solutions that combine soft robotics and electrostatic actuation to achieve a synergistic effect. I developed electrostatic gecko-inspired adhesive technology for robotic end-effectors, particularly for two-armed mobile robots, to minimize their force and power consumption and expand their ability to handle bulky objects. Tests show that (i) the hybrid of electrostatic and van der Waals adhesion surpasses each technology's separate performance, (ii) it reduces the force and power needed to manipulate large objects, and (iii) a wide range of shapes and surfaces can be handled with the hybrid adhesive technology. In summary, I developed soft electrostatic technologies that are MR-compatible, compact, lightweight, and have low power consumption. Using these principles, I designed and characterized different actuators to overcome specific challenges in haptics and gripping.
In each case, the combination of electrostatics and soft materials -- "electrosoft" technology, to coin a term -- matches favorably with the requirements of soft, cutaneous human-robot interaction
- Also online at
-