1. Artful Design : technology in search of the sublime [2018]
- Wang, Ge, 1977- author, designer.
- Stanford, CA : Stanford University Press, [2018]
- Description
- Book — 487 pages : chiefly color illustrations ; 26 cm
- Summary
-
What we make, makes us. This is the central tenet of Artful Design, a photorealistic comic book that examines the nature, purpose, and meaning of design. A call to action and a meditation on art, authenticity, and social connection in a world disrupted by technological change, this book articulates a fundamental principle for design: that we should design not just from practical needs but from the values that underlie those needs. Artful Design takes readers on a journey through the aesthetic dimensions of technology. Using music as a universal phenomenon that has evolved alongside technology, this book breaks down concrete case studies in computer-mediated toys, tools, games, and instruments, including the best-selling app Ocarina. Every chapter elaborates a set of general design principles and strategies that illuminate the essential relationship between aesthetics and engineering, art and design. Ge Wang implores us to both embrace and confront technology, not purely as a means to an end, but in its potential to enrich life. Technology is never a neutral agent, but through what we do with it, through what we design with it, it provides a mirror to our human endeavors and values. Artful Design delivers an aesthetic manifesto of technology, accessible yet uncompromising.
(source: Nielsen Book Data)
- Online
Green Library, Music Library
Online 2. Opportunity Aligned with Interests [2012]
- Wang, Ge, 1977- (Speaker)
- Stanford (Calif.), October 3, 2012
- Description
- Video — 1 digital video file
- Summary
-
Smule co-founder Ge Wang tells the origin story of his company, which seeks to connect the world through music and expression. Wang describes himself as an "accidental entrepreneur" who chose to start a company when he and his co-founder saw alignment between their research interests and emerging opportunities in the mobile space.
- Collection
- Stanford Technology Ventures Program, Entrepreneurial Thought Leaders Seminar, videorecordings
Online 3. When Technology Disappears and Creates Calm [2012]
- Wang, Ge, 1977- (Speaker)
- Stanford (Calif.), October 3, 2012
- Description
- Video — 1 digital video file
- Summary
-
Smule co-founder Ge Wang discusses how the most profound technologies eventually disappear into the background of daily life. Building on ideas of the late Mark Weiser, former chief scientist at Xerox PARC, Wang explains the importance of technology's ability to create calm.
- Collection
- Stanford Technology Ventures Program, Entrepreneurial Thought Leaders Seminar, videorecordings
Online 4. A Startup in Harmony [Entire Talk] [2012]
- Wang, Ge, 1977- (Speaker)
- Stanford (Calif.), October 3, 2012
- Description
- Video — 1 digital video file
- Summary
-
Co-founders Ge Wang and Jeff Smith share how their passion for music and technology discovered its full voice in the founding of Smule, whose applications seek to liberate the musician in everyone. Wang emphasizes how technology should enable human connection and reaction, and Smith shares insights on the mobile space and the importance of product focus.
- Collection
- Stanford Technology Ventures Program, Entrepreneurial Thought Leaders Seminar, videorecordings
Online 5. Cell Phone Orchestra [2009]
- Stanford University. News and Publications Service (Producer)
- Stanford (Calif.), February 11, 2009
- Description
- Video — 1 MiniDV tape
- Summary
-
A recording of the Stanford Mobile Phone Orchestra, led by Ge Wang, an assistant professor of music at Stanford's Center for Computer Research in Music and Acoustics (CCRMA).
- Collection
- Stanford University, News and Publication Service, audiovisual recordings, 1936-2011 (inclusive)
Online 6. iPhone Application Development, CS193, 2009-06-01 [2009]
- Stanford University. News and Publications Service (Producer)
- Stanford (Calif.), June 1, 2009
- Description
- Video — 1 MiniDV tape
- Collection
- Stanford University, News and Publication Service, audiovisual recordings, 1936-2011 (inclusive)
7. Programming for musicians and digital artists : creating music with ChucK [2015]
- Shelter Island : Manning, [2015]
- Description
- Book — xxix, 309 pages : illustrations ; 23 cm
- Summary
-
- Introduction to programming in ChucK. Basics: sound, waves, and ChucK programming ; Libraries: ChucK's built-in tools ; Arrays: arranging and accessing your compositional data ; Sound files and sound manipulation ; Functions: making your own tools
- Now it gets really interesting! Unit generators: ChucK objects for sound synthesis and processing ; Synthesis ToolKit instruments ; Multithreading and concurrency: running many programs at once ; Objects and classes: making your own ChucK power tools ; Events: signaling between shreds and syncing to the outside world ; Integrating with other systems via MIDI, OSC, serial, and more.
(source: Nielsen Book Data)
Music Library, SAL3 (off-campus storage)
Music Library | Status
---|---
Stacks: ML74.4 .C48 P76 2015 | Checked out

SAL3 (off-campus storage) | Status
---|---
Stacks: ML74.4 .C48 P76 2015 | Available; Request (opens in new tab)
Online 8. Affective analysis and synthesis of laughter [electronic resource] [2014]
- Oh, Jieun.
- 2014.
- Description
- Book — 1 online resource.
- Summary
-
Laughter is a universal human response to emotional stimuli. Though the production mechanism of laughter may seem crude when compared to other modes of vocalization such as speech and singing, the resulting auditory signal is nonetheless expressive. That is, laughter triggered by different social and emotional contexts is characterized by distinctive auditory features that indicate the state and attitude of the laughing person. By implementing prototypes for interactive laughter synthesis and conducting crowdsourced experiments on the synthesized laughter stimuli, this dissertation investigates acoustic features of laughter expressions and how they may give rise to emotional meaning.

The first part of the dissertation (Chapter 3) provides a new approach to interactive laughter synthesis that prioritizes expressiveness. Our synthesis model, with a reference implementation in the ChucK programming language, offers three levels of representation: the transcription mode requires specifying precise values of all control parameters, the instrument mode allows users to freely trigger and control laughter within the instrument's capacities, and the agent mode semi-automatically generates laughter according to its predefined characteristic tendency. Modified versions of this model have served as a stimulus generator for conducting perception experiments, as well as an instrument for the laptop orchestra.

The second part of the dissertation (Chapter 4) describes a series of experiments conducted to understand (1) how acoustic features affect listeners' perception of emotions in synthesized laughter, and (2) the extent to which these observed relationships between features and emotions are laughter-specific. To explore the first question, a few chosen features are varied systematically to measure their impact on the perceived intensity and valence of emotions. To explore the second question, we intentionally eliminate timbral and pitch-contour cues that are essential to our recognition of laughter in order to gauge the extent to which our acoustic features are specific to the domain of laughter.

As a related contribution, we describe our attempts to characterize features of the auditory signal that can be used to distinguish laughter from speech (Chapter 5). While the corpus used to conduct this work does not provide annotations about the emotional qualities of laughter, and instead simply labels a given frame as either laughter, filler (such as 'uh', 'like', or 'er'), or garbage (including speech without laughter), this portion of the research nonetheless serves as a starting point for applying our insights from Chapters 3 and 4 to a more practical problem involving laughter classification using real-life data.

By focusing on the affective dimensions of laughter, this work complements prior work on laughter synthesis that has primarily emphasized acceptability criteria. Moreover, by collecting listeners' responses to synthesized laughter stimuli, this work attempts to establish a causal link between acoustic features and emotional meaning that is difficult to achieve when using real laughter sounds. The research presented in this dissertation is intended to offer novel tools and a framework for exploring many more unsolved questions about how humans communicate through laughter.
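The synthesis model described above has its reference implementation in ChucK; as a rough illustration of the "transcription mode" idea, where every control parameter is specified explicitly, here is a hypothetical NumPy sketch of a parametric laughter-bout generator. The parameter names, values, and signal model are my own simplifications, not the dissertation's.

```python
# Hypothetical sketch (not the dissertation's ChucK implementation): a
# "transcription mode"-style laughter bout, with every control parameter
# given explicitly. Names and parameter choices here are my own.
import numpy as np

SR = 22050  # sample rate (Hz)

def syllable(f0, dur, breathiness, sr=SR):
    """One voiced 'ha': a decaying harmonic tone mixed with exhalation noise."""
    t = np.arange(int(dur * sr)) / sr
    env = np.exp(-t * 18.0)                       # sharp per-syllable decay
    voiced = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))
    noise = np.random.randn(len(t))
    return env * ((1 - breathiness) * voiced + breathiness * 0.3 * noise)

def laugh_bout(n_syllables=6, f0_start=260.0, f0_slope=-12.0,
               rate_hz=4.5, breathiness=0.4, sr=SR):
    """Concatenate syllables; f0 drifts downward across the bout."""
    period = 1.0 / rate_hz
    out = []
    for i in range(n_syllables):
        f0 = f0_start + f0_slope * i
        s = syllable(f0, dur=0.6 * period, breathiness=breathiness, sr=sr)
        gap = np.zeros(int(0.4 * period * sr))    # unvoiced inhale/pause
        out.extend([s, gap])
    sig = np.concatenate(out)
    return sig / np.max(np.abs(sig))              # normalize to [-1, 1]

if __name__ == "__main__":
    bout = laugh_bout()
    print(f"{len(bout) / SR:.2f} s of synthesized laughter")
```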
- Also online at
-
Special Collections
Special Collections | Status
---|---
University Archives: 3781 2014 O | In-library use; Request via Aeon (opens in new tab)
Online 9. Diffusion-based music analysis [electronic resource] : a non-linear approach for visualization and interpretation of the geometry of music [2011]
- Sell, Gregory Kennedy.
- 2010, c2011.
- Description
- Book — 1 online resource.
- Summary
-
Diffusion mapping is a non-linear data analysis method based on a model of the data as states in a random walk. Through this approach, the global structure of the data is built up from local connectivity rather than pure distance. This diffusion-based approach is advantageous because, by using only local connectivity, it remains robust and meaningful in high-dimensional spaces, unlike Euclidean distance, without requiring any assumptions about the structure of the data. The diffusion-mapping format also leads directly to meaningful low-dimensional spaces for visualizing the data's structure. I will examine the effectiveness of diffusion mapping as a tool for the analysis and visualization of music theory and, through these demonstrations, make an argument for its vast potential in the field. Diffusion has never before been applied to music at this level, nor has it been used in any other field for analysis at a level comparable to music theory, but it will be shown that the approach is not only capable of organizing and visualizing music but also, through those organizations and visualizations, of communicating the underlying music theory used in creating the data sets. Example applications include geometric representations of intervals, the organization of data sets by key and meter, and the visualization of musical excerpts as trajectories in a diffusion-derived space.
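For readers unfamiliar with the method, a minimal sketch of the standard diffusion-map construction follows: local Gaussian affinities, row-normalized into a Markov transition matrix, then eigendecomposed. This is the generic algorithm, not the thesis's code, and the choice of a 12-dimensional pitch-class-profile input is purely illustrative.

```python
# Minimal diffusion-map sketch (the standard Coifman-Lafon construction;
# not the thesis's own code). X holds one feature vector per musical object,
# e.g. a 12-dimensional pitch-class profile per excerpt (illustrative choice).
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2, t=1):
    # Pairwise squared distances -> local Gaussian affinities.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / epsilon)
    # Row-normalize into a Markov transition matrix: global structure is
    # then built from local connectivity via the induced random walk.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecompose; skip the trivial eigenvalue 1 / constant eigenvector.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates at scale t: lambda_k^t * psi_k.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]

coords = diffusion_map(np.random.rand(40, 12), epsilon=0.5)
print(coords.shape)  # (40, 2): a 2-D embedding for visualization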
- Also online at
-
Special Collections
Special Collections | Status
---|---
University Archives: 3781 2010 S | In-library use; Request via Aeon (opens in new tab)
Online 10. Collaborative musicking on the web [electronic resource] [2016]
- Choi, Hongchan.
- 2016.
- Description
- Book — 1 online resource.
- Summary
-
The Web has evolved far beyond its original function as a vehicle for sharing information through hypertext, but the spirit of collaboration remains central to it. With the arrival of HTML5 and new browsers capable of streaming audio and rendering complex graphics, unprecedented opportunities have arisen. The advent of the Web Audio API in 2010 took real-time audio processing within the web browser to a new level, giving birth to a new breed of interactive, visually rich, and networked music software.

This thesis investigates the potential of web music technology within the context of collaborative musicking. I adopt the verb 'musicking', coined by the late Christopher Small, in the title to underscore the nature of music as an active process rather than a static object. The dissertation commences with a concise review of the intersections and convergence of the Web and music technology. Following this historical review, a theoretical framework is proposed. The subsequent chapters describe technical efforts to implement this theoretical basis, along with empirical experimentation through a series of case studies. We conclude with a critical evaluation of the work and a look toward the future.

The work offers a novel model for the classification of collaborative computer music production (Chapter 2) and an innovative software framework that facilitates the development of web-based music applications (Chapter 3), both illustrated in case studies (Chapter 4). Unlike many written dissertations, this thesis documents a living and organic project that is best studied by directly engaging with the environment described within. The reader is encouraged to read the thesis while interacting with the project on the Web. All code examples and operational prototypes discussed in the text are available at https://ccrma.stanford.edu/~hongchan/WAAX, keeping in mind that the very nature of this project is to enable and encourage collaborative creativity, indeed 'musicking', on an unprecedentedly massive scale.
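As a purely illustrative aside (not WAAX or any code from the thesis), the collaborative layer of such web music applications can be reduced to a relay that rebroadcasts each participant's note events to everyone else. A minimal standard-library Python sketch:

```python
# Hypothetical note-event relay for collaborative musicking: every line a
# client sends (e.g. b'{"note": 60, "vel": 90}\n') is rebroadcast to all
# other connected clients. Standard library only; illustrative, not WAAX.
import asyncio

clients: set[asyncio.StreamWriter] = set()

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    clients.add(writer)
    try:
        while line := await reader.readline():   # b'' at EOF ends the loop
            for other in clients:
                if other is not writer:
                    other.write(line)
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8765)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```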
- Also online at
-
Special Collections
Special Collections | Status
---|---
University Archives: 3781 2016 C | In-library use; Request via Aeon (opens in new tab)
Online 11. Perceptually coherent mapping schemata for virtual space and musical method [electronic resource] [2014]
- Hamilton, Robert.
- 2014.
- Description
- Book — 1 online resource.
- Summary
-
Our expectations for how visual interactions sound are shaped in part by our own learned understandings of and experiences with objects and actions, and in part by the extent to which we perceive coherence between gestures that can be identified as "sound-generating" and their resultant sonic events. Even as advances in technology have made the creation of dynamic computer-generated audio-visual spaces not only possible but increasingly common, composers and sound designers have sought tighter integration between action and gesture in the visual domain and their accompanying sound and musical events in the auditory domain. Procedural audio and music, or the use of real-time data generated by in-game actors and their interactions in virtual space to dynamically generate sound and music, allows sound artists to create tight couplings across the visual and auditory modalities. Such procedural approaches, however, become problematic when players or observers are presented with audio-visual events within novel environments where their own prior knowledge and learned expectations about sound, image, and interactivity are no longer valid. With the use of procedurally generated music and audio in interactive systems becoming more prevalent, composers, sound designers, and programmers face an increasing need to establish low-level understandings of the crossmodal correlations between visual gesture and sonified musical result, both to convey artistic intent and to present communicative sonifications of visual action and event. For composers and designers attempting to build evocative and expressive procedural sound and music systems, when the local realities of any given virtual space are completely flexible and malleable, there exist few dependable locale-specific models on which to base their choices of mapping schemata.

This research focuses jointly on the creative and technical concerns involved in building procedurally generated crossmodal musical interactions, and on the perceptual issues surrounding specific mapping schemata linking interactions with sound and music. A software solution and methodology are presented to facilitate the mapping of parameters of action, motion, and gesture from virtual space to sound-generating processes, allowing composers and designers to repurpose real-time data as drivers for compositional and sound-related processes. Creative and technical examples drawn from a series of multimodal musical experiences are presented and discussed, exploring a variety of potential mapping schemata as well as the inner workings of the presented codebases.

To assess the perceived coherence between motion and gesture in the visual modality and generated sound and musical events in the auditory modality, this research also details a user study measuring the impact of audio-visual crossmodal correspondences between low-level attributes of motion and sound. Subjects in a controlled study were presented with multimodal examples of musically sonified motion in a pairwise comparison task and asked to rate the perceived fit between visual and auditory events. Each example was defined as a composite set of simple motion and sound attributes. Study results were analyzed using the Bradley-Terry statistical model, effectively calculating the relative contribution of each crossmodal attribute within each attribute pairing to the perceived coherence, or 'fit', between audio and visual data. The statistical analysis of correlated motion/sound mappings and their relative contributions to the perceptual coherence of audio-visual interactions lays the groundwork for predictive models linking attributes of sound and motion to perceived fit.
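For reference, the Bradley-Terry model estimates a latent "strength" pi_i for each stimulus such that P(i preferred over j) = pi_i / (pi_i + pi_j). Below is a minimal sketch of the classic minorization-maximization fit; the pairwise counts are invented toy data, not results from the study.

```python
# Minimal Bradley-Terry fit via the classic MM updates (Hunter, 2004); a
# generic illustration of the model named above, not the study's code.
# wins[i][j] = number of times stimulus i was preferred over stimulus j.
import numpy as np

def bradley_terry(wins, n_iter=200):
    n = wins.shape[0]
    pi = np.ones(n)                      # strength ("fit") parameter per item
    comparisons = wins + wins.T          # n_ij: total i-vs-j trials
    for _ in range(n_iter):
        W = wins.sum(axis=1)             # total wins per item
        denom = comparisons / (pi[:, None] + pi[None, :])
        np.fill_diagonal(denom, 0.0)     # items are never compared to themselves
        pi = W / denom.sum(axis=1)       # MM update: pi_i = W_i / sum_j n_ij/(pi_i+pi_j)
        pi /= pi.sum()                   # fix the scale (identifiability)
    return pi

# Toy pairwise-preference counts for three audio-visual mappings.
wins = np.array([[0, 8, 6],
                 [2, 0, 5],
                 [4, 5, 0]], float)
print(bradley_terry(wins))  # higher value = greater perceived audio-visual fit
```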
- Also online at
-
Special Collections
Special Collections | Status
---|---
University Archives: 3781 2014 H | In-library use; Request via Aeon (opens in new tab)
Online 12. Tool-building for amateur creativity in virtual reality [2022]
- Atherton, Lawrence John, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
This thesis studies amateur creativity in the context of building virtual reality experiences. Access to creative self-expression is an important part of flourishing as a human, and new technologies often dramatically affect how we live our lives. The underlying mission of this thesis is therefore to ensure that, after its period of rapid commercialization, virtual reality can still be used creatively by everyday people. I introduce the term "folk design" to mean design pursued by amateurs for their own enjoyment and for the connection to their community that the process brings. Supporting folk design in virtual reality naturally leads to more specific research questions: how can virtual reality experiences enrich someone's life? How can we create tools that are approachable and enjoyable to use from within virtual reality? Can we better understand the needs of amateur creators within their social context? The goal of this thesis is to develop potential values and strategies that might influence future experiences and tools for virtual reality. To do so, I use a Research through Design methodology to develop design theory. Design theory is not universal; it is specific to the context of people, places, and things where it was developed. It is often said that the goal of design research is to produce guidance that tends to bring about effects aligned with its underlying values. It is left to future work to see in what other situations the design theory developed here can be useful. Nevertheless, it is shown to be useful in the context of amateur creation of virtual reality environments.

The research process of this thesis began with a consideration of musical creativity in virtual reality. Armed with values on living well and the benefits of making music, I first analyzed existing commercial experiences focused on music in virtual reality. Next, I explored new musical interactions and experiences that would encourage musical creativity in ways not addressed by the commercial experiences. After building a feature-length experience, I considered how to build tools that would enable amateurs to create similar experiences, but from within virtual reality and without specialized technical knowledge. After experimenting with a sculptural block-based programming language that enabled users to control their environment with sound, I broadened my focus to every aspect of the virtual environment. I built tools powered by interactive machine learning for creating landscapes, music, and virtual creatures (requiring both sound and animation), all by providing examples of the desired appearance or behavior.

Finally, I developed insights on how to support amateur creators in social environments. I first looked at a context that succeeded in this area, observing public making workshops in museums. When translating those insights into VR, I discovered that a major difference between physical and virtual reality was creators' comfort in communication. So I developed techniques for indirect, whimsical communication that enabled creators to express their ideas, show their personality, and develop a shared language. Ultimately, the tools I developed enabled real-world amateur creators to folk design in virtual reality.

With this research, I contributed three kinds of knowledge. From analyzing and building experiences in virtual reality, I developed theoretical perspectives on how to infuse these experiences with the values of human flourishing. Of particular importance is doing vs. being: the balance between a focus on action and agency and a focus on reflection and calm. From building tools for the creation of virtual environments, I proposed reusable strategies for using interactive machine learning in creative tools. One such strategy, chained mappings, connects several machine learning models together so that they can develop complex relationships from simple inputs, with creators able to provide semantically meaningful examples at multiple levels. And from my observations of amateur creativity in museums, I articulated insights on supporting social creativity. Overarching these insights is ad hoc social connection, the short-lived yet strong bonds between unfamiliar creators working near each other. My tools for indirect, whimsical communication in virtual reality then led me to articulate seven areas of consideration that should be covered by any such set of tools, such as influence, personalization, and expression. Taken together, these contributions form a design framework for supporting amateur creativity in virtual reality. Each contribution stands alone and may be useful to guide any project incorporating a focus on human flourishing through creativity. As a whole, the framework empowers everyday people to express themselves, to play, and to build community. It also helps knit together a growing body of academic research on supporting creativity not for the end result, but for the joy of the process itself.
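As a loose illustration of the chained-mappings strategy described above (and only that; the thesis's tools are VR-native and not built this way), here is a sketch with two scikit-learn regressors in series, where a simple high-level input drives many low-level parameters and creators can supply examples at either level:

```python
# Hypothetical "chained mappings" sketch: two small interactive-ML models in
# series, each trained from user-provided examples, so a simple high-level
# input drives many low-level parameters. Feature names and dimensions are
# my own illustrative choices, not the thesis's.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stage 1: a 2-D "mood" pad position -> 4 mid-level musical controls.
mood_examples = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
mid_examples = np.array([[0.2, 0.9, 0.1, 0.7],   # e.g. tempo, brightness, ...
                         [0.9, 0.1, 0.8, 0.3],
                         [0.5, 0.5, 0.5, 0.5]])
stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                      random_state=0).fit(mood_examples, mid_examples)

# Stage 2: the 4 mid-level controls -> 12 low-level synth/animation params.
rng = np.random.default_rng(0)
mid_train = rng.random((20, 4))
low_train = rng.random((20, 12))          # stand-in for creator examples
stage2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                      random_state=0).fit(mid_train, low_train)

# Chained inference: a single simple input yields a rich parameter set.
mood = np.array([[0.7, 0.6]])
low_level = stage2.predict(stage1.predict(mood))
print(low_level.shape)  # (1, 12)
```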
- Also online at
-
- Fischer, Michael Henry, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Virtual assistants give end users the capability to access their devices and web data using hundreds of thousands of predefined skills. Nonetheless, there is still a long tail of personal digital tasks that people want to automate. This dissertation explores how end users can define useful personalized skills without learning any formal programming language. We empower an end user to create a web-based skill by demonstrating it in a web browser using natural language, the mouse, and the keyboard. Our tool is the first programming-by-demonstration system that produces programs with control constructs. The system gives the user an easy-to-learn multimodal interface and generates code in a formal programming language that supports parameterization, function invocation, conditionals, and iterative execution.

We show that a virtual assistant skill can greatly benefit from having a graphical interface, as users can monitor multiple queries simultaneously, re-run skills easily, and adjust settings using multiple modes of interaction. We developed a system that automatically translates a user's voice command into a reusable skill with a graphical user interface. Unlike the formulaic interfaces generated by previous work, we generate interfaces that are interesting and diverse by using a novel template-based approach.

To improve the aesthetics of graphical user interfaces, we use a technique called style transfer. We show that the previous formulation of style transfer cannot retain structure in an image, which causes the output to lack definition and legibility and renders restyled interfaces unusable. Our purely neural solution captures structure via the uncentered cross-covariance between features across different layers of a convolutional neural network. By minimizing the squared error in the structure between the style and output images, our technology retains structure while generating results with texture in the background, shadow and contrast in the borders, consistency of design across edges, and an overall cohesiveness to the design. In summary, our system enables end users to create web-based skills with automatically generated graphical user interfaces.
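The structure term described above is concrete enough to sketch: an uncentered cross-covariance between feature maps from different CNN layers, matched between style and output images by squared error. The following PyTorch sketch is my reading of that description; the layer pairing, resizing strategy, and toy feature maps are assumptions, not the dissertation's actual method.

```python
# How the structure loss described above might look in PyTorch: an uncentered
# cross-covariance between feature maps from two different layers, compared
# between style and output images by squared error. Layer choice, resizing,
# and weighting here are my own guesses, not the dissertation's method.
import torch
import torch.nn.functional as F

def cross_covariance(feat_a, feat_b):
    """Uncentered cross-covariance between two layers' features (one image).
    feat_a: (1, Ca, Ha, Wa), feat_b: (1, Cb, Hb, Wb)."""
    # Resample the deeper map so both share one spatial grid.
    feat_b = F.interpolate(feat_b, size=feat_a.shape[2:], mode="bilinear",
                           align_corners=False)
    a = feat_a.flatten(2).squeeze(0)          # (Ca, N)
    b = feat_b.flatten(2).squeeze(0)          # (Cb, N)
    return a @ b.T / a.shape[1]               # (Ca, Cb), uncentered

def structure_loss(style_feats, out_feats):
    """Squared error between style and output cross-covariances,
    summed over consecutive layer pairs."""
    loss = 0.0
    for (s1, s2), (o1, o2) in zip(zip(style_feats, style_feats[1:]),
                                  zip(out_feats, out_feats[1:])):
        loss = loss + ((cross_covariance(s1, s2)
                        - cross_covariance(o1, o2)) ** 2).sum()
    return loss

# Toy feature maps standing in for CNN activations at three layers.
style = [torch.randn(1, c, s, s) for c, s in [(64, 32), (128, 16), (256, 8)]]
out = [torch.randn(1, c, s, s, requires_grad=True)
       for c, s in [(64, 32), (128, 16), (256, 8)]]
structure_loss(style, out).backward()
```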
- Also online at
-
Online 14. Philosophy Talk. Comforting Conversations - Part 2 [2020]
- Briggs, R. A., 1982- (Speaker)
- KALW (Radio station : San Francisco, Calif.) : California, May 24, 2020
- Description
- Sound recording — 1 audio file
- Summary
-
In troubling, uncertain times, the arts and humanities are more important than ever. Engaging with works of literature can provide both much needed insight into our current struggles and a sense of perspective in a crisis. In what ways do novels or plays help us come to terms with human suffering? Can fictional narratives about past pandemics shed light on our current situation? And how can storytelling or music help bring us together in isolation? Josh and Ray converse with a range of Stanford faculty members about how philosophy, music, drama, and literature can provide comfort, connection, and a sense of community.
- Collection
- Philosophy Talk, 2002-
Online 15. The hybrid mobile instrument : recoupling the haptic, the physical, and the virtual [2018]
- Michon, Romain Pierre Denis, author.
- [Stanford, California] : [Stanford University], 2018.
- Description
- Book — 1 online resource.
- Summary
-
The decoupling of the "controller" from the "synthesizer" is one of the defining characteristics of digital musical instruments (DMIs). While this allows for much flexibility, this "demutualization" (as Perry Cook termed it) sometimes results in a loss of intimacy between the performer and the instrument. In this thesis, we introduce a framework for crafting "mutualized" DMIs by leveraging the concepts of the augmented mobile device, hybrid instrument design, and skill transfer from existing performer technique. Augmented mobile instruments combine commodity mobile devices with passive and active elements that can take part in the production of sound (e.g., resonators, exciters, etc.), while adding new affordances to the device and changing its form and overall aesthetics. Screen interfaces can be designed to facilitate skill transfer, accelerating the learning and mastery of such instruments. Hybrid instrument design mutualizes physical and "physically informed" virtual elements, taking advantage of recent progress in physical modeling and digital fabrication. This design ethos allows physical/acoustical elements to be substituted with virtual/digital ones and vice versa (as long as it is physically possible). A set of tools for designing hybrid mobile instruments is introduced and evaluated. Overall, we demonstrate how this approach can help digital luthiers think about DMI design "as a whole" in order to create mutualized instruments. Through a series of case studies, we discuss the aesthetic and design implications of making such hybrid instruments.
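One classic example of the kind of "physically informed" virtual element the abstract mentions is a plucked string synthesized by the Karplus-Strong algorithm. The NumPy sketch below is a generic textbook illustration, not code from the thesis's toolset.

```python
# A textbook Karplus-Strong plucked string: a noise burst circulates through
# a delay line with an averaging (lossy) filter, modeling a vibrating string.
# Generic illustration only; not code from the thesis's toolkit.
import numpy as np

def pluck(freq=220.0, dur=1.0, sr=44100, damping=0.996):
    n = int(sr / freq)                    # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)    # the "pluck": a burst of noise
    out = np.empty(int(dur * sr))
    for i in range(len(out)):
        out[i] = line[i % n]
        # Lowpass feedback: average adjacent samples, then apply decay.
        line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

tone = pluck(freq=330.0, dur=0.5)
print(f"{len(tone)} samples, peak {np.abs(tone).max():.2f}")
```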
- Also online at
-
Special Collections
Special Collections | Status
---|---
University Archives: 3781 2018 M | In-library use; Request via Aeon (opens in new tab)