Min Xin

Phidget Project Video

Assignment 2 Proposal (PhysX)

There are many techniques for giving virtual entities a sense of physical presence. One of the most obvious is to provide a realistic physical simulation for interacting with them. Most modern games use physics engines to embed physical properties into virtual entities and the environment in which they exist. Complex calculations based on the set parameters then produce realistic visual feedback for the simulation. This can be very useful for my final project, in which I will attempt to merge physical interaction with virtual entities. For assignment 2, I propose to learn how to use PhysX, a popular physics engine used in many commercial games and free for non-commercial purposes. I will mainly use it for collision detection and response in the final project. An example application is producing realistic visual feedback when users bump into virtual entities with a physical artefact tagged with a mixed reality marker, much like in the Kobito project. PhysX also has other useful features for simulating physical materials such as fluids and cloth. My assignment 2 will be an exploration of these features and how they can be incorporated into my final project.
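The two jobs I want from the engine, detection and response, can be illustrated independently of any particular API. The sketch below is plain Python showing the underlying idea with spheres and a one-dimensional elastic impulse; it is a conceptual illustration of what an engine like PhysX automates, not the PhysX API itself.

```python
# Conceptual sketch of collision detection and response; not the PhysX API.

def spheres_collide(p1, r1, p2, r2):
    """Detect overlap between two spheres given centre tuples and radii."""
    dx, dy, dz = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    dist_sq = dx * dx + dy * dy + dz * dz
    return dist_sq <= (r1 + r2) ** 2  # compare squared distances, no sqrt

def elastic_response(v1, v2, m1, m2):
    """1D elastic collision: return the post-collision velocities of two
    bodies with masses m1, m2 and incoming velocities v1, v2."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2
```

In a real engine these steps run every frame over broad-phase candidate pairs; the point here is only that "collision detection and response" decomposes into a geometric test followed by a momentum-conserving velocity update.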

More information on PhysX

Final Project Proposal

Introduction

Children have a wild imagination untamed by societal conventions and restrictions. They often play in their own make believe world with imaginary characters that they have constructed and conceptualized. From my own childhood, I recall coordinating epic naval battles in a neighbourhood water puddle with the foam ships that I crafted. This form of pretend play helps children nurture their creativity as they mentally augment physical settings and props. Incorporating virtual entities has the potential to enhance this playing experience, but interacting with them often requires separation from the existing physical environment. Work has been done to give virtual entities a physical presence. The Kobito [1] project visualizes virtual characters pushing a physical block using a mixed reality viewing window and provides physical feedback through corresponding movements of a hidden magnet. The Virtual Raft [2] project uses a land and water metaphor to define interaction with virtual characters. It involves several static computer monitors or “islands” where virtual characters can dwell and tablet PCs or “rafts” in which the virtual characters can be transported from one “island” to another.

In the children’s story, “Harold and the Purple Crayon”, a magical crayon drew entities which instantly became real. Following this concept, I propose an interactive playing environment which will allow children to quickly draw 3D virtual entities and incorporate them in physical settings using techniques from Kobito and Virtual Raft. Drawing is a quick and flexible method for creating content, and many children enjoy this creative activity. By bringing drawings to life as interactive virtual entities in the physical environment, children are encouraged to more freely express their creative visions, and their imagination can be more vividly realized.

Sketch-based Input

Drawing or sketching is usually a creative process performed in 2D, and its intent is often to generate static representations of concepts. With the help of computer graphics, 2D strokes from drawings or sketches can be used to infer 3D models. Sketch-based systems developed by Igarashi et al. (Teddy) [3] and Cherlin et al. [4] allow users to intuitively create freeform 3D models using only a few strokes. Although the models created have more of a physical presence than 2D drawings, they are still static objects unless interactive deformation techniques are applied. I will not be exploring Igarashi’s and Cherlin’s techniques for this project, but their advantage is the flexibility of the generated content.

In one of the scenes of “Harold and the Purple Crayon”, Harold accidentally draws wavy lines as his hand shakes from his fear of a nearby monster. Although not drawn by intent, these wavy lines are interpreted as water. This example of sketch recognition is another technique for generating 3D content. In the proposed system, I will gather a library of pre-animated 3D models. Children can then add these models into their playing environment by simply sketching a 2D drawing of the model, and the system will recognize and replace the drawing with the corresponding animated model. Although this technique is not as flexible as generating freeform 3D content, it does give a better sense of bringing a static drawing to life. The 2D drawings will mostly be created on a mobile tablet PC, which can also be used to transport the generated 3D models.
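One plausible way to implement this matching step is simple template matching in the spirit of stroke recognizers such as the $1 recognizer: resample the child’s stroke to a fixed number of points, normalize for position and scale, and pick the library template with the smallest average point-to-point distance. The sketch below assumes strokes are lists of (x, y) tuples; the template names are made-up examples.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points along its path."""
    pts = list(points)
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = total / (n - 1)
    out, D, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if D + d >= interval and d > 0:
            t = (interval - D) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the start of the next segment
            D = 0.0
        else:
            D += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts):
    """Translate the centroid to the origin and scale to a unit box."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in pts]

def recognize(stroke, templates):
    """Return the name of the template closest to the drawn stroke."""
    s = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        t = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(s, t)) / len(s)
        if d < best_d:
            best, best_d = name, d
    return best
```

A recognized name would then index into the library of pre-animated models; a rejection threshold on the best distance could keep unrecognizable scribbles from spawning the wrong character.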

Interacting with Virtual Entities

An easy way to give virtual entities a sense of physical presence is to relate them to the physical environment. Using techniques explored in the Virtual Raft [2] project, I intend to use monitor displays in different locations as homes where virtual entities can dwell. Children will be able to create 3D entities on the tablet PC, carry them on the device to different homes, and visualize them on the corresponding display. The transfer of entities from the tablet PC to the different devices will be accomplished using a database and phidgets. The database will keep track of all entities generated and their current physical locations, while phidgets such as IR sensors or RFID readers will detect the physical act of transfer from the tablet PC to the different displays.
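The bookkeeping side of this is small: a table mapping each entity to its current location, updated whenever a sensor at a display detects the tablet docking. The following is a hypothetical in-memory sketch of that registry; class, method, and location names are illustrative assumptions, and a real version would sit behind the actual database and phidget event callbacks.

```python
# Hypothetical entity registry tracking where each virtual entity lives.

class EntityRegistry:
    def __init__(self):
        self.locations = {}            # entity_id -> display/location name

    def create(self, entity_id, home="tablet"):
        """Register a newly drawn entity, initially living on the tablet."""
        self.locations[entity_id] = home

    def transfer(self, entity_id, destination):
        """Called when a phidget (IR sensor or RFID reader) fires at a
        display, signalling the physical act of handing the entity over."""
        if entity_id not in self.locations:
            raise KeyError(f"unknown entity: {entity_id}")
        self.locations[entity_id] = destination

    def entities_at(self, location):
        """List the entities a given display should currently render."""
        return [e for e, loc in self.locations.items() if loc == location]
```

Each display would poll or subscribe to `entities_at` for its own name, so an entity disappears from the tablet and appears on the monitor as a single database update.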

One problem with the above approach is that virtual entities cannot be placed in arbitrary locations in the physical environment, which limits their interactivity. Mixed reality using marker-based detection can solve this problem. Virtual entities can be superimposed on top of physical markers, which gives them a direct reference in the physical environment. To visualize these arbitrarily located virtual entities, I plan on using the large wall-mounted display in the home space as a special location or stage where virtual entities superimposed on top of markers can be visualized and interacted with. The setup will be similar to the PlayStation Eye Toy, where a front-facing camera is positioned on the display overlooking the table surface in front. By manipulating physical artifacts with markers attached, children will be able to obtain visual feedback of the corresponding 3D entity on the large display. The interaction should be similar to orchestrating a puppet show, where the display is the stage and the virtual entities are the puppets. The background of the display can also change to fit the type of playing environment desired. A deck of playing cards (RFID cards) can be used to switch the setting, or another phidget, such as a light sensor, can be used to gather input from the physical environment and change the virtual setting accordingly. For example, if it is dark, then the setting can automatically change to a night scene.
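The light-sensor variant of scene switching is essentially a thresholding rule. The sketch below is a made-up example of that rule, assuming the roughly 0-1000 range of Phidget analog sensor readings; the threshold values themselves are assumptions, and the dead band between them keeps the scene from flickering at dusk-like readings.

```python
# Hypothetical day/night scene rule for a light sensor phidget.
# Readings assumed in the 0-1000 analog range; thresholds are made up.

def choose_scene(reading, current, night_below=200, day_above=400):
    """Return "night" or "day" for a raw light reading, keeping the
    current scene while the reading sits inside the dead band."""
    if reading < night_below:
        return "night"
    if reading > day_above:
        return "day"
    return current              # hysteresis: no change near the boundary
```

The RFID-card variant would be even simpler: a lookup from card ID to scene name, overriding whatever the sensor suggests.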

Realistic physical simulation can also help to generate a better sense of physical presence for the virtual entities. To simplify the implementation, an existing physics engine such as PhysX [5] can be used. This allows virtual entities to respond realistically to physical interaction. For example, as virtual entities are transported on the tablet PC, they can physically react to motions of the tablet PC detected by the accelerometer phidget, or when virtual entities collide, a realistic collision response can be experienced.
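The accelerometer-to-simulation link amounts to feeding the tilted gravity vector into the integrator each frame. This is a plain-Python sketch of that idea, not the PhysX or Phidget APIs; the function names and the convention that readings arrive in units of g are assumptions.

```python
# Sketch: tablet tilt (accelerometer reading) drives simulated gravity.

def gravity_from_accel(ax, ay, az, g=9.81):
    """Scale an accelerometer reading (in units of g) to a world-space
    gravity vector in m/s^2, handed to the physics engine each frame."""
    return (ax * g, ay * g, az * g)

def step(pos, vel, gravity, dt):
    """One semi-implicit Euler step for an entity riding on the tablet:
    update velocity from gravity, then position from the new velocity."""
    vel = tuple(v + a * dt for v, a in zip(vel, gravity))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

With this loop running, tilting the tablet rotates the gravity vector, and the entity slides toward the lowered edge, giving the reaction described above.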

List of Deliverables

  1. Sketch recognition of 2D drawings (also used for Faramarz’s class project)
  2. Distributed system architecture to enable transfer of virtual entities
  3. Physically transporting virtual entities to different displays (IR sensor, RFID reader, accelerometer)
  4. Interactive mixed reality play stage (home space large display)
  5. Superimpose virtual entities on top of mixed reality markers
  6. Changing play setting (RFID cards or dynamically using light sensor)
  7. Implementing physics engine to allow for realistic physical simulation of virtual entities (PhysX; maybe used as assignment 2)

References

  1. Kobito: Virtual Brownies, online: http://rogiken.org/vr/english.html
  2. Tomlinson, B., Yau, M., O’Connell, J., Williams, K., and Yamaoka, S. “The Virtual Raft Project: A Mobile Interface for Interacting with Communities of Autonomous Characters”, Conference Abstracts and Applications, ACM Conference On Human Factors In Computing Systems (CHI 2005), 2005.
  3. Igarashi, T., Matsuoka, S., and Tanaka, H., "Teddy: A Sketching Interface for 3D Freeform Design", ACM SIGGRAPH'99, pages 409-416, 1999.
  4. Cherlin, J., Samavati, F., Sousa, M., and Jorge, J., “Sketch-based Modeling with Few Strokes”, 21st Spring Conference on Computer Graphics, 2005.
  5. PhysX, online: http://www.ageia.com/

Seminar Proposal

Pen-based interfaces are currently used for numerous computing platforms. Applications such as digital whiteboards, PDAs, tablet PCs, the new ultra mobile PCs, and even the Nintendo DS mobile gaming console depend largely on this interaction technique for input. What is significant is that these computing platforms are all related to the concept of ubiquitous computing. For mobile platforms like PDAs, a pen-based interface effectively replaces the mouse and keyboard and allows users to point, write, and draw. For collaborative design applications like the whiteboard, a pen-based interface provides an intuitive and expressive interaction technique for communication.

Pen-based interfaces attempt to mimic traditional paper and pencil, which is fundamentally different from the keyboard and mouse. Computers were originally designed to perform very formal work, and formal interaction requires the precision provided by typing and moving a cursor on the screen. Writing and drawing are more expressive interaction techniques capable of supporting informal interaction. Designers often prefer paper and pencil over computers for early conceptual design because paper and pencil support ambiguity and allow for iterative refinement of ideas [1]. The problem encountered by many pen-based interfaces is the mapping of expressive and ambiguous input to the strict parameters expected by computer programs. Research in handwriting and sketch recognition attempts to deal with these issues.

I propose to present a seminar exploring the topic of pen-based interfaces in relation to ubiquitous computing. I will provide motivation for the use of pen-based interfaces, give an overview of existing pen-based interaction techniques, and talk about the applications of pen-based interfaces for ubiquitous computing.

  1. Gross, M., and Do, E., “Ambiguous Interactions: a Paper-like Interface for Creative Design”, UIST, pages 183-192, 1996.

Some Pen-based Interaction Papers

Preliminary Proposal

Children have a wild imagination untamed by societal conventions and restrictions. They often play in their own make believe world with imaginary characters that they have constructed and conceptualized. From my own childhood, I recall coordinating epic naval battles in a neighbourhood water puddle with the foam ships that I crafted. This form of pretend play helps children nurture their creativity as they mentally augment physical settings and props. Incorporating virtual entities has the potential to enhance this playing experience, but interacting with them often requires separation from the existing physical environment. Work has been done to give virtual entities a physical presence. The Kobito [1] project visualizes virtual characters pushing a physical block using a mixed reality viewing window and provides physical feedback through corresponding movements of a hidden magnet. The Virtual Raft [2] project uses a land and water metaphor to define interaction with virtual characters. It involves several static computer monitors or “islands” where virtual characters can dwell and tablet PCs or “rafts” in which the virtual characters can be transported from one “island” to another.

In the children’s story, “Harold and the Purple Crayon”, a magical crayon drew things which instantly became real. Following this concept, I propose an interactive playing environment which will allow children to quickly draw 3D virtual entities and incorporate them in physical settings using techniques from Kobito and Virtual Raft. Drawing is a quick and flexible method for creating content, and many children enjoy this creative activity. By bringing drawings to life as interactive virtual entities in the physical environment, children are encouraged to more freely express their creative visions, and their imagination can be more vividly realized. To implement 3D content creation, intuitive sketch-based techniques such as those developed by Takeo Igarashi (Teddy) [3] and Joseph Cherlin [4] can be used. Using sketch recognition to match a child’s drawing to a pre-animated model is also a possibility. The complex part of the project is how the created drawing will be brought to life and made part of the physical playing environment. This can be achieved using a combination of techniques. First, as 3D drawings are generated on the tablet PC, children will be able to interact with them by tilting the device (motion detected by accelerometers). For example, if a character is drawn and comes alive, then tilting the tablet PC will make the character tip to the corresponding side. This gives an initial indication that the virtual entities possess some physical characteristics. Second, static monitors can be used as docking points for the created virtual entities, and children can carry their creations to one of these monitors using the tablet PC. Different monitors can represent different settings. One idea is to have a “digital mirror”, where the monitor displays video from a web cam mounted on top, and the virtual entity appears inside the “digital mirror” (think Snow White).
Similar to the Virtual Raft, the mounted web cam can also be used to detect motion, which can allow the virtual entities displayed on the monitor to react to the children’s movements. Third, virtual entities can also be visualized in different locations using mixed reality markers and a mixed reality viewing device. Finally, phidgets can be used to generate physical embodiments of the virtual entities through the use of lights, motion, and sound. By combining an intuitive drawing process with various techniques for interacting with virtual entities in physical settings, I hope to create an environment where children can spawn ideas, draw them, and play with them.

  1. Kobito: Virtual Brownies, online: http://rogiken.org/vr/english.html
  2. Tomlinson, B., Yau, M., O’Connell, J., Williams, K., and Yamaoka, S. “The Virtual Raft Project: A Mobile Interface for Interacting with Communities of Autonomous Characters”, Conference Abstracts and Applications, ACM Conference On Human Factors In Computing Systems (CHI 2005), 2005.
  3. Igarashi, T., Matsuoka, S., and Tanaka, H., "Teddy: A Sketching Interface for 3D Freeform Design", ACM SIGGRAPH'99, pages 409-416, 1999.
  4. Cherlin, J., Samavati, F., Sousa, M., and Jorge, J., “Sketch-based Modeling with Few Strokes”, 21st Spring Conference on Computer Graphics, 2005.

Videos