CPSC 681 - Project Examples


This list will be filled in over the term.

Twinned Media Space

Saul Greenberg is using a video/audio-based media space that connects his home and work offices. The idea is that when Saul is working at home, students at the University can see if he is 'around' and can come into his work office to talk to him as needed. This study will describe experiences of using this space over time from the perspective of Saul, the people who use it, and the other people around the lab who do not. Ideally, a later study will replicate it with another person. See Gutwin's The Magic Window paper (Group 2007) for an idea of how such a study and analysis can be done. Likely methods: diary, interviews, self-reports, scenarios.

Study replication

Take a previously reported study (e.g., in ACM CHI) and replicate and/or extend it. The study should be interesting and perhaps have controversial findings. Your job is to compare your findings with those of the original author.

Video Coding Software

Petra Isenberg created PeCoTo, software that helps people code video for events. She has several suggestions for improving this software and may be interested in working with a student (over distance), where the student would do the detailed design and implementation. The student would begin by reviewing existing video coding systems for features and creating a document suggesting possible design changes. Some things Petra suggests are:

  1. An interface to tag items on the video itself. For example, imagine a scenario where you filmed tabletop activity from the top. Sometimes you capture where someone moved an element on the display, when they explicitly handed something off, or where their hands were at certain points in time. It would be great to have tags on the video that you can place or move around. The tags' positions would be tracked in a log, and a visual and statistical output could later be generated. I imagine some sort of chips that you can place on the video and manually (for now) move at certain points in time when the video has changed sufficiently. The chips' positions could be saved according to their pixel coordinates.
  2. Some statistical or visual output of the coded data. Tags plus timestamps could be analyzed with simple statistics, such as sums and occurrence frequency over time. These could also be visualized and returned as a visual summary of the coded data.
  3. Snapshots. Sometimes something interesting occurs in the video and you want to mark it with a bookmark and take a snapshot. This feature is simple and could be easily integrated.
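The logging and summary ideas in points 1 and 2 could be sketched with a small data structure. This is a minimal, hypothetical sketch (the names TagEvent and TagLog are illustrative, not part of PeCoTo): each placed or moved chip is recorded as a tag label with a timestamp and pixel coordinates, and simple statistics are computed from that log.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TagEvent:
    """One placement or move of a chip on the video (hypothetical structure)."""
    tag: str        # tag label, e.g. "move" or "hand-off"
    time_s: float   # video timestamp in seconds
    x: int          # pixel coordinates of the chip on the frame
    y: int


class TagLog:
    """Append-only log of tag events, with simple summary statistics."""

    def __init__(self):
        self.events = []

    def place(self, tag, time_s, x, y):
        """Record that a chip was placed or moved at this time and position."""
        self.events.append(TagEvent(tag, time_s, x, y))

    def counts(self):
        """Total number of occurrences per tag (the 'sums' output)."""
        c = defaultdict(int)
        for e in self.events:
            c[e.tag] += 1
        return dict(c)

    def frequency_over_time(self, bin_s=10.0):
        """Occurrences per tag per time bin -- raw input for a visual summary."""
        bins = defaultdict(lambda: defaultdict(int))
        for e in self.events:
            bins[int(e.time_s // bin_s)][e.tag] += 1
        return {b: dict(tags) for b, tags in sorted(bins.items())}
```

For example, `log.place("move", 3.2, 120, 300)` records a moved element at 3.2 s; `log.counts()` and `log.frequency_over_time()` then give the per-tag totals and a binned time series that a chart or visual summary could be drawn from.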