In Situ Motion Capture of High-Speed Athletics
We have an ongoing interest in turning human motion into sound for feedback to athletes.
The advent of the Kinect depth imager has opened the door to
motion capture applications that would have been
much more costly with previous technologies. In part, the Kinect achieves this
by focusing on a very specific application domain,
thus narrowing the requirements for the motion capture system. Specifically,
Kinect motion capture works best within a small physical space with
a stationary camera. We seek to extend Kinect motion capture for
use in athletic training -- speed skating in particular --
by placing the Kinect on a mobile, robotic
platform to capture motion in situ.
Athletes move over large distances, so the mobile platform
addresses the Kinect's limited viewing area. As the platform
moves, we must also account for the now-dynamic background against which
the athlete performs. The result is a novel, visually guided robotic platform
that follows athletes, allowing us to
capture motion and images that would not be possible with a treadmill.
We wanted an audience at a sporting event to be able to interact with Swarm Art, but tracking individuals or recognizing gestures is not feasible in that setting. Here is how we solved the problem.
Video games are a potential application of motion-swarm interaction.
This example shows Poetry Pong, a Pong game that produces Markov-model poems (trained on Coleridge in this case). A motion-swarm particle acts as a start button in the upper left. Another particle provides a slider for the human-controlled player at the top.
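The poem generator can be sketched as a word-level Markov chain: count which words follow each n-gram in a training text, then take a random walk over those counts. This is an illustrative sketch only; the function names, parameters, and the Coleridge snippet below are placeholders, not the Poetry Pong implementation.

```python
import random
from collections import defaultdict

def build_markov_model(text, order=2):
    """Map each word n-gram in the corpus to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Random walk over the model to produce a short line of 'poetry'."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    out = list(state)
    for _ in range(length - len(state)):
        followers = model.get(state)
        if not followers:          # reached an n-gram with no continuation
            break
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Toy corpus standing in for the Coleridge training text.
corpus = ("In Xanadu did Kubla Khan a stately pleasure dome decree "
          "where Alph the sacred river ran through caverns measureless to man")
model = build_markov_model(corpus, order=2)
print(generate(model, length=10))
```

With a higher `order` the output tracks the source text more closely; order 2 gives the loose, recombinant phrasing typical of Markov poetry.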
Cooperative Robots for Surveillance
We're building a fleet of small surveillance robots on a Superdroid platform. The goal is to test and demonstrate a multi-agent system for harbor surveillance and security. The fleet is now operational; the demo video shows four of our ten Superdroids operating simultaneously.
This video shows Mark 0 following waypoints around the courtyard outside the ICT building at the University of Calgary, 16 July 2009.
Interactive Art and Swarm Art
Jerry Hushlak, Christian Jacob, and I have collaborated to create a number of interactive art installations. We are still working to create more. The image on the right is the cover of the June 2007 edition of Leonardo featuring one of us playing with our first Swarm Art installation in the lab shortly before moving it to the Nickle Art Gallery.
A video information server is a device that provides information about a scene. The information is either video images, or data extracted from video images, such as descriptions of moving objects, their trajectories, and camera properties. Our servers use CaML (Camera Markup Language), an XML-based language for interaction with video cameras. CaML shares some features with MPEG-7, but it is simpler, designed for bidirectional interaction with a camera, and tailored for use in real-time systems that operate as events occur. The client acquires data from a CaML server and uses the data in any way it chooses.
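A client's side of this exchange can be sketched as straightforward XML parsing. The element and attribute names in the payload below are hypothetical stand-ins; the real schema is defined by CaML, which is not reproduced here.

```python
import xml.etree.ElementTree as ET

# Hypothetical CaML-style payload: the <object>, <centroid>, and <velocity>
# names are illustrative only, not the actual CaML schema.
payload = """
<caml>
  <object id="1">
    <centroid x="120.5" y="88.0"/>
    <velocity dx="3.2" dy="-0.7"/>
  </object>
</caml>
"""

def parse_objects(xml_text):
    """Extract moving-object descriptions from a CaML-like XML document."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall("object"):
        centroid = obj.find("centroid")
        velocity = obj.find("velocity")
        objects.append({
            "id": obj.get("id"),
            "x": float(centroid.get("x")),
            "y": float(centroid.get("y")),
            "dx": float(velocity.get("dx")),
            "dy": float(velocity.get("dy")),
        })
    return objects

print(parse_objects(payload))
```

Because the payload is plain XML, any client language with an XML parser can consume the server's output, which is the point of an open markup language over a proprietary binary protocol.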
Detection of concealed objects, such as explosives or illegal drugs, using X-rays is confounded when the object is formed into a thin sheet. We have explored the potential of binary image restoration to detect such thin objects in two- and three-dimensional images. Using a weighted mean-square error estimate places emphasis on the infrequent but significant local structure of the thin objects. Experimental results show the restoration of thin lines and curves in two-dimensional data, and thin sheets in three-dimensional tomographic data.
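The weighting idea can be illustrated with a toy example: because a thin sheet occupies very few pixels, an unweighted mean-square error is dominated by the background, so errors on foreground pixels are up-weighted. The weighting scheme and the value of `fg_weight` below are assumptions for illustration, not the estimator used in our experiments.

```python
import numpy as np

def weighted_mse(estimate, truth, fg_weight=10.0):
    """Mean-square error with extra weight on the rare foreground pixels.

    fg_weight is an illustrative choice; larger values push a restoration
    toward preserving sparse structure like thin lines and sheets.
    """
    weights = np.where(truth > 0.5, fg_weight, 1.0)
    err = (estimate - truth) ** 2
    return float(np.sum(weights * err) / np.sum(weights))

# Toy one-pixel-wide line in a 16x16 binary image.
truth = np.zeros((16, 16))
truth[8, :] = 1.0
blurred = truth * 0.4           # a degraded estimate that dims the line
print(weighted_mse(blurred, truth))
```

Note that the unweighted MSE of the same estimate is only 0.36 * 16 / 256 = 0.0225, so a restoration judged by it could cheaply erase the line; the weighted error penalizes that.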
We have developed a novel vision system that can recognize people by the way they walk. The system computes optical flow for an image sequence of a person walking, and then characterizes the shape of the motion with a set of sinusoidally-varying scalars. Feature vectors composed of the phases of the sinusoids are able to discriminate among people.
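The phase-feature idea can be sketched as follows, assuming the gait period is known and each scalar varies roughly sinusoidally over the sequence; this is a simplified illustration, not our exact pipeline.

```python
import numpy as np

def phase_features(signals, period):
    """Phase of each scalar signal at the gait frequency, relative to signal 0.

    Each row of `signals` is one sinusoidally varying scalar over time;
    `period` is the gait period in samples. Referencing all phases to the
    first signal makes the features invariant to where in the gait cycle
    the sequence begins.
    """
    signals = np.asarray(signals, dtype=float)
    n = signals.shape[1]
    k = round(n / period)                 # DFT bin of the gait frequency
    spectrum = np.fft.rfft(signals, axis=1)
    phases = np.angle(spectrum[:, k])
    rel = phases - phases[0]
    return np.angle(np.exp(1j * rel))     # wrap into (-pi, pi]

# Two synthetic gait signals, the second leading the first by pi/4.
t = np.arange(64)
sig = np.vstack([np.sin(2 * np.pi * t / 16),
                 np.sin(2 * np.pi * t / 16 + np.pi / 4)])
print(phase_features(sig, period=16))
```

The relative phases, collected over many such scalars, form the feature vector used to discriminate among walkers.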
A. Godbout and J. E. Boyd, Rhythmic sonic feedback for speed skating by real-time movement synchronization, International Journal of Computer Science in Sport, December 2012.
C. Jacob, G. Hushlak, J. Boyd, P. Nuytten, M. Sayles, and M. Pilat, SwarmArt: interactive art from swarm intelligence, Leonardo, Vol. 40, No. 3, June, 2007.
J. Boyd, Synchronization of oscillations for machine perception of gaits, Computer Vision and Image Understanding, Vol. 96, 2004, pp. 35-59.
J. E. Boyd and J. J. Little, Silhouette-based gait recognition, in Encyclopedia of Biometrics, S. Z. Li (ed.), Springer, pp. 646-652, 2009.
J. Boyd and J. Little, Biometric Gait Recognition, in Advanced Studies in Biometrics: Summer School on Biometrics, Alghero, Italy, June 2-6, 2003. Revised Selected Lectures and Papers, Editors: Massimo Tistarelli, Josef Bigun, Enrico Grosso, Lecture Notes in Computer Science, Vol. 3161/2005, Springer, 2005, pp. 19-42.
J. Boyd and J. Little, Shape of Motion and the Perception of Human Gaits, Proceedings of IEEE Workshop on Empirical Evaluation Methods in Computer Vision, CVPR 98, Santa Barbara, CA, June 1998, pp. 155-171.
Conference Proceedings
A. Godbout, I. A. T. Popa, and J. E. Boyd, Emotional Musification, Audio Mostly 2018, Wrexham, United Kingdom, September 2018, pp. 6:1-6:6.
A. Godbout and J. E. Boyd, Audio Visual Synchronization of Rhythm, Proceedings of International Conference on 3D Vision (3DV 2015), Lyon, France, October 2015.
R. K. Mohammed-Amin, S. von Mammen and J. E. Boyd, ARCS Architectural Chameleon Skin, Proceedings of the 31st eCAADe (Education and research in Computer Aided Architectural Design in Europe) Conference, Delft, Netherlands, 18-20 September, 2013, pp. 467-475.
O. Kryzhanivska, G. Hushlak, and J. E. Boyd, Touching digital art: interactive haptic mixed reality sculptures, Hot 3D 2013 (in conjunction with International Conference on Multimedia Expo), San Jose, CA, July 2013. (Short position paper and presentation)
O. Kryzhanivska and J. E. Boyd, Body topography: simulating human form, IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2012, November 2012, Atlanta, GA.
L. Mor, R. Levy and J. E. Boyd, Augmented reality for virtual renovation, Personalized Access to Cultural Heritage (PATCH) 2012, Nara, Japan, October 2012.
R. Mohammed-Amin, R. Levy and J. E. Boyd, Mobile augmented reality for interpretation of archaeological sites, Personalized Access to Cultural Heritage (PATCH) 2012, Nara, Japan, October 2012.
I.P.T. Weerasinghe, J.Y. Ruwanpura, J.E. Boyd, A. Habib, Image processing based automated real-time construction worker tracking system. Proceedings of the 3rd International/9th Construction Specialty Conference - CSCE, Ottawa, Ontario, June 2011, CN208 - 1-10.
J. E. Young, E. Sharlin, and J. E. Boyd, The use of Haar-like features in bubblegrams: a mixed reality human-robot interaction technique, Proceedings of IEEE CASE '06, Shanghai, China, October 7-10, 2006.
J. Boyd, M. Sayles, L. Olsen, P. Tarjan, Content Description Servers for Networked Video Surveillance, Proceedings of the IEEE International Conference on Information Technology: Coding and Computing (ITCC 2004), April 5-7, 2004, Las Vegas, Nevada.
J. Little and J. Boyd, Describing Motion for Recognition, Proceedings of the International Symposium on Computer Vision 1995, Coral Gables, FL, 21-23 Nov 1995.