Tuesday, September 20, 2011

Paper Reading # 9: Jogging over a Distance between Europe and Australia

Reference Information: Jogging over a Distance between Europe and Australia by Florian Mueller, Frank Vetere, Martin Gibbs, Darren Edge, Stefan Agamanolis, and Jennifer Sheridan.
UIST 2010, New York, New York.

Author Bios: Florian Mueller is known as a "World Expert in Exertion Games." Mueller earned a Bachelor of Multimedia from Griffith University, a Master in Media Arts and Sciences from MIT's Media Lab, and a PhD in Interaction Design on Exertion Games from the University of Melbourne. He is currently researching exertion games at Stanford University as a Fulbright Visiting Scholar.

Frank Vetere is a Senior Lecturer at the University of Melbourne, where he works in the Interaction Design Group.

Martin Gibbs is a lecturer for the Department of Information Systems at the University of Melbourne.

Darren Edge obtained both his Bachelor's and PhD from the University of Cambridge. He is now a researcher in the HCI Group at Microsoft Research Asia. 

Stefan Agamanolis earned his Bachelor of Arts in Computer Science from Oberlin College and both his Master's and PhD from MIT. After being Chief Executive and Research Director of Distance Lab at Horizon Scotland, he became the Associate Director of the Rebecca D. Considine Research Institute at Akron Children's Hospital.

Jennifer Sheridan holds a Bachelor's degree in Rhetoric and Professional Writing and Computer Graphics from the University of Waterloo, a Master's in HCI from the Georgia Institute of Technology, and a PhD in Computer Science from Lancaster University. Sheridan is a Senior User Experience Consultant and Director of User Experience at BigDog Interactive.


Summary

  • Hypothesis: The Jogging over a Distance system will provide insight into how to design computer systems that facilitate a social experience during exertion activities.
  • Methods: Seventeen joggers were enlisted to go on fourteen runs.
  • Results: The authors organized the gathered data into three themes, which they then turned into design dimensions.
  • Contents: Jogging over a Distance is a system designed to facilitate social interaction during exertion activities. The paper describes exertion activities as those that use technology and require intense physical effort to participate. A brief overview covers previous and related work, including the Nike+, Nintendo's Wii, Microsoft's Kinect, and the Sony Move. Unlike support systems such as the Nike+, Jogging over a Distance provides interaction throughout exertion rather than just afterward. Jogging over a Distance was designed this way in an attempt to better understand social interaction during exertion. The runs of seventeen participants were recorded and follow-up interviews were conducted. Using that data, the authors compiled design dimensions based on three themes: Communication Integration, Virtual Mapping, and Effort Comprehension, all of which contributed to the social experience of using Jogging over a Distance (illustrated in a figure in the paper).
    Communication integration concerns the relationship between the audio feedback and the physical activity.
    Effort comprehension means giving users a better understanding of how they are performing; in this system, that understanding comes from heart rate data.
    Virtual mapping converts users' physical efforts into digital information that is shared with others in digital space (a rough sketch of this mapping follows below).
    The authors discuss both the positive and negative aspects of each of the three design dimensions.
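To make the virtual-mapping idea concrete, here is a minimal Python sketch of how each jogger's effort could be normalized against a personal target heart rate and then turned into a shared audio position, so the partner's voice sounds ahead of or behind the runner. The paper spatializes audio by relative heart-rate effort; the function names, gain factor, and exact formula below are my own assumptions, not the authors' code.

    # Sketch of "virtual mapping": physical effort (heart rate relative
    # to a personal target) becomes a shared digital position, and the
    # difference in effort decides where the partner's voice is heard.

    def effort(heart_rate, target_rate):
        """Effort as a fraction of the jogger's own target heart rate."""
        return heart_rate / target_rate

    def partner_audio_offset(my_hr, my_target, partner_hr, partner_target, gain=10.0):
        """Positive -> partner's voice sounds ahead; negative -> behind.
        Normalizing per person lets a fit runner and a beginner still
        run 'side by side' despite different absolute heart rates."""
        return gain * (effort(partner_hr, partner_target) - effort(my_hr, my_target))

    # Example: I run at 150 bpm of a 160 bpm target; my partner runs at
    # 170 bpm of a 180 bpm target. The offset is small and positive, so
    # the partner's voice sounds just slightly ahead of me.
    print(partner_audio_offset(150, 160, 170, 180))  # ~0.07

Normalizing effort per person is what lets joggers of different abilities still experience running "together," which is the point the paper makes about this dimension.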


Discussion
From the title alone, I thought (with some excitement) that this paper would detail what it would take to jog the distance between Europe and Australia, which would no doubt require some very extreme measures. What was actually in this paper was no less exciting, however.
Since the goal of the authors was to see if Jogging over a Distance would provide insight into the social interactions that occur in exertion, I'd say they achieved that goal effectively. For the most part, I felt their results were relatively solid, but I think they were a little lacking in the area of effort comprehension. They admitted that heart rate may not be the best performance indicator.
I was interested in this paper because I enjoy a good run, though I don't usually go running with technology other than my cell phone. I had heard about products such as the Nike+, and I was interested to see what new technologies were being developed for physical activity. I'm more old-fashioned when it comes to running, so I never bother with music, but I was tempted to try out tech that would measure my performance in some way.
This paper doesn't really cover that, instead looking into how to improve the social characteristics of exertion activities. Normally, I would run by myself, and it really would be more of a run than a jog. So, I agree that jogging with others increases the length, speed, and distance of a run; when running by myself, I tend to wear myself out more quickly due to a lack of pacing ability.
What I liked most after reading this paper was the fact that there is quite a bit of research in the field of exertion, a term for physical activity that I hadn't even known of prior to this. In addition, I liked the idea that people could run "together" despite differences not only in location, but also in running ability. As was stated in the paper, previous studies have shown that social interaction improves the effects of physical activity. Today in America, this research is important in combating obesity. Technology has made lives easier and people lazier. Although exertion games may make physical activities seem easier by adding that social aspect, many people observably exercise longer, harder, and more frequently when they do it socially. Since the work in this paper still looks to be preliminary, the results gleaned from the test runs show good promise for future research. Even so, I would have liked to see more runs with more participants tested in other geographic locations. Regardless, I'm very eager to see what comes from such research. It'd be pretty awesome to have a totally interactive virtual image of jogging partners running side by side when in actuality an ocean or two spans the distance between the joggers.

Friday, September 16, 2011

Ethnography: Initial Week

Group Members:
Daniel Aninag
Xandrix Baluyot
Will Hausman
Jonathan Wiese


Preconceptions: 
Honestly, I thought the idea of Muggle Quidditch, as it is referred to on their student activities site, was silly. I didn't think it would be very entertaining to watch or play since no actual flying would be involved. Quidditch seemed like an imitation of various other sports, and without the magic, I didn't see why one would choose it over any other. Also, I thought everyone on the team would be a die-hard Harry Potter fan who would constantly quote from the stories.

First Encounter: 
I realized that, for the most part, the students on the Quidditch team were just like anyone else taking classes at A&M. They were friendly, spirited, and really enjoyed what they were doing. None of the ones I came across seemed obsessed with Harry Potter, and I found that the majority were quite athletic. Quidditch is a demanding sport, especially for seekers and the snitch, positions that require a lot of continuous running and sprinting. I was only able to attend one event this week, but so far I am actually quite excited to be doing my ethnography on the A&M Quidditch Team.

Thursday, September 8, 2011

Paper Reading # 5: A Framework for Robust and Flexible Handling of Inputs with Uncertainty

Reference Information: A Framework for Robust and Flexible Handling of Inputs with Uncertainty by Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, and Andrew D. Wilson.
UIST 2010, New York, New York.

Author Bios: Julia Schwarz is pursuing a PhD at Carnegie Mellon University.

Scott Hudson earned a PhD in Computer Science at the University of Colorado. He is currently a Professor in the Human-Computer Interaction Institute at Carnegie Mellon University.

Jennifer Mankoff earned her PhD in Computer Science at the Georgia Institute of Technology. She is an Associate Professor at Carnegie Mellon University.

Andrew Wilson earned a Bachelor's degree from Cornell University and both his Master's and PhD from the MIT Media Lab. He is now a Senior Researcher for Microsoft.


Summary

  • Hypothesis: Handling uncertain inputs well, such as ambiguous pen and touch events, will lead to better human-computer interaction.
  • Methods: Six case studies were conducted to test the framework.
  • Results: Overall, the studies showed that the framework was flexible, could interpret multiple inputs, and could handle such inputs robustly. 
  • Contents: This paper describes the general lack of correct handling of uncertain input in current interfaces. It also compares conventional input with uncertain input. After detailing their framework, the authors explain the six case studies. The first three case studies focused on improving touch interaction through smart window resizing, ambiguous and remote sliders, and tiny buttons. Case studies four and five were for smarter text entry, while the final study looked into an improved GUI for the motor-impaired. The paper includes a picture of touch input being handled by the framework; a rough sketch of the core idea follows below.
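As a rough illustration of what "handling inputs with uncertainty" means, here is a small Python sketch that keeps a probability for every possible target of a touch instead of snapping the touch to a single pixel, and only commits to an action once one interpretation clearly dominates. This is my own toy version of the general idea; the actual framework and math in the paper are more sophisticated.

    import math

    def gaussian_weight(touch, center, sigma=5.0):
        """Likelihood weight of a target at `center` given a touch at
        `touch`, where sigma models the touch sensor's uncertainty."""
        dx, dy = touch[0] - center[0], touch[1] - center[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

    def interpret(touch, buttons, threshold=0.8):
        """Return the button that probably owns the touch, or None if
        the input is still too ambiguous to commit to an action."""
        weights = {name: gaussian_weight(touch, c) for name, c in buttons.items()}
        total = sum(weights.values())
        if total == 0:
            return None
        best = max(weights, key=weights.get)
        return best if weights[best] / total >= threshold else None

    buttons = {"ok": (100, 100), "cancel": (112, 100)}  # two tiny buttons
    print(interpret((101, 99), buttons))   # "ok": one clear interpretation
    print(interpret((106, 100), buttons))  # None: ambiguous, so defer

Deferring action on ambiguous input, rather than guessing, is what allows the tiny-buttons case study to work at sizes where a conventional hit test would constantly misfire.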

Discussion
Though I do think flexible handling of inputs is important, I did not find this paper to be particularly appealing. It was interesting to note, however, that the authors opted to include a test involving a GUI for the motor-impaired. I think that in today's society, disabilities are too often overlooked and shrugged off. To me, developing tools for those with impairments can help create a better understanding of disabilities. That in turn would allow better technologies to be made with the purpose of reducing or even eliminating those disabilities.

Paper Reading # 4: Gestalt: Integrated Support for Implementation and Analysis in Machine Learning

Reference Information: Gestalt: Integrated Support for Implementation and Analysis in Machine Learning by Kayur Patel, Naomi Bancroft, Steven M. Drucker, James Fogarty, Andrew J. Ko, and James Landay.
UIST 2010, New York, New York.

Author Bios: Kayur Patel is working on a PhD in Computer Science at the University of Washington.

Naomi Bancroft works for Google. She recently graduated from the University of Washington.

Steven M. Drucker is a Principal Researcher at Microsoft Research.

James Fogarty is an Assistant Professor of Computer Science and Engineering at the University of Washington.

Andrew J. Ko is also an Assistant Professor at the University of Washington, in the Information School.

James Landay is a Professor at the University of Washington who previously worked for Intel.


Summary
  • Hypothesis: By using Gestalt, a development environment, developers will find it easier to apply machine learning.
  • Methods: Eight participants were recruited to test Gestalt by using a baseline similar to MATLAB along with Gestalt to find and debug errors. 
  • Results: The participants preferred Gestalt and were able to more effectively locate errors with Gestalt.
  • Contents: The paper begins with a brief description of the machine learning process. Then it goes on to describe what Gestalt is. Through its implementation and analysis support, programmers can manage classification pipelines and visualize computed data, respectively. The combination of Gestalt's features allows it to serve as a general-purpose tool for developers; a toy sketch of an inspectable classification pipeline follows below.
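Here is a toy Python sketch of that pipeline-with-analysis concept: a classification pipeline that retains every intermediate result so the developer can inspect any stage, for example to find misclassified examples. Gestalt itself is a full environment with visualization support; this only illustrates the underlying idea, and all names here are my own.

    class Pipeline:
        def __init__(self):
            self.steps = []    # ordered (name, function) pairs
            self.results = {}  # name -> output, retained for analysis

        def add(self, name, fn):
            self.steps.append((name, fn))
            return self

        def run(self, data):
            for name, fn in self.steps:
                data = fn(data)
                self.results[name] = data  # keep for later inspection
            return data

    def load(_):       return [("spam offer!!", 1), ("meeting at 3", 0), ("free prize", 1)]
    def featurize(ds): return [({"exclaims": t.count("!")}, y) for t, y in ds]
    def classify(ds):  return [(f, y, 1 if f["exclaims"] > 0 else 0) for f, y in ds]

    p = Pipeline().add("load", load).add("features", featurize).add("predict", classify)
    p.run(None)
    # Inspect the prediction stage: which examples were misclassified?
    print([r for r in p.results["predict"] if r[1] != r[2]])  # the "free prize" example

Keeping the intermediate outputs around is the key difference from a script that only prints final accuracy: debugging becomes a matter of querying the pipeline rather than rerunning it with print statements.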

Discussion
I liked the concept of a development environment that aids in the incorporation of machine learning. Although I don't really have any experience in that field, I think such a development environment would still be an effective tool for the general programming population. Developers often have to deal with demands on a tight timetable, and something like Gestalt can help them meet those demands in a timely manner. The work in this paper can pave the way for such environments.

Tuesday, September 6, 2011

Paper Reading # 3: Pen + Touch = New Tools

Reference Information: Pen + Touch = New Tools by Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton.
UIST 2010, New York, New York.

Author Bios: Ken Hinckley is a Principal Researcher at Microsoft Research who received his PhD from the University of Virginia.

Koji Yatani is working on his PhD at the University of Toronto.

Michel Pahud earned his PhD from the Swiss Federal Institute of Technology. He is under the employ of Microsoft Research.

Nicole Coddington received her Bachelor's degree in Visual Communication from the University of Florida and currently works for HTC as a senior interaction designer.

Jenny Rodenhouse earned a Bachelor's degree in Industrial Design from Syracuse University. She is now a Microsoft experience designer.

Andy Wilson received his Bachelor's from Cornell University and went on to earn both his Master's and PhD from the MIT Media Lab. He is a senior researcher for Microsoft.

Hrvoje Benko earned his PhD from Columbia University and is a researcher at Microsoft Research.

Bill Buxton holds a Bachelor's degree in Music from Queen's University. He is a Principal Researcher at Microsoft Research.

Summary
  • Hypothesis: It was proposed that unimodal pen, unimodal touch, and the new features resulting from the combination of pen and touch would enhance user experience by incorporating natural tendencies. 
  • Methods: Initially, a design study was conducted that evaluated how the eight participants worked with a pen and paper notebook. After the design study, another study involving eleven participants was conducted to test the techniques derived from the results of the design study.
  • Results: From the design study, the researchers observed that the participants generally had clearly defined roles for pen and touch. The following testing showed that the testers adapted quickly to unimodal pen/touch. Although they had to be told some of the functions of combining pen and touch, it was not difficult for them to learn those functions.
  • Contents: The design of a prototype Microsoft Surface application, Manual Deskterity, is described in this paper. A study with pen and paper provided insight into behaviors involving the use of touch and pen. Afterward, testing of the demo application was conducted. It was stressed that the pen would be for writing, touch would be for manipulation, and the combination of the two would yield new possibilities. These included stapling, cutting (as with an X-acto knife), creating carbon copies, and brushing (where the user can make a brushing tool out of whatever is on the page). A toy dispatcher illustrating this division of labor follows below.
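The division of labor ("the pen writes, touch manipulates, and pen + touch yields new tools") can be summed up in a small Python sketch like the one below. The event names and the specific tool list are my own simplification of Manual Deskterity's behavior, not its actual code.

    def route(pen_down, touch_down, touch_holding_object):
        """Decide what an input event means under the pen + touch scheme."""
        if pen_down and touch_holding_object:
            # e.g., holding a photo with a finger while stroking with
            # the pen triggers a contextual tool rather than inking
            return "new tool: cut / staple / carbon copy / brush"
        if pen_down:
            return "ink: write or annotate"
        if touch_down:
            return "manipulate: drag, rotate, zoom"
        return "idle"

    print(route(pen_down=True, touch_down=False, touch_holding_object=False))  # ink
    print(route(pen_down=True, touch_down=True, touch_holding_object=True))    # new tool

The appeal of the scheme is that neither modality alone changes meaning; the new behaviors only appear in combination, which matches how the study participants naturally divided the roles of pen and hand.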

Discussion
So, of course I found this to be very similar to the previous reading. The difference in focus is clear, though. This paper was more conceptual since it did not seem to produce anything definite, which was stated a couple of times by the authors. Although I don't think their hypothesis was truly justified, the results of the studies did not refute it. Overall, people are used to having pens solely for writing and using touch for manipulation. Thus, it comes as no surprise that applying that concept to Manual Deskterity would be logical and naturally accepted. I see this as an improvement over current tablet software, but outside the artistic community, I am not sure how much use such a product would garner.

Paper Reading # 2: Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving

Reference Information: Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving by Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko.
UIST 2010, New York, New York.

Author Bios: After receiving his Bachelor's and Master's in Computer Science from Brown University, Robert Zeleznik became a research director there.

Andrew Bragdon also completed his Bachelor's and Master's in Computer Science (as well as a Bachelor's in Economics) at Brown. He is now a PhD student at Brown.

Ferdi Adeputra studied Applied Mathematics and Computer Science at Brown and is now an analyst for Goldman Sachs.

Hsu-Sheng Ko is a researcher at Brown University.

Summary
  • Hypothesis: Learning and working with math will become more efficient with the use of CAS tools in an environment very similar to that of pen and paper.
  • Methods: The authors of this work put participants (nine undergraduate students) through trial runs of the Hands-On Math system. The goal was to discover the right direction to head in development, so they explored many functions including page manipulation, multi-step derivation, graphing, and web clipping. Widgets, gesture recognition, and palm detection were also tested.
  • Results: The nine participants had mixed reviews on some of the features, such as paper folding and TAP (touch-activated pen) gestures. However, they also expressed an overall positive outlook on the potential of the Hands-On Math system.
  • Contents: The majority of this paper details the aspects of the Hands-On Math system, followed by the pilot evaluation conducted by the authors. At the time, the system could not handle anything beyond high school math. Folding in the system, unlike actual paper, serves to create more space on the same page. As for gestures, there are the under-the-rock menus and TAP gestures. An example of an under-the-rock menu would be a trash icon appearing after dragging away a page. PalmPrints was created for this system to allow access to specific commands that are activated with the palm (shown in a figure in the paper). A short sketch of the kind of CAS-backed derivation the system supports follows below.
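For a sense of the math the system automates behind its gestures, here is a short Python sketch using the sympy library to perform the kind of multi-step derivation (factor, then solve) that Hands-On Math exposes through dragging and tapping terms. The gesture layer is the paper's contribution; this only shows the CAS steps underneath, and the example equation is my own.

    import sympy as sp

    x = sp.symbols("x")
    lhs = x**2 + 5*x + 6
    eq = sp.Eq(lhs, 0)

    step1 = sp.Eq(sp.factor(lhs), 0)  # a "factor" gesture: (x + 2)*(x + 3) = 0
    roots = sp.solve(eq, x)           # a "solve" gesture: [-3, -2]
    print(step1)
    print(roots)

In Hands-On Math, each of these lines would be a new step written onto the page by the system, keeping the whole derivation visible the way a paper worksheet would.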

Discussion
I would have to concur with the consensus of the participants that there is great potential for this system. With that said, it seems that the goals of the authors were met. However, I think the number of participants could have been much larger. It was expressed that several directions were to be tested in order to better develop the system, but the small number of participants contributes to my doubts about the accuracy of their initial evaluation. Still, a fully functional version would have a great impact on both students and professionals who work heavily with math. It may be possible to adapt such a system to aid programmers. I know there are those who prefer writing out pseudo-code on paper, so it would be useful if pseudo-code jotted down on the writing pad could be converted, at least partially, to an actual computer language using the built-in SDKs. From there, the converted code could be uploaded onto a computer where the programmer can fill in any gaps. This would also hopefully keep handwriting from degrading too much.

On a side note, the issues with physical dexterity reminded me of the cumbersome control schemes RTS beginners employ. Eventually, they find out about hotkeys.

Thursday, September 1, 2011

Paper Reading # 1: Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback

Reference Information: Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback by Sean Gustafson, Daniel Bierwirth, and Patrick Baudisch.
UIST 2010, New York, New York.

Author Bios: Sean Gustafson earned his Bachelor's and Master's degrees in Computer Science from the University of Manitoba in Canada. Currently, he is a PhD student at the HCI lab of the Hasso-Plattner Institute.

Daniel Bierwirth holds a Bachelor's degree in Computer Science and Media from Bauhaus University and a Master's degree in IT-Systems Engineering from the Hasso-Plattner Institute. He is a co-founder of Matt Hatting and Company UG and the Agentur Richard GbR.

Patrick Baudisch is the chair of the HCI lab at the Hasso-Plattner Institute. He earned his PhD in Computer Science from Darmstadt University of Technology.

Summary
  • Hypothesis: Users of Imaginary Interfaces can interact spatially with an acceptable degree of effectiveness using only their imagination.
  • Methods: The authors conducted three studies to test users' abilities to interact spatially using Imaginary Interfaces. The first study focused on simple drawings. In the second study, users drew a simple design and then pointed to specified locations on the drawing. Lastly, the third study investigated the accuracy of using coordinates based on finger (index and thumb) lengths.
  • Results: The first study showed that short-term memory (referred to as visuospatial memory by the authors) was sufficient to handle simple shapes using Imaginary Interfaces. In the second study, interference with visuospatial memory was caused by having the user physically turn. Although this increased errors, the errors were reduced by using one hand as a frame of reference. The third study demonstrated that errors increase as users point farther away from the coordinate hand.
  • Contents: Gustafson, Bierwirth, and Baudisch wanted to test a system that took the miniaturization of mobile devices to its extreme. They explained that while many devices are already ultra-portable due to their tiny screen sizes, that portability was still limited by the screen itself. They posited that having no screen at all would lead to ideal on-the-go spatial interaction. A small sketch of the hand-based coordinate system from the third study follows below.
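To illustrate the coordinate system from the third study, here is a small Python sketch that maps a point addressed in finger-length units (so many thumb lengths along one axis, so many index-finger lengths along the other) into 2D space, using the L-shaped non-dominant hand as the frame of reference. The vector math is standard; the function and parameter names are my own.

    import numpy as np

    def imaginary_point(origin, thumb_tip, index_tip, i, j):
        """Map hand coordinates (i thumb lengths, j index lengths) into
        2D space. origin is where thumb and index finger meet; the two
        fingertip positions define the axes of the imaginary plane."""
        origin = np.asarray(origin, dtype=float)
        thumb_axis = np.asarray(thumb_tip, dtype=float) - origin
        index_axis = np.asarray(index_tip, dtype=float) - origin
        return origin + i * thumb_axis + j * index_axis

    # "Two thumb lengths right, one index length up" from the hand:
    print(imaginary_point((0, 0), (5, 0), (0, 8), i=2, j=1))  # [10.  8.]

Because a point farther from the hand multiplies any error in the fingertip estimates, this framing is consistent with the growing errors the third study observed at larger distances.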

Discussion
I was quite surprised when I read the premise of this paper. Even with such gaming equipment as the Wii, Xbox Kinect, and Playstation Move, I didn't make the connection that similar technology was already being developed for more practical, mobile applications. I have to say, though, that relying on imagination for spatial interactions definitely has its limitations, at least initially.

Imaginary Interfaces is obviously still in its infancy, but I still found this paper fascinating. Despite some English errors (understandably, as the authors are based in Germany), it was hard to stop reading. The user studies seemed well thought out and, for the most part, supported their hypotheses. At the rate they are going, I'd say a public release of this technology will come much sooner than the world expects.

One thing about Imaginary Interfaces that really intrigued me was the fact that it implemented the natural gestures that people make in everyday conversations. I think people will have little to no problem integrating such a device into their daily life. Who knows, maybe this would lead to better memory retention and expanded imaginative capabilities.

Paper/Book Reading # 0: On Computers

Reference Information: On Plants, traditionally attributed to Aristotle (edited by Jonathan Barnes).

Author Bio: Aristotle is well known as a Greek philosopher of ancient times. He wrote on many subjects, ranging from poetry to biology.

Summary
  • Hypothesis: Plants have at least part of a soul. 
  • Methods: The author observes the nature of various plants and compares them to animals and people. 
  • Results: The paper did not produce anything conclusive, per se, but it served as a thorough analysis of several plants.
  • Contents: Through his observations, the author is able to describe in great detail the features and workings of a wide variety of vegetation in an attempt to show that plants exhibit signs of a soul. 

Discussion
Despite the depth of detail presented in this work, I believe it fails to end conclusively. The writing is rather remarkable and very informative for Aristotle's time. However, I find that much of the information is now either common knowledge, debunked, or corrected. I also found the writing quite repetitive, but it was probably much more interesting back then.

While reading the various comparisons made between each type of plant, I thought of how that paralleled not necessarily today's computers, but rather the programs housed within. Each program was made for a purpose. Some may be as mundane as temporary data storage, while others (like Adobe Photoshop) are vastly complex and serve as multimedia creation and editing tools. The capabilities of these programs far exceed the abilities of not only animals but also humans (which is why we create these programs in the first place). Thus, computer programs can be likened to living creatures. In fact, the idea of computers/programs having a soul has often been explored. In the popular video game series Halo, for example, the AIs have personalities of their own, leading one to assume they have a soul. I would very much want to witness the birth of such an AI, but I do not know if humanity will even come close to that technological advancement (at least within my lifetime).

Introduction # -1


E-mail: xandrix@tamu.edu
Class of 2011 (5th year senior >->)

Why am I taking CSCE 436? 
Aside from fulfilling an elective requirement, this course seems like it will help me better understand people.

What experience do I bring to this class? 
I've been an avid computer/video gamer for much of my life. I understand the factors that make a great game. Additionally, I've had quite a bit of time in leadership positions since freshman year. Being a member of the Corps of Cadets and serving as treasurer for a couple student organizations has helped me develop my leadership style along with my ability to work with others (also, my social awkwardness, though still very present, has diminished slightly). Working well with others is essential to my survival in this class.

What do I expect to be doing in 10 years? 
I see myself either serving in the Canadian Forces (as probably a Signals Officer) or working on some amazing computer game. 

What do I think will be the next biggest technological advancement in computer science? 
That is a tough question...  I haven't read too much about recent developments, so I really don't have an idea at the moment. Perhaps it will soon become possible for complete thought interaction between humans and computers. 

If you could travel back in time, who would you like to meet and why? 
I would like to meet Jesus simply because He is my Lord and Savior.

What are my favorite shoes and why? 
I never really cared much about what kind of shoes I wear. However, I would say that my senior boots are my most cherished footwear. Not only did they come at a hefty price, but the process of earning the right to put them on and then wear them proudly on campus is something that just cannot be matched.

If I could be fluent in any foreign language that I'm not already fluent in, which one would it be and why? 
It would be Russian because of the Red Alert series of Command and Conquer.

Interesting fact?
I was in the Fightin' Texas Aggie Band. It was awesome.