Demystifying Computer Science: An Approach Using Interactive Multimedia
New York University, Gallatin School, MA Thesis



Chapter 6: Phase I: Exploratorium

Generating Interest
Observations
On-line Survey: Audience Analysis
On-line Survey: Software Evaluation
Conclusion


Generating Interest

The Multimedia Playground exhibit attracted lots of media attention this year and as a result it was well attended on weekdays and packed on the weekends. Computer users of varied backgrounds and ability levels were represented in the exhibit's audience. The people who tried out my program during the run of the exhibit included school children, artists, retirees, a firefighter and a rocket scientist (a senior scientist at Lockheed/NASA).

Because several displays in this show were thematically similar to my project, I had the opportunity to observe the effect of delivery medium and approach on audience reaction to the subject matter. The types of exhibits in this show could be sorted into three categories: static displays, interactive computer exhibits, and displays supported by a human guide (an Explainer or other expert). I observed that these exhibit types received varying levels of attention. Of these three, the traditional static displays received the least attention, and the Multimedia Playground used only a few displays of this type. The plexiglass-enclosed, disassembled Macintosh II computer located to the right of my project was one such traditional exhibit, and it generated very little interest. This computer display resembled an exploded parts diagram. The computer was disassembled but still running. A monitor connected to the CPU displayed a loop of graphic diagrams and photographs which labelled the various chips and devices plugged into the motherboard. No input was needed from the user to make the computer operate this way, and no moving parts were visible. In fact, the reaction to this see-through computer supports my thesis: without moving parts, and because of the minuscule size of most computer components, it was impossible to infer anything about how this computer worked through direct observation. Viewing the parts of the computer produced no intuitive understanding of the machine. Very few people even noticed that the computer was "still alive" - when I pointed out this display and explained its significance, it was clear that most people had no idea that the exposed computer was operating. The only evidence I could use to prove that the computer was still running was the monitor's changing slide show and the periodic flashing of a green light on the hard drive.

The Multimedia Playground was an exhibit about multimedia, so naturally there were many interactive computer exhibits besides mine. The multimedia exhibits attracted a fair amount of attention, but since there were so many computer stations, and since the Exploratorium generally hums with activity, it was rare for an individual to spend more than a couple of minutes using my program. Most people who gravitated to the computer stations seemed to have a very short attention span - if their curiosity wasn't capitalized on immediately, they would just drift away. Although this was true of a great majority of the people, there were some exceptional users who remained involved with my program for thirty or even forty minutes. Perhaps in a home environment the user would spend more time, but in the exhibit environment there was too much opportunity for distraction. Attention span seemed to be a problem with the commercial CD-ROMs shown in the exhibit as well. In a museum environment the expectation is generally that an individual will only spend a few minutes - maximum - with any one display; the attention demands placed on the user of a CD-ROM are usually more strenuous. The depth of material available through educational multimedia is generally considered a positive aspect of the medium, but in a museum environment this depth of material creates a conflict: museum goers expect that they can digest an exhibit in a few minutes, but content-rich multimedia programs demand a greater time commitment from the user before they can produce substantial learning benefits.

Many of the displays at the Multimedia Playground received the occasional support of volunteers or Explainers. However, only one, the "computer dissection," relied exclusively on human performance to convey information. By dissecting a computer on the exhibit floor, the Exploratorium was applying a technique they've used for many years to present optics - Exploratorium Explainers regularly present "Cow's Eye Dissections" to show museum goers the hidden parts of the eye. On weekends I presented some of these live computer dissections. During these dissections the presenter would open up the CPU case of a PowerMacintosh 660AV computer; then, piece by piece, he would remove all of the computer's internal parts while explaining each one's significance. When I dissected a computer, the spectacle always produced a small crowd of curious exhibit-goers; many people had never seen the inside of a computer before and were intrigued by the discovery that the "guts" can be removed. When Jason, one of the Exploratorium staff members, took the computer apart, he did a crowd-pleasing, high-energy routine. By the time Jason had taken the hard drive out, there was usually a crowd of people three-deep standing around the table trying to see. Even in a museum environment - or perhaps especially in a museum environment - it's impossible to duplicate the excitement of a "live" show with traditional or even interactive exhibits. With that said, it must be accepted that the processes I focused on could not be removed from a box and held in a presenter's hand; the subject I attempted to expose could not have been presented without the aid of some artifact like the program I designed.

One Computer Magic user suggested that my program might be used as a supplemental aid that an instructor could use to demonstrate computer concepts rather than as a stand-alone application. Most people did not benefit from the depth of material built into the program. Although I wrote text descriptions of the computer processes, the text was usually skipped because users did not want to spend their time reading from the screen. I'm sure that the problem was magnified by user expectations of the Exploratorium environment. The Exploratorium's audience is geared to "doing" rather than reading or watching; they are prepared to be active participants rather than passive readers. Users, in general, were more responsive to the activity-oriented portions of the program.

Observations

One of the unexpected behaviors I observed was that users, particularly kids, tended to spend an inordinately long time on the "Draw your own picture" screen of my program. This one screen uses a very simple black and white paint program to let the user draw one pixel at a time. The program is much simpler than the drawing programs available to most people. When either a supervisor like myself or a parent was present with a child, the user was more likely to proceed beyond the "Draw your own picture" screen. Unsupervised, however, many users drew their pictures and then wandered away. I think this was partially a result of the environment at the Multimedia Playground; people came to this exhibit wanting to create something, and they were disappointed with the CD-ROM projects that invited observation of other people's work but didn't permit user creativity. Many took the opportunity presented by my program to create a picture - regardless of how simple it seemed.

Navigation techniques varied, probably due to the audience's varied prior experience with software. At least one user (approximately nine years old) had trouble finding the backwards and forwards buttons for linear navigation - although their placement would seem obvious to those who have used other educational multimedia programs. This particular user was able to access all screens of the program by going directly to the map screen and navigating from there. Another notable behavior, one that seemed just about universal, was the disinclination of users to click on the "help" button or to ask me for help when I was sitting next to the computer with my "volunteer" button on. I often found myself offering help before it was asked for.

I noticed that many people were bewildered by the "Paint-by-Numbers" screen. Perhaps it was because there are too many processes on this screen that only work if they're done in the correct sequence. Most users weren't successful enough with this activity to understand its significance. When I explained this exercise to users I tried to emphasize for them how important the graphical user interface is to computing. This activity suggests how difficult it might be to draw a picture on the screen without such an interface; without a GUI and a mouse one would need to know how to address each pixel on the screen individually in order to change the colors of those pixels.
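The idea behind that explanation can be sketched in a few lines of code. This is a hypothetical illustration, not part of Computer Magic: a tiny one-bit "screen" is modeled as a grid of 0s and 1s, and without a GUI or mouse, each pixel must be changed by naming its coordinates explicitly.

```python
# Hypothetical sketch: without a GUI, "drawing" means addressing pixels directly.
# An 8x8 one-bit screen is modeled as a grid of 0s (white) and 1s (black).
WIDTH, HEIGHT = 8, 8
screen = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y, value):
    """Turn one pixel on (1) or off (0) by naming its coordinates."""
    screen[y][x] = value

# Even a short diagonal line requires one explicit call per pixel:
for i in range(8):
    set_pixel(i, i, 1)

# Render the grid as text: '#' for an on pixel, '.' for an off pixel.
for row in screen:
    print("".join("#" if bit else "." for bit in row))
```

A mouse-driven paint program hides exactly this bookkeeping: dragging the cursor quietly issues one pixel-setting operation after another on the user's behalf.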

A father and son, I observed, spent a long time reasoning through the XOR gate puzzle and finally got it right. Working together, these two really seemed to understand the purpose of the problem and enjoyed solving it. After they had finished, the son told me that he was actually a Hypercard developer himself and that he was working on an informative stack on dinosaurs. That conversation confirmed for me that age is not a reliable indicator of one's ability to understand computing concepts. This ten year old clearly had a better understanding of programming than most of his elders.
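The logic the puzzle asks users to discover can be expressed compactly. The following is an illustrative sketch (not the thesis program itself) of one standard way to build an XOR gate out of the basic AND, OR, and NOT gates:

```python
# Illustrative sketch: composing XOR from the basic gates AND, OR, and NOT.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    # XOR is true when exactly one input is true:
    # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

# Print the full truth table:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

Solving the exhibit's puzzle amounts to arriving at this arrangement of gates by trial and reasoning, which is presumably what the father and son worked through together.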

On-line Survey: Audience Analysis

In addition to direct observation of users, my evaluation methodology included use of an "on-line" questionnaire. The data collected from this questionnaire was then exported into a database so that I could sort and compare answers. The database records appear in appendix E. Over 150 users answered the questionnaire, though many did not answer all of the questions. I've selected some highlights and patterns from this data to present below.

Question one of the survey asked users to "Name one thing that computers can't do that you wish they could." This first question was meant to be a general interest grabber - a question that would get the user thinking about computers, their place in today's world, and how that may be changing. Although I know many users were simply being flippant, it seemed that many people had practical wants from computers. It was remarkable how many people wished computers would do the cooking and cleaning or their homework for them. It would seem that many people are still hoping that the computer will be a labor-saving appliance. There were also users who had specific requests for improvements in the technology: one wanted computers to "be more transparent to me! [and] be compatible with each other," while another wished they could "be more indestructible so that you can take them to work in a hot car." Several people voiced interest in having computers that could recognize voice commands. Some respondents merely wanted computers to be more affordable. One graphic designer - and I can sympathize with this - wished computers would just "stop crashing."

There were also intellectually ambitious responses, like the one from a newspaper editor who wanted a computer to "anticipate your needs by understanding human thought patterns." An elementary school principal wished computers could "draw subjective conclusions" - although I wonder what a computer might do with such conclusions. The next question was a follow-up; referring to question one, it asked "Do you think computers will ever be able to do that?" Responses to this question revealed that people have some pretty high expectations for computers in the future. It didn't seem to matter whether the desires were utilitarian, like wishing the computer would cook dinner, or academic, like wanting computers to operate at a higher cognitive level: 66% expected that computers will do in the future what they can't do now. Only 34% responded "no," a computer will not be able to do that[5]. It would seem that confidence in technology is very high at the moment.

A whopping 84% of those responding said that they have a computer at home; even more - 86% - said they use a computer at school or at work. Only 7% responded that they do not use a computer at work or school, nor do they have one at home[6]. Of the respondents to the question "How would you describe your level of knowledge about computers?" 14% answered that they are able to write programs, 49% answered that they are experienced users, and 37% said they are beginners[7]. According to these statistics, a majority of those who tried my program had extensive experience with computers (63%) - a fact that shouldn't be surprising since so many of these people have computers at home. A comparison of the number of computer owners to the number of experienced users suggests that somewhere around 20% of the computer owners are just beginning to use computers[8].

Question seven asked: "Are you interested in knowing more about how computers work?" Only 3 out of 76 people responded negatively to this question. Of the 44 people responding to the question "did you learn about binary mathematics in school," 15 said no, 21 said yes, and 8 didn't remember. Although the size of this sample is small, it does suggest that quite a few people have at least the foundation to learn more of the details associated with digital processing.

Users between 2/26/95 and 3/12/95 were asked "If you have ever tried to find out more about how computers work describe how you did it below:" Based on the responses to this question, such quests have resulted from a practical need to know how to use a computer rather than from the abstract pursuit of knowledge. There did not seem to be one dominant search method. Many people went the route of taking a class, some attempted to read computer manuals, others asked friends. Several people said that they had come to the Exploratorium for this very purpose.

Some information seekers took the discovery learning approach. One example of this is the sixth grader who rummaged through the files on a computer to see what they were. Evidently he was highly motivated and had an interest in programming. A fourth grader wrote: "My dad and I are going to take apart an old computer and see how it works." An engineer said that he "Took one apart" to learn more - and so did a restaurant manager who said he took his apart and "sat down with one of those `how does this work' books, and figured it out." Based on my experiences as a computer support professional, the discovery method most often used by adults was evident in this response: "I've spent hours screwing up and unscrewing up my own mistakes. I also read manuals, but only under duress!" One art student's response reflected practical goals but with a philosophical twist:

I'm interested in learning as much as possible that will enable me to manipulate the tools the way I want to. If that means actually learning more about how computers work, so be it. I do get frustrated because the more I learn the less I know, and the learning curve seems so enormous sometimes. My goal is to become a multi-media programmer/artist and to use the technology in a way that empowers people (so much technology seems to do just the opposite).

Not a single user responded that he would seek out a computer program to learn more about computers. Does this reflect a lack of awareness of the available programs (there are several), a rejection of the existing programs, or a reluctance to use electronic media as a primary tool for information retrieval?

On-Line Survey: Software Evaluation

The response rate for the evaluation questions asked after program use was far lower than the rate of response for the pre-use questions. In all, only twenty-eight people responded to the evaluation portion of the questionnaire - and not all of their responses were usable. Most users walked away from the computer station after spending only a few minutes trying out my program.

The first of the post-use questions asked "What part of the program did you like the most?" At least three people said that their favorite part of the program was the graphics section or the drawing module. As I observed above, some people spent a long time drawing black and white pictures in the program's first exercise. Another user liked "The laughter at my mistakes" and "The intuitive nature of the examples." The laughter refers to the sound effects used to indicate an incorrect choice in the XOR activity module. Several people said that they liked the "overall simplicity" or the "simple format." Others said that they liked the "written explanations" or the "clear explanations." It was particularly gratifying that one self-identified beginner wrote that he especially liked "the organizing of program instructions in logical order."

After asking which part of the program was best liked, I also had to find out which parts had been difficult for the users. Question two asked "Which part(s) of the program did you find confusing?" A surprisingly large number of people commented that they found the "pixel stuff" or the encoding section confusing; I think these people specifically had trouble with the "Paint-by-numbers" screen. Based on my observations the interface for this activity seemed to present navigational difficulties unless the users were provided with additional guidance. Five people found either the logic gates specifically, or the hardware section in general, confusing. One user said that the section on logic gates was helpful but he was "still not clear on how they are used on a large scale."

Several users complained of navigational difficulties, and some suggested improvements. One practical suggestion came from a graphic designer who commented: "Very nice job, but I found it odd that you had to get `help' for any sort of instructions. Why not just have the instructions pop up?" Similarly, another user said she found "Having to use separate boxes for help and info" confusing. A poli-sci major thought that I should have made the programming activity more flexible because, he wrote, "it appears that you only have two possible programs. Barring that, you should at least have the written program enacted by the computer after being done on a step by step level so that the casual user can see the logic flow behind each computer action." I think it would have been interesting to see the actual code which corresponded to the pseudocode used in the activity, as he suggests, but this material seemed too detailed and technical for the target audience.

During a brief interview, a user told me that the programming section should provide some indication of what kind of feedback to expect when the task is successfully completed. This referred to a discrepancy between feedback mechanisms in the programming activity versus the XOR activity. In the programming exercise, if you put the wrong program line in a slot it will snap back to its original position; however, if you put the wrong logic gate in a slot of the XOR activity, the gate may remain where you place it, depending on several conditions. This same user said he wished that the encoding section had included color in addition to black and white so that you could see how that affected the number of bits required for storage in memory.
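That last suggestion can be quantified with simple arithmetic. The sketch below is a hypothetical back-of-the-envelope calculation (the image dimensions are my own illustrative choice, not taken from the program): storage grows linearly with the number of bits used per pixel, so moving from one-bit black and white to common color depths multiplies the memory required.

```python
# Hypothetical sketch: how color depth changes the memory needed for an image.
def bits_required(width, height, bits_per_pixel):
    """Total bits needed to store an uncompressed image of the given size."""
    return width * height * bits_per_pixel

W, H = 100, 100  # an illustrative 100x100-pixel image

print("1-bit black & white:", bits_required(W, H, 1), "bits")   # 10,000
print("8-bit (256 colors): ", bits_required(W, H, 8), "bits")   # 80,000
print("24-bit true color:  ", bits_required(W, H, 24), "bits")  # 240,000
```

Seeing those numbers side by side would have answered the user's question directly: the same picture in 24-bit color takes 24 times the memory of its one-bit black and white version.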

Questions 3 and 3a asked "Is there something about computers that you know now that you didn't know previously?" and "If so What?" Several people responded that they had learned what logic gates are - even though several other people replied that the part about logic gates was confusing to them. An aviation inventory planner had learned about "the organizing of program instructions." A research RN responded that she had learned "encoding, how graphics are translated into computer language," which was the best response I could have hoped for.

Questions four and five asked users to compare programming and encoding with other familiar processes. I reasoned that having users uncover appropriate metaphors for these processes would demonstrate that my software was effective in generating new learning. Program users would indicate whether they understood the concepts by attempting to describe them in terms of processes and objects they're comfortable with. In retrospect I realize that the two questions meant to produce such analogies were worded badly; as presented, they seemed to be requesting a yes or no answer rather than a description of a process. Fortunately, most respondents did not simply respond with "yes" or "no."

Many of those responding to these questions compared programming and digital encoding to processes that I never would have considered (e.g. "Gardening," "raising kids," "understanding pathophysiology," "rhetorical logic," "cooking") but it was difficult to determine from the database which of these people were really responding to structural similarities as I had intended, and which users were being facetious. Did the person who said programming reminded him/her of cooking mean it in the way that David Harel meant it when he compared baking to computer processes? It was impossible to determine from the limited information I received in the survey.

On the other hand, some of the responses clearly presented valid metaphors. For example, several users said that digital encoding was reminiscent of translating languages. Since binary encoding does involve a kind of translation from one symbolic form to another, this comparison rings true. One person compared programming to flow charts and said that digital encoding reminded him of "color by number." Two respondents wrote that digital encoding reminded them of knitting. I had never considered the binary nature of knitting, but the parallels are obvious; in knitting there are only two stitches: knit and purl. From these two basic components the knitter can create very complex patterns, then the patterns can be joined together in a myriad of designs to create different objects. The metaphor is particularly interesting because it suggests a way of describing digital encoding to an audience that is unfamiliar with most of the terms used in the field of computer science. What I also found interesting, in light of the embedded computer/brain metaphor, was that not a single user suggested that either of these processes was similar to thinking or other mental processes.
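The knitting metaphor can even be made literal in a few lines. This is a playful, hypothetical sketch of my own, not part of the survey or the program: a row of bits is read as a row of stitches, just as a row of bits in the encoding activity is read as black and white pixels.

```python
# Playful sketch of the knitting metaphor: read a row of bits as stitches,
# with 1 -> knit ("K") and 0 -> purl ("P"), mirroring 1 -> black, 0 -> white.
def to_stitches(bits):
    """Translate a string of '0'/'1' characters into a knit/purl pattern."""
    return "".join("K" if b == "1" else "P" for b in bits)

row = "10110010"
print(row, "->", to_stitches(row))  # -> KPKKPPKP
```

The point of the metaphor survives the translation: two primitive symbols, arranged in rows, are enough to encode arbitrarily rich patterns.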

Question six asked users: "What else would you like to learn about computers?" One ten year old wanted to learn "How to make games for my computers." Several people answered that they'd like to learn more about programming. In some cases the responses focused on practical needs; for example, a retiree replied "I would like to learn how to use communications network and any other skills necessary to just keep up." The same person, a self-identified computer beginner, said that Computer Magic had provided a "Basic idea of how [computers] work."

Conclusion

One can extrapolate from my observations that the various exhibit media have differing capacities to attract attention in the museum environment. The methods of presentation can be ranked hierarchically from the best attention grabbers to the least magnetic exhibits: the best method for generating excitement is clearly a live demonstration; next in the ranking are interactive computer interfaces; static, traditional displays that do not invite hands-on interaction from the exhibit-goer draw the least attention. Although multimedia programs potentially offer greater depth of information, they are not necessarily better than other display media in this environment. In the museum, the audience tends to "graze" displays rather than spend long stretches examining a single exhibit. This pattern suggests that there is a need to create multimedia displays that can be appreciated on two levels: the information initially presented should be clear enough for users to absorb the concepts in just a few minutes. Once that is accomplished, the user should have the option to explore greater depths. Multimedia kiosks that require a longer period of engagement before delivering an educational payoff are likely to leave users disappointed. My observations indicated that the graphics activities in Computer Magic were intuitive enough that users received an immediate educational payoff. Unfortunately, many of the program's users wandered off before they were able to explore the depth of information built into the software.

As I noted in the chapter on evaluation design, there were several problems with the evaluation of Computer Magic executed during the Multimedia Playground exhibit. Short attention spans, problems obtaining complete data sets, and poor phrasing of survey questions all combined to suggest that a second, more controlled evaluation was necessary. My intention had been to determine whether the Computer Magic program was effective at generating an understanding of computing processes; however, I realized that this determination could not be made without more extensive interviewing and observation - methods which were inappropriate in the Exploratorium's chaotic environment.

The occurrence of learning is related to user expectations and environment. User expectation in a museum environment is that interaction with any exhibit will produce an educational payoff within just a few minutes; the expectation of students in a classroom is that learning will occur on a different time scale. In the classroom students are routinely engaged in curricula that deliver their educational benefits over a longer period of time. Interactive multimedia designed for a school environment has the luxury of allowing information to unfold gradually. Design of instructional materials must take these differences into account.


Copyright © 1996 by Lisa H. Weinberg