Weekly Reading Summaries

Week 12

Collaborative Accessibility: How Blind and Sighted Companions Co-Create Accessible Home Spaces

Summary:

The authors of the paper expanded the concept of ability-based design to study not only a specific user group but also how that design is used collaboratively in an environment with other users. The authors found that tools and user interfaces are used not only by the visually impaired user but also by their sighted partners. The interaction goes beyond just the interface for a blind person; it also impacts their relationships. The article calls this collaborative accessibility, where "family members, friends, acquaintances, or strangers help (or hinder) accessibility" (3).

For example, one blind participant added Braille indicators to the microwave, but their family found that the indicators interfered with their ability to use the microwave and had to be removed.

The paper also provided additional evidence that HCI has focused only on creating environments and user interfaces that let blind people be independent, and that further research is needed on how they can collaborate better in environments shared with sighted people.

Reflection:

This was an eye-opening research paper. I too had viewed HCI for the visually impaired only as a way to help them live independently; however, this is a narrow focus, and HCI needs to adopt a more social outlook.

Relationships between humans can be very difficult, as we all have our own unique perspectives formed from our different senses and our unique past experiences. The researchers made great points about how sharing an experience is an important part of relationship building. HCI should focus on how we can design experiences that users with different impairments can share together.

I cannot say that I understand the challenges of living with a visually impaired partner, but I have faced similar situations. My husband is very visually impaired without his glasses. If I don't find his glasses and place them by his bedside, he won't be able to find them in the morning. I can relate to the staging of the environment mentioned in the article.

Knowing this, and the struggles that already exist in a relationship between two sighted people, the article made me value my own relationships. I felt very sad, as I couldn't imagine not sharing the joy of watching a movie together.

I’m glad that I read this article and now have a new appreciation for my own experiences and inspiration to focus future HCI on social experiences to improve relationships.

“Just Let the Cane Hit It”: How the Blind and Sighted See Navigation Differently

Summary:

"Never trust sighted people." This is a quote jokingly said by a participant in the study; however, it provides great framing for how the visually impaired really feel when being guided by a sighted person.

The authors of this paper tried to solve just that: how can a sighted person guide a visually impaired person better? Sighted people often exhibit the following behaviors, which are unproductive and even dangerous when guiding the visually impaired.

1) Grabbing them – This is very dangerous, as it can cause a fall. Sighted people also need to realize that blind people are supposed to bump into objects with their canes so that they can establish boundaries.

2) Shouting at them / being ambiguous – Blind people find this off-putting, and the terms shouted are often too ambiguous to follow, for example shouting, "over here!"

3) Guiding users to wide-open paths – Sighted people think that having no obstacles is best for a blind walker; however, barriers such as sidewalk edges or other obstacles help them orient themselves. It is much more difficult to walk in a straight line without them.

4) Timing/orientation – Sighted people didn't know when to give cues and often didn't give correct directions, for example confusing right and left or miscalculating exact steps and feet.

The researchers did find that participants were able to adapt to each other over time and "speak their language," meaning that a sighted person can learn to give better verbal cues. The findings in this paper can then be applied to building assistive technology for guiding the visually impaired.

Reflections:

The article also focused on the social aspect of interacting in the same environment with the visually impaired.

This article gave good insight into how I, as a sighted person, can understand how blind people navigate spaces. Understanding key aspects, such as not removing all obstacles, can then be applied when designing assistive technology.

By studying and interacting with your user base, you can begin to construct their mental model and then apply this to your own research. One important insight is that communication was key for both the sighted and the visually impaired person to learn and adapt to each other. This makes me wonder if technology will ever have the intelligence to adapt in real time and take corrective actions based on previous mistakes and emotions.

It seemed that the sighted person had to make many judgment calls minute by minute. Is this obstacle harmful or necessary? Should I allow them to bump into the grass? Would a robot or other technology be able to make these quick situational judgment calls? Could it respond to the emotional distress or wants of the user?

For example, when Garmin was first widely used, it was a joke that it would routinely send you through the ghetto, because it is often shorter to drive through the city than to take freeways. Garmin assumed the shortest path was the best, but it could not take into account situations the driver might want to avoid.

Sharing is Caring: Assistive Technology Designs on Thingiverse

Summary:

As many users face unique challenges, a "do-it-yourself" attitude has been embraced by the assistive technology community, leading many people to create their own tools and modifications. With the open-source revolution and at-home 3D printing, assistive technology has also progressed. The authors of this paper studied a popular website for open-source 3D printing designs called Thingiverse.

The paper focused on what types of assistive technology were being developed, who was developing this technology, and why.

The authors analyzed 363 designs by 273 designers and found the following:

The category with the most designs was tools for "medication management, which had 130 things of the total 363 and was dominated by pill boxes, bottles, dispensers and accessories like tops and dividers." They also found that the next most popular category was prosthetic limbs.

They also found that many of the designers were not like the users they were designing for, and that they developed designs for different reasons, ranging from research projects to wanting to help a friend.

Reflection:

http://www.thingiverse.com/thing:33815

For my reflection, I wanted to do my own exploration of Thingiverse. I found it difficult to find many things even after typing in multiple search terms, for example "assistive," "prosthetic," etc.

One of the coolest items I did find was the one-armed nail clipper, which is featured above. It's a great, simple, and cost-effective design. It could also be used universally, as it is difficult for any user to clip their own nails, and by the elderly, as many have difficulty holding nail clippers and reaching their toenails.

One great aspect of the Thingiverse community is that users can give direct feedback to the designers, who can then communicate directly with users, make tweaks, and update the current design. Another great aspect I'd like to point out is that 3D printing is a great way to prototype, as it is cost-efficient and can quickly produce new models.

 

Week 11 (4/23)

Kane, Bigham, Wobbrock. “Slide Rule: Making mobile touch screens accessible…” (8 pgs)

Summary:

Touchscreens have become a popular interface today; however, for blind users they are unusable unless they have custom hardware or tactile overlays. The students at the University of Washington developed an interactive way for blind users to work with touchscreens. Close your eyes and try to make a call using an iPhone. I had never thought about how difficult this is, as touchscreens don't have tactile feedback.

How could the students design a touchscreen with intuitive gestures and responsive feedback so that blind users could make phone calls and use other basic phone features? The students designed an application called Slide Rule, in which multi-touch gestures control different phone features. Overall, users found it easy to learn and use, but it had a higher error rate than a phone with tactile buttons.
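As a rough illustration of the browse-then-select interaction described above, here is a minimal sketch of a gesture dispatcher. This is my own hypothetical example, not the authors' implementation; the speak() helper, the gesture names, and the item list are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a Slide Rule-style gesture dispatcher (not the authors' code).
# speak() stands in for real text-to-speech; the item list is illustrative.

def speak(text):
    print(f"[TTS] {text}")  # placeholder for speech synthesis output

class SlideRuleSketch:
    def __init__(self, items):
        self.items = items   # e.g. contacts or phone functions
        self.index = 0       # item currently under the finger

    def on_finger_scan(self, y, screen_height):
        # Dragging one finger down the screen reads out the item under it.
        self.index = min(int(y / screen_height * len(self.items)),
                         len(self.items) - 1)
        speak(self.items[self.index])

    def on_second_finger_tap(self):
        # Tapping with a second finger selects the last item spoken.
        speak(f"Selected {self.items[self.index]}")
        return self.items[self.index]

phone = SlideRuleSketch(["Call Mom", "Call Work", "Voicemail"])
phone.on_finger_scan(y=300, screen_height=480)  # announces "Call Work"
phone.on_second_finger_tap()                    # selects it
```

Even this toy version shows why instant audio feedback matters: the user can keep scanning until the right item is spoken before committing with the second finger.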

Reflection:

I found this study fascinating, as I enjoy using touchscreen technologies but had not considered how isolating they can be for a certain user group. The students used some common gestures that most users already use today to control a touchscreen, and then added additional gestures that map to new ways of controlling the interface. Though the gestures are very intuitive, I would still have a difficult time recalling which gesture to use and when, but I think the higher error rate of Slide Rule would diminish over time as users became more familiar with the interface. The feedback from the gestures is almost instant, so even though users have a high error rate, they can quickly recover.

Another reflection about Slide Rule is that, as a sighted user, I would also like to use this interface. We live in a multitasking world. The sad reality is that I'm often looking at my computer or walking while still trying to make a phone call without looking at the screen.

Slide Rule also reminded me of the interfaces from Minority Report. Even though touchscreens use direct manipulation and the Minority Report interface is indirect, the gestures and navigation are similar. Both use quick gestures to move through lots of information and focus on helping the user scan data quickly.

[Image: Minority Report UI]

Source:

http://www.spartanpr.com/2013-the-year-minority-report-becomes-real/

Wobbrock et al. "Ability-Based Design…" (27 pgs)

Summary:

The authors of the paper proposed that designers should focus on users' abilities instead of their disabilities, and they developed seven principles to help guide designers. The article seems to blend the previous two topics, which studied how a user cognitively tackles a task and broke user actions down into scientific data points. The authors were also able to back the proposed seven principles with evidence from fourteen projects and their applications.

Reflection:

Since the "walking user interface" mentioned in the article also ties in with Slide Rule, I'd like to concentrate my reflection on that.

The authors stated, "a person's ability is not determined solely by his or her health, but also by the current environment or context." This is a great point: earlier we studied how researchers tested the human processor model in a lab to measure users' reactions, but what if the researchers had studied users' reaction times while they tried to hold a baby or worked in poor lighting conditions?

Yes, my phone works fine while I'm seated, but when I'm walking outdoors I can't see the display, and the movement makes my error rate higher on a touchscreen. Are there ways to solve this? For example, the authors suggested that if the device is moving at a certain pace, it could utilize the screen space better and make options larger to avoid errors.
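As a rough illustration of that idea (my own sketch, not the authors' implementation), a walking-aware interface might scale its touch targets from a pace estimate derived from the accelerometer. The threshold values, scale factors, and function name below are assumptions for the example.

```python
# Hypothetical walking-aware UI sketch: enlarge touch targets as the user moves faster.
# The pace thresholds and scale factors are illustrative assumptions, not from the paper.

def target_scale(pace_m_per_s):
    """Return a size multiplier for touch targets based on movement speed."""
    if pace_m_per_s < 0.3:    # roughly stationary
        return 1.0
    if pace_m_per_s < 1.5:    # walking
        return 1.5
    return 2.0                # jogging or faster

base_button_px = 48
for pace in (0.0, 1.0, 2.0):
    print(f"{pace} m/s -> {int(base_button_px * target_scale(pace))} px buttons")
```

The design trade-off is the same one the authors point to: larger targets mean fewer items fit on screen, so the interface trades information density for accuracy only when the context demands it.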

This feature also conforms to the principle of commodity, which is not only a design principle but also an ethical one. Commodity means that "systems may comprise low-cost, inexpensive, readily available commodity hardware and software." Why not adapt the existing technology the user already has to solve a problem instead of developing new technology? The solutions of ability-based design should be reasonable. We can't just develop specialized technology for the blind or for people who want to walk with their phones; we should consider that one solution is to adapt the existing technology that everyone already has access to.

Dix, Finlay, Abowd, Beale. Human-Computer Interaction (Chapter 10, 30pgs)

Summary:

The universal design chapter described different ways that interfaces can adapt to different user populations, for example, how sound can be used in an interface through speech recognition, speech synthesis, or fixed playback speech. Each interface extension or attribute has different solutions to help guide or assist users in using the interface. The chapter focuses not only on individual interface inputs and outputs but also on the pros and cons of each, to help develop an interface that removes cultural or physical restrictions for the user.

Reflection:

This last reading seemed to tie together the previous two readings. I had learned about universal design in Dr. Kuber's class. It is interesting to see how many designs we use every day were actually created with a specific user in mind but have become common designs used by all users.

One thing I thought the authors should have brought up is that UD is more than just technology and interfaces; it also heavily involves users' mental models and different levels of cognition.

Universal Design for learning takes into account not only visual design but also the mental models of the users.

Working for Pearson, an educational company, I am interested in how UD can be applied to learning. I think these principles can also be applied to interface design.

For example, when designing a UDL curriculum, a teacher should provide multiple representations. For a designer, this should mean taking into account not only icons and text but also users' previous experience and how to help them recall that knowledge.

Another concept of UDL is action and expression, where students should be given multiple ways to show they understand something and teachers can give feedback.  The same can also apply to user interface design.

It makes sense that UDL and interface design have a lot in common since both are trying to engage, teach and train different user bases using multiple approaches at the same time.

 

Week 10 (4/2) – Cognitive User Modeling

Chapter 3 Cognitive Aspects

Summary

Since HCC is an interdisciplinary field spanning behavioral science, social science, design, and technology, it's important to fully understand cognition. The textbook walks through what cognition is and how it is applied in HCC. To fully understand our users, we also need to understand how cognitive systems work and how they differ for each user. If we understand how a user approaches problems, we can design more intuitive interfaces.

Reflection

MINI's augmented reality is a great example of external cognition.

I love this video and how they have overlaid the driving experience with augmented reality. The designers had to take into account the user's mental model. What existing ways and tools does the user rely on while driving? Would it be effective if users could see the GPS and backup camera they already use on this display? When and how do you notify the user so as not to cause a distraction and a possible accident?

It follows the three main goals of external cognition:

1) It adds GPS/navigation to reduce memory load.

2) It offloads computation: the see-through areas of the car let the user stop guessing how close they are to the curb, and the system automatically calculates how long it will take to walk somewhere versus drive.

3) It supports annotation and cognitive tracing by adding guide arrows along the drive or walk.

Nardi. Concepts of Cognition and Consciousness: Four Voices

Summary

Just as Thaler and Sunstein talk in Nudge about how the designer attempts to use design to influence cognitive choices, Bonnie Nardi explains how we make those cognitive choices and the role of consciousness. The author goes on to discuss the different approaches and paradigms taken when explaining consciousness and cognition through activity theory. Activity theory explains consciousness and cognition through the lens that both are part of our social interactions and cannot be separated.

Reflection

I read this paper twice, and though I understand the facts, I'm unclear on what the point is.

The article seemed to be a mishmash of ideas from other authors, with loosely woven thoughts on consciousness, cognition, and activity theory. It's unclear what the true point was among the different perspectives on consciousness and cognition. The author explained how they were viewed by activity theory but never explained why this was important.

One takeaway from the article that I could relate to was the quote, "If you design mediating tools for others, you are also responsible, in part, for the consciousness of others. Our tools make us who we are, says activity theory" (38).

This goes back to what Thaler and Sunstein were saying: we are choice architects, but also tool architects. If I make a poor software product for kids to learn English, and in turn a child can't learn the language because the interface or the mental models used to teach it are poor, then I am personally responsible for that child not learning. I agree with this and wish more people viewed their work not only as creating a product but as a way to change the world.

People use things that are designed to change both their consciousness and their cognitive abilities. I don't understand the author's statement that we use tools to "prop up our limited intellects." How could we continue to grow if we didn't use tools? I think the opposite: tools prove that we are smart.

Were monkeys supposed to die of starvation rather than create a tool to crack a nut open? Does creating a tool to provide a solution mean they were limited? No, it means they were smart.


Card, Moran, Newell. The Psychology of Human-Computer Interaction

Summary

This book gives a comprehensive and scientific explanation of how the human mind processes and reacts to information. It argues that humans use a set of memories, processors, and principles to process and react to information, and that this can be broken down into subsystems, much as a computer has a certain amount of RAM or memory. The authors give concrete numbers for the subsystems to establish a scientific baseline for measuring how quickly humans process and react to information. The book goes into detail about the many variables that affect each subsystem, for example, the difference in performance between long-term and short-term memory, and how motor skills can be broken down to analyze the overall performance of a human's ability to process information and complete a task. Though highly technical, the studies showed that users react differently to information and to different designs. We can in turn use this information to understand how the design of human-computer interactions affects users' ability to process information and to evaluate the overall usability of a system across different performance baselines.
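To make those baseline numbers concrete: the Model Human Processor assigns typical cycle times to the perceptual, cognitive, and motor processors (roughly 100 ms, 70 ms, and 70 ms in the book's middle-of-the-road estimates, each with a wide range), so a simple reaction, perceive a stimulus, decide, press a key, can be estimated by summing them. The sketch below is my own back-of-the-envelope illustration of that idea, not an example from the book.

```python
# Back-of-the-envelope estimate using the Model Human Processor's typical cycle times
# (middle estimates of roughly 100/70/70 ms; each value has a wide plausible range).
PERCEPTUAL_MS = 100  # perceptual processor cycle
COGNITIVE_MS = 70    # cognitive processor cycle
MOTOR_MS = 70        # motor processor cycle

simple_reaction_ms = PERCEPTUAL_MS + COGNITIVE_MS + MOTOR_MS
print(f"Estimated simple reaction time: {simple_reaction_ms} ms")  # about 240 ms
```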

Reflection

This chapter looked inside the black box of consciousness and tried to develop a standardized way to evaluate cognition and reflexes. I think this gives HCC the applied grounding that the academic theories need. That said, I believe there is a place for both, as in physics: there are theoretical physicists who dream up ideas about black holes and missing particles, and there are experimental physicists who use experiments and detailed data to prove those theories.

HCC needs both the Nardis and the Human Processor Model. This is why I like HCC as a field: it combines creative and logical thinking, first dreaming up a new interface or technology and then developing the new technical capabilities needed to build it.

Week 9 (3/26) – Input / Output Devices Readings

Chapter 6 – Interfaces

Summary

Since technology and how we use it are constantly changing, many different types of interfaces have been developed. The chapter describes the history of different interfaces as well as their pros and cons.

It was interesting to see social motivations driving user interface design, as I had previously thought of designers and technologists as the ones leading interface design changes. Reading the chapter, I could see the evolution and growth of interfaces as they were designed around the social push to extend the interface beyond the individual user's experience and grew to support more interaction with other users and even with the computer itself.

Reflection

I'd like to reflect specifically on augmented and mixed reality interactive interfaces, as covered in section 6.2.17. These interfaces seem to bring together all the other interfaces, as they explore the ways that the physical, digital, and virtual reality worlds are combined.

Below are some real-world examples of each.

Augmented Reality:

SnapShop Showroom – lets you visualize actual furniture from an online store (IKEA, Crate and Barrel) in your own home. It also allows you to share pictures with friends to get their opinions.

[Image: SnapShop Showroom screenshot]

Source: https://itunes.apple.com/us/app/snapshop-showroom/id373144101?mt=8

Virtual Reality:

Oculus Rift – goggles that let you move around a virtual world.

[Images: Oculus Rift head tracking and gameplay]

Source:  https://www.oculus.com/rift/

Mixed Reality:

THE LOST SPACECRAFT: Liberty Bell 7 Recovered Exhibit-

Visitors can virtually ask questions of expedition leader Curt Newport about the challenges of locating and recovering the spacecraft.

[Image: The Lost Spacecraft exhibit]

Source: http://www.evergreenexhibitions.com/exhibits/lost_spacecraft/photopress.asp

In mixed reality, the exhibit uses many different interfaces to create multiple experiences for users to learn about the recovery of a spacecraft. There are still many shortcomings with virtual reality: it is difficult to navigate and is often an individual experience. To solve this problem, the museum designers used physical interfaces to navigate virtual interactions that then become physical again, projected for all to see.

These mixed reality interfaces and experiences are different from air traffic controller interfaces. They don't have to be designed to make their functions obvious; they can be presented with some mystery as to their functions and inspire children to play and learn.

Weiser. The Computer for the 21st Century. (8 pgs)

Summary:

This article was written in 1991 and makes predictions about the computer and its interfaces ten years into the future. Working at Xerox PARC in Palo Alto, a place of innovation, Weiser was not only making predictions but also helping to create the new interfaces that would lead the way. The main issue Weiser was trying to solve was making computers more ubiquitous by developing computers with smaller, more mobile interfaces. To help readers visualize his vision, Weiser tells a story at the end of the article about a 21st-century working woman and the different interface interactions she has throughout her day.

Reflection:

[Image: Star Trek, predicting the future since 1966]

Source: http://www.globalnerdy.com/2014/01/06/mobile-technology-as-predicted-by-star-trek/

Authors have often made predictions about future technology. Weiser's research and predictions about interfaces and interactions have largely come true in the 21st century.

However, it is often more difficult for authors to predict the social responses and implications of technology. Weiser stated, "ubiquitous computers will help overcome the problem of information overload… Machines that fit the human environment instead of forcing humans to enter theirs will make using the computer as refreshing as taking a walk in the woods" (89).

I disagree that simply making interfaces and computer use seamless with reality will also solve the problem of information overload. I understand that seamless interfaces will allow us to process information more efficiently, since we won't be distracted or frustrated by the interface; however, I think this can also cause information overload.

Currently, devices are tied to our lives but not integrated into the background. We can choose when to pick up our phones to answer a message or check the weather. There is a danger with seamless integration, as with Google Glass: information overlaid onto reality and quick access to information can also cause information overload.

If we are constantly and effortlessly receiving information, it may lose its value. Quick access to information may make it harder to understand or categorize its importance. Another issue is that information overload may also lead to memory problems. If we are not able to adapt to constant stimuli, we may start to rely on the interface for common knowledge. Humans have more access to information than ever before. Will the human brain adapt to store and sort this information? Will we have to rely on interfaces to store and sort it? As we design interfaces to deliver, process, and store information, we also need to think about the implications for how our own brains perceive and process that information.

Week 6 Readings

Chapter 5 – Emotional Interaction

Sometimes it feels like I spend more time with my parents and in-laws trying to figure out their TV components than doing anything else.

Technology is not only a part of our lives; it also causes emotions and drives behavior. How can our emotions and behavior be driven by plastic and silicon chips just as much as by our relationships with others?

The chapter addresses the interesting emotional relationship that we have with interfaces and technology:

1) Our own emotions while using interfaces

2) The computer trying to mimic human emotions

Since mimicking true human emotion is still some time off, it often seems that when technology tries to exhibit human emotions, it is mocking us. It just doesn't seem right. It's almost better for technology to remain stoic and unemotional.

One real-life example of this: when I was babysitting, the kids would scream when the "scary Thomas the Train" came on. I asked them why it was the scary one, and they said they didn't know, but when I looked, the trains were 3D-modeled and animated with realistic faces that tried to mimic human emotions. It was not quite right, part human, part mechanical, and it signaled that something wasn't natural. The children would rather watch the old-fashioned Thomas the Tank Engine, which looks more like a train and doesn't try to mimic human facial features.

It was interesting to read about the different frameworks and models that can help guide designers toward creating a more pleasurable experience for the user. Our emotional responses to technology often last beyond a one-time experience and start to become part of our lifestyle. For example, one-click buying is a technical function of Amazon, but it leads customers to purchase more items from Amazon than from other online or in-store options. The pleasant emotional experience of the one-click function leads customers to change their behavior, and Amazon becomes part of their way of life. This way of life follows the pleasure model: the customer receives the physical pleasure of holding the product, the social pleasure of easily posting a recent purchase on Facebook, the psychological pleasure of the ease of one-click buying without the monetary exchange being brought to light, and the cognitive pleasure of knowing the purchase will arrive in two days.

 

Thaler and Sunstein. Nudge: Improving decisions about health, wealth, and happiness. (pp 1-52)

The introduction and first two chapters of the book gave good insight into what drives human behavior. I really enjoyed how the book backed up many of its theories with evidence from experiments.

The book starts off with an introduction explaining the role of a choice architect, stating that this role carries the responsibility of organizing the context in which people make decisions. I had never heard this term before.

The book then goes on to explain why choice architects are needed and justified. Humans are already naturally biased by our previous experiences, and often our biases are not founded in truth or facts. The book states that humans' basic heuristic systems are flawed, and what we perceive as facts is often our brain making patterns out of what is really random. The last chapter talks about temptation and humans' susceptibility to it, and how, even after yielding to temptation, humans perceive their actions differently.

I assume the book concludes that since humans are biased and yield to temptation, there is a need for a choice architect to help "nudge" us in the right direction. I believe this is true, and in the end there really could never be a neutral design, since we are all biased anyway.

The book also points out the impact that a choice architect has and that they should follow the rule of thumb that everything matters. In our designs, everything matters as well, and our design can have an impact, whether good or bad, on someone's behavior.

 

Readings – Week 5

Making and Breaking the Grid

This article was well written and filled with extensive information. The article describes a "search for universal culture," meaning that one thing every culture and art form has in common is the use of a grid or a reference to breaking the grid. The article draws on many examples from history to explore this thesis.

I have always identified blocks with the Bauhaus movement in graphic design; however, I did not previously know about Switzerland's neutrality and its impact on graphic design. It makes sense that a politically neutral state would also use "reductive techniques and simplification." The historical-overview approach the article took helped frame the different ways in which graphic design has changed dramatically while still using the same grid principle.

As the article mentions in the beginning, humans have been building from a grid from very early on; the Great Pyramids were built using a grid. The human mind likes order, even if it's order within chaos. It was interesting to see that, without the grid overlay, a graphic design sometimes seemed random, but when placed on a grid everything aligned.

This article tells all the great secrets. The benefits of working with grids are "simplicity, clarity, efficiency, economy, and continuity." The example of breaking the page into parts shows how easily rearranging type on a grid can quickly create different graphic designs.

Lidwell, Holden, and Butler. “Universal Principles of Design” 

This book truly covered all the basics, and I loved the illustrative figures.

I’d like to point out the ones that I found the most interesting.

Law of Prägnanz – also known as a Gestalt principle, this law states that people interpret ambiguous elements in the simplest way. It is shown in the fact that overlapping shapes are perceived as two separate objects instead of one complicated new form. This principle reminds us to use the simplest forms in our designs to avoid misinterpretation and to keep users from finding meaning in useless patterns.

Ockham's Razor – remove unnecessary elements. This one I've found to be the hardest. As designers, we may spend hours on different design elements, and it may be hard to see the need to remove them. On the other hand, it may be easy to remove too many elements and lose the image's meaning for others, since we already know the meaning ourselves. This is the principle I want to focus my attention on this semester.

Picture Superiority – are there ways to remove text and present the information in a picture or icon? That way the viewer is more likely to remember it. This is most important when building a company or brand. Think of how Nike uses its check-mark logo, and the brand is still easily recognized by millions. Another example of picture superiority is Starbucks' rebranding, where the logo now shows only the image and no longer includes the company name.

Week 2 Readings

Bannon, L. “Reimagining HCI” 

This was a great recap of what HCC, HCI, and HCD are, where the field started, how it evolved, and where it is going in the future. There were a lot of meaningful quotes throughout the article. A great quote from the author about the beginning of the human/technology relationship was that the "man-machine fit often seemed to fit the person to the machine, rather than vice versa." This is a very valid point, and one that still seems prevalent. Instead of thinking about the user and how they would use the machine, technology is often built first, and then we spend hours training the users. HCD was developed to help solve this problem; in this way we can stop blaming the human and start blaming the process and design of the system.

Another great quote is

“…the design role is the construction of the ‘interspace’ in which people live, rather than an ‘interface’ with which they interact…” —Terry Winograd

This is prevalent and seen in the way our technology and products don't just serve one use or live in one place. Today, Apple has built not only a phone but a lifestyle that encompasses the way we interact with the world and is a constant component of our lives for multiple uses.

Pruitt, J. and Grudin, J. “Personas: Practice and Theory”

I enjoyed that this article not only covered theory but also presented a case study with the challenges and findings of working with personas. Some key points about personas:

- They help with communication, not only with the users but with the entire team building the product.

- Building personas is not easy, and it's best to use multiple methods and data-gathering techniques, such as existing market research and research papers.

- Gather a list of key attributes that tie to the persona through data.

- Run sanity checks on the product using the persona.

I've used personas before, but only at the beginning of the life cycle. I found it interesting how the Microsoft team used personas not only to build and validate their product but also to keep the developers and other team members engaged and focused. Another key takeaway from this article was how personas can help the team look at the product and its benefits for multiple users at the same time and compare them.

Kolko, J. “Thoughts on Interaction Design” 

I identified with the author and the way that use cases are created in the workplace. They can be very helpful in discovering unknown requirements, but it is still very difficult to create something for a user who is not like you.

Since design can also lead to many interpretation leaps, it was interesting to learn about ethnographic tools that can help explain what people do and why they do it. The author did a good job of breaking down the lifecycle of creating a product and explaining how the different ethnographic tools can be applied at each stage.

The article also brought clarity to the different roles and responsibilities that an interaction designer has throughout the entire lifecycle: the interaction designer is not someone who comes in at the end but someone who plays a key role in capturing the product as a whole at each phase.

Week 1 Reading – The Design of Everyday Things

Everyone has a unique and different perspective on the world. This has led to great innovation and improvements to the human condition. It has also led to confusion and terribly designed products. The way I view the design of a product might not be the way it actually works, and it may not be the same as the user's perspective.

I enjoyed reading the chapters and loved how the author simplified good design by concluding that there are three different conceptual models: the designer's, the system's, and the user's. All three must align and not contradict each other to produce a usable product. Good design also does not end with a 2D or 3D representation; it may also include mapping and feedback. Mapping means the relationship between two objects; we must also think about the environment the product is in and its relationship to other objects. Feedback is also critical and often a forgotten piece of good design. Feedback tells the user that the action being performed is done and can include additional information about whether the action is correct or incorrect.

One real-world example of feedback is that Toyota had to add noise to the engine of the Prius C. The car was so quiet, which at first may seem like a good thing, but without that feedback the user could easily think the car is not running, and surrounding people can be endangered since they may walk right into oncoming traffic.

The book was very well written. It chunked information into simple categories that were easy to understand and provided detailed images.