by Donald A. Norman
In this chapter, Norman talks about "natural interaction" and how our machines should communicate with us in ways that arise naturally from their operation. For example, a kettle makes sound as the water heats up and the steam slowly makes its way out; when the water finally boils, air forced through a small hole produces a whistle. This makes sense to the user because boiling water releases steam. But when a microwave or dishwasher emits a loud beep, that isn't communication in a natural way. The arbitrary beep has nothing to do with the natural act of heating food or washing dishes, and unless you've learned the different tones of your machines, you wouldn't even know which appliance had beeped. Another concept Norman addresses, one that goes hand-in-hand with natural interaction, is "affordances." An affordance is a way we can interact with an object in the world; an object "affords" an interaction because it makes sense to us in some subconscious way. For example, a doorknob "affords" turning and a button "affords" pushing, so we know exactly how to physically interact with a machine or object even if we've never seen it before. Norman suggests that future machines should not only have natural ways to interact and communicate with us but should also have natural affordances, so they make sense to use. In this way, machines can give us information by interacting with us physically; if we're driving too fast, the steering wheel can push back at us or the seat belts can tighten.
Towards the end of the chapter, Norman talks about the perceptions that humans and machines have of each other. With the new suggestive systems that are always trying to guess what we're thinking or predict our actions, the machine's behavior actually becomes less predictable to us. If we assume it will act in a way that reflects our interests, we'll be caught off guard whenever it has predicted incorrectly, and that could be dangerous to humans. Norman says that machines should be predictable, because humans will never always act predictably. Instead of guessing what we want and acting on the guess, the machine should follow a set course and always let us know what is happening through a "playbook." This playbook should explain how the machine is working and why it made the decisions it made. It could be presented as a video showing the steps of a process while it's happening, or through natural interactions and sounds coming from the operation of the machine.
For once, I actually kind of agree with Norman on the first part of this chapter. The beeps that come from our machines are arbitrary and don't correspond to the operation that is happening. But when I can hear the water moving and washing the dishes in the dishwasher, or the clothes in the washer, I know what is happening and how the machine is working, and that's how I judge when the cycle will be done. I don't agree with his idea of a car with physical feedback, though. Maybe in the older days, when that kind of feedback was normal, it would have made sense, but today people have already gotten used to the interactions of technology. We shouldn't reinvent the computer to be more natural, because that would be confusing for those who grew up with the keyboard and mouse. And this applies to many different technologies: even if the interaction or the sounds aren't natural, that doesn't mean we need to make them that way, especially when most of the population is already used to the existing interaction paradigm. The only time natural interaction should be used in a new system is when the product is completely new and no one has ever used anything like it before. For example, if jet packs became commonly used and I was flying one for the first time, I wouldn't want it to beep at me arbitrarily. Does that mean I'm about to fall out of the sky, or just that I'm doing a good job piloting? I'd want it to tell me in some natural way if there's danger. But as I said, this only applies to completely new technologies where there isn't already a learned interaction paradigm. With existing systems, you're just going to confuse people and make them mad if you change the way they interact with their machines.
