Sunday, January 30, 2011

The Design of Future Things, Chapter 2

by Donald A. Norman

     In Chapter 2 of The Design of Future Things, Norman delves into the psychology of the relationship between humans and the technology we create. As our machines and technology become more sophisticated, the interaction between humans and those technologies becomes more important. Norman divides the human brain's processing into three categories: visceral, or instinctual, such as when we recoil from a hot stove; behavioral, which includes basic motor skills, tasks, and learned procedures; and reflective, which includes self-image, higher reasoning, and higher consciousness. Our technologies today take over some of this processing, and we interact with them at each of these levels. Our car, for example, may take away the visceral reaction to a bump in the road by cushioning the ride with shock absorbers. We interact with the car behaviorally by turning the wheel and pressing the pedals. But our car does not communicate with us reflectively - that is, we can't have a conscious conversation with our car.
    The reason communication between humans and machines isn't satisfying or completely possible is that machines and humans share no common ground. Machines may have lots of sophisticated reasoning systems and sensors built into them, but they can never compare to the complex psychological system that humans use for making decisions and communicating. So the future of smart refrigerators that caution you against eating unhealthy food, or cars that program a scenic route for you, is not readily possible. Norman explains that systems that just "do" or just "demand" will never interact successfully with humans. Instead, we need our machines to make suggestions in a more conversation-like manner and to explain themselves.


     While Norman's points make sense, I don't think that machines need to learn to be polite and explain all their actions to us. One of the benefits of technology is that it does the things we need it to do without our having to understand how it works. I don't see our technology making executive decisions for us and, for example, not allowing us to eat an egg. If our machines are going to tell us anything at all, it needs to be a suggestion, and they need to understand that humans are always the authority. In this, I agree with Norman. But I wonder if we really need all of our technology to start suggesting things. I have never used a media recommender that gives suggestions based on what music or movies you already like, and I don't think I need a fridge that tells me I'm drinking too much or a car that yells at me for going too fast. While it would be nice to have automated cars and other automated things, I don't want the reflective side automated. I want to be able to make all the choices about where we're going and the route, and let the car do all the visceral and behavioral things like steering and braking.

Extreme Programming Installed, Chapters 4 - 6

by Ron Jeffries, Ann Anderson, and Chet Hendrickson

      In Chapters 4 - 6 of Extreme Programming Installed, the authors present an idea that is integral to Extreme Programming and Agile development practices: the user story. A user story is a description of something the user should be able to do, usually called a feature. Each user story represents programming work to be done, and should specify the inputs, the correct outputs, and the options available to the user. When the design is expressed as user stories, the product grows through incremental updates as new features are added. The benefit is that there is always a working product to show to a customer.
     At the beginning of the planning process, the customers write all the user stories on index cards and give them to the programmers. The programmers look at each one, ask for clarification if needed, split a user story into multiple stories if it's too large, and estimate how long it will take to do. The estimation is done in "points," each of which represents a "perfect engineering week" - the amount of work you could get done if you were allowed to program with no interruptions for a week. So if a user story is given 2 points, it should take about 2 weeks to finish.
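The index-card workflow above could be sketched as a simple record. This is only an illustration of the idea, not anything from the book; the class and field names (and the flashcard example story) are my own invention.

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """One index card: a feature described from the customer's point of view."""
    title: str
    description: str                 # what the user should be able to do
    points: int = 0                  # estimate in "perfect engineering weeks"
    acceptance_tests: list = field(default_factory=list)

# A hypothetical 2-point story: roughly two uninterrupted weeks of work.
story = UserStory(
    title="Swipe through flashcards",
    description="As a student, I can swipe left or right to move between cards.",
    points=2,
)
```

If the programmers decided this story was too large, they would split it into two or more smaller records, each with its own estimate.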
     The customer is responsible for specifying acceptance tests for each user story; this way, the programmer knows exactly what functionality must be there and knows when the story is done. In Extreme Programming, testing is not a phase done at the very end of the development process. Instead, automated tests are built and run constantly, so that errors are caught right after a programmer finishes coding a section, when it's still easy to remember what changed and where the error might be.
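An automated acceptance test in this spirit can be tiny. The sketch below is my own example, not one from the book; the feature and function names are hypothetical.

```python
# A minimal sketch of a customer-specified acceptance test that is run
# automatically on every build, not just at the end of the project.

def add_flashcard(deck, front, back):
    """Feature under test (hypothetical): add a card to a deck."""
    deck.append({"front": front, "back": back})
    return deck

def test_new_card_appears_in_deck():
    """The customer's acceptance criteria for the 'add a card' story."""
    deck = add_flashcard([], "2 + 2", "4")
    assert len(deck) == 1
    assert deck[0]["front"] == "2 + 2"
    assert deck[0]["back"] == "4"

test_new_card_appears_in_deck()
print("all acceptance tests passed")
```

Because tests like this run after every change, a failure points at the code the programmer touched minutes ago, not at something written months earlier.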


     I really liked this section because it perfectly describes the way my group worked when I interned this summer. We made user stories and tasks for each story (online, though, not with note cards) and estimated points for each one. This really helped keep the group organized and let us all know exactly what work we needed to do and the time frame for getting it done. And although writing automated tests and tools can be time-consuming - some programmers may have that as their only job - I know from experience that it's worth it to find errors as soon as they're made. I can't tell you how many times I've forgotten how I coded something in a school project and had no idea what to change to fix it.

Sunday, January 23, 2011

Extreme Programming Installed, Chapters 1 - 3

by Ron Jeffries, Ann Anderson, and Chet Hendrickson

      In the first three chapters of Extreme Programming Installed, three different personas in the programming process are introduced: the Customer, the Programmer, and the Manager. Each of these people has a specific job and duty, but the core idea of Extreme Programming is communication. The Customer is in charge of communicating specifications for the software and being always available to everyone. The Programmer is in charge of implementing the specifications the Customer provides and going to the Customer for clarification on requirements. Lastly, the Manager is in charge of doing anything possible to make the Customer's and Programmer's jobs easier and faster, and of fostering communication between different members of the team. Everyone on this team is equal and working toward the same goal.
      The Extreme Programming paradigm emphasizes easy communication. The customer is not an abstract person the product is being developed for, but a physical person who is (preferably) on-site and heavily involved in the development process. As in the diagram below, the Customer writes stories about features that are needed, and the Programmer implements those stories. Open communication allows obstacles to be overcome quickly because the programmer can get immediate feedback from the Customer - which is a lot easier if the Customer is already on-site.

      I felt that a lot of the ideas in the first 3 chapters of Extreme Programming Installed were very sound. This past summer I interned at a major software company, and we used a variation of Extreme Programming. What made collaboration, programming, and communication so easy was the feeling of equality. Everyone was equal, whether they were the software manager, project manager, programmer, intern, or customer representative. That way, when there was any ambiguity or question, a person could easily get an answer instead of guessing at the code, as the authors describe in Chapter 3. And the emphasis on in-person, oral communication instead of email or phone calls is also correct; when it's not possible to walk to a person's office and ask a question, most of the time that communication will not happen, leading to problems in the development process.

Saturday, January 22, 2011

The Design of Future Things: Chapter 1

by Donald A. Norman


       The first chapter of The Design of Future Things presents Norman's basic argument for the entire book: machines and future technology must get better at communicating with humans, learn to understand their own limitations, and know when to relinquish control. Norman argues that true "artificial intelligence" is not possible in the near future. Current systems are not intelligent; they are instead a bank of possible outcomes to which designers have programmed the system's reactions. But it is not possible for us to program for every outcome - we will always forget at least one. Norman says that instead of programming each possible solution, we need to program our machines and technology to listen to us and respond with better communication, acting in a "symbiotic relationship." If machines can recognize what they're good at but also recognize their limitations, then we can take advantage of technological advances without the risk of being controlled or overruled by the decisions our creations make for us.

      The first chapter of this book was very interesting because it raises some hard questions about the growth of technology, specifically: "What happens when technology thinks that it's smarter than us?" We have all these new versions of traditional technology - cars that can sense other vehicles, washing machines that can detect the size of the load, and recommendation systems that claim to know our preferences - but what keeps these technologies from overruling the decisions that we make? I agree with Norman that we need limits on the authority of our technological devices, having them "recognize" that if a human turns a feature off or makes a decision, the human must know best. The problem is that human interactions are so subtle and vary so much between cultures, situations, and people that it is impossible to program this in accurately. There is no way to completely cover all your bases, and no way to program a device that can make decisions in a human way. While Norman agrees with this and says that such innovations are many years away, it is hard for me to imagine such a time at all, no matter how far in the future.

Tuesday, January 18, 2011

Introduction!


What is your name?
Aaron Loveall

What will you be doing after graduation?
Working for Cisco Systems in Richardson, TX

List your computing interests (HCI, information retrieval, databases, etc.)
I'm definitely interested in HCI, haptic and touch screen systems, and gesture and motion control. I'm also interested in mobile platforms, entertainment and media distribution, and gaming.

List your computing strengths (a language, focus area, etc.)
Extremely fluent in Java and C++. I also have LOTS of experience coding for Android phones and some experience in iPhone development. I am very good at debugging code and can usually figure out a solution to a problem by sitting and coding in large blocks of time.

What was your favorite computer science project that you worked on and why?
My favorite computer science project was the final project for CHI. We worked on an Android app for reviewing flash cards; you could add, edit, and delete flashcards on a remote website that communicated with the phone, and an easy touch-screen swiping interface let you view the cards. It gave me a lot of insight into how the phone worked and definitely added to my Android coding experience.

What was your least favorite and why?
My least favorite CS project was the final project for CSCE 441: Computer Graphics. It involved taking motion capture data and interpolating it using a system that we were just given with no explanation. It wasn't that I didn't like the subject matter - I actually did, a lot - but we were handed a system with 10,000+ lines of code that didn't make any sense. It would have taken weeks to go through the code and understand how it worked, so I just didn't do the project.

What do you see as the top tech. development of the last 5 years and why?
The top technical development of the last 5 years was taking the "computer" and all of its typical features and putting them in our pockets. The iPhone and Android devices now have most of the features that personal computers had 5 - 10 years ago, and are even faster and more intuitive to use than before. These devices have so much power and potential with their app stores that a normal person could get by with just one of them (for email, calling, internet browsing, videos, and music) and wouldn't even need a real computer.

Provide some insight into your management/coding styles. This could include your preferred coding method, how you use line breaks, what time of day you work best, or any other relevant programming-related facts
I code best by myself, and like to have a lot of control over the organization and structure of the code. I comment a lot and make sure that my code is always well-formatted and legible, because I am a little OCD and like things to be neat. I have a hard time sitting down and coding for just a short period of time; but if I'm in the right mood, I get most of my work done by programming for 5+ hours straight, as long as I'm in the right environment (it can't be too quiet, but things have to be happening around me).

Make sure to include a picture of yourself: