Monday, April 25, 2011

The Mythical Man-Month, Chapters 18 - 19

The Mythical Man-Month, Chapters 18 - 19
by Frederick P. Brooks, Jr.

       In Chapter 18 of The Mythical Man-Month, Brooks provides a summary of the major ideas of his book, which I recap below:

  • Chapter 1: Programming is rewarding because it is a creative task and involves critical thinking, but also is painstaking and can become obsolete quickly. Programming for a system takes much more time than programming a single component.
  • Chapter 2: The main reason programming projects go wrong is because of calendar time and scheduling issues; the man-month shouldn't be used for measurement and is misleading.
  • Chapter 3: A small team that resembles a surgical team is best, including a head programmer and a small group of others that assist him (editor, tool builder, documentation writer, etc).
  • Chapter 4: Your design must have conceptual integrity, and the best way to ensure that is to have a small number of heads involved in the design.
  • Chapter 5: The tendency in designing a second system is to over-design it. Instead, try to fix the most important issues without becoming ambitious and redesigning the whole thing.
  • Chapter 6: The architecture must be completely specified, both formally and in prose, and needs to be in the hands of everyone on the team.
  • Chapter 7: Communication is very important, and must be done both formally and informally. A product workbook is essential to have all documentation in one place.
  • Chapter 8: Estimate your tasks individually and not as a whole, because some tasks are more complex than others. Using a high-level language increases productivity.
  • Chapter 9: Representation is the most important thing in programming. When designing to conserve memory, have standards, but do not let those standards reduce functionality.
  • Chapter 10: A small base of documents should be the manager's best tool, used only sparingly; most of his time is spent out communicating with the team.
  • Chapter 11: Design to throw one away, because you learn through doing and not just planning.
  • Chapter 12: Keep one person to build tools, and choose wisely, because tools directly influence productivity and efficiency.
  • Chapter 13: Test each part individually before putting them together. Design top-down and continually define and reduce abstraction until you can start coding.
  • Chapter 14: Always keep track of schedule, so you can catch schedule slippage and do something about it as soon as possible.
  • Chapter 15: When writing documentation the user will see, make sure it's technical enough but is written in easy-to-understand prose.
       In Chapter 19, Brooks talks about the effect that The Mythical Man-Month has had over twenty years, and how the ideas inside have changed with the times. He identifies the issues that affect software development and that are relevant no matter how much technology has changed:
  • Conceptual Integrity: When the product is designed, it must have a conceptual image that pervades the whole system. This is important for the programming team because it gives a basis for everything that is included, including features and user interface. This is also important for the user because it lets them know the system has a clear use, has clearly-defined features, and that everything included has its own specified place in the system. The example Brooks gives is the WIMP interface, or "Windows, Icons, Menus, Pointing Device," which is the user interface system that Mac and PC computers still use today.
  • Featuritis: Software tends to have too much functionality added on. The easiest to use program is one that has a clear use, and doesn't try to do everything or be too ambitious. Programmers are always trying to make the system more useful for users, but often adding too many features makes the program unwieldy and actually less useful because users don't understand the software.
  • Incremental Design: The system should be designed in incremental parts, adding functionality each time. This way, there is always a deliverable to show a customer, and the product could be released early if funds run out (albeit missing some features). It also makes testing and debugging easier, because only a few changes are made to the code at one time, and there are fewer places to look for problems.
  • Shrink-Wrapped Software: Nowadays, there are SDKs, libraries, and standards for programming languages. This helps with standardization of interfaces and allows programmers to reuse code. Code reuse makes the system easier to build and understand, leaves the programmers more time for testing and user studies, and means that the software gets to the user faster.
Although the examples in the book are outdated, many of the issues explained still exist today, and the solutions given are still viable and relevant for these programming problems.



       In all, I thought this book was very interesting because of how similar it was to Extreme Programming Installed. Even though Brooks was working on operating systems and computer software before GUIs and window-based interfaces were common, he understood the problems inherent in scheduling software projects, communication, documentation, design, testing, and debugging. I'm pretty sure that many of these ideas inspired Agile development, and they will continue to inspire revolutions in software engineering paradigms in the future.

Friday, April 22, 2011

The Mythical Man-Month, Chapters 16 - 17

The Mythical Man-Month, Chapters 16 - 17
by Frederick P. Brooks, Jr.

       In Chapter 16 of The Mythical Man-Month, Brooks talks about the "silver bullet." Computer hardware continues to grow by leaps and bounds, but software is unable to grow at the same speed. Software projects continually fall behind schedule, run less efficiently than hoped, and miss their deadlines. Brooks argues that there is no silver bullet; "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity." There are two types of difficulties that plague software projects: essential difficulties (those inherent to the problem itself, such as the algorithms and design of the solution) and accidental difficulties (those that concern the development process, such as development machines, machine time, and programming language). Major essential difficulties include the complexity of software as compared to hardware, conformity (that software must conform to interfaces controlled by other parties), changeability, and invisibility (that the product is not easily visualizable). Many of the accidental difficulties have actually been resolved through developments in technology: high-level programming languages, time-sharing of machines, object-oriented programming, and artificial intelligence. While some advances have helped with the essential difficulties, such as rapid prototyping and incremental development, there is yet to be a silver bullet that allows for a large leap in software development efficiency.
       In Chapter 17, Brooks refines his original statement about the silver bullet with an update almost ten years later. After critics issued rebuttals to his theory, he revisits what comes closest to a silver bullet. First of all, visualization and diagramming techniques help the process heavily. But the most important thing is to focus on the quality of the software instead of just productivity in the development process. When you focus on quality, you learn how to produce great software and find techniques and shortcuts that end up helping the productivity of the team. This whole system is called the "Vanilla Framework." In all, although a lot has been developed to make software development more efficient, we shouldn't sit around waiting for a major breakthrough. Instead, we should focus on the incremental upgrades that can be made and continue to look for small ways to be more efficient.



      For the most part, I agree with Brooks about software development and efficiency. It's easy to continue making advances in the hardware of a system, but hard to achieve a large leap in efficiency in a software product. Because of the complexity of the system and how software is always changing (to keep up with the new hardware), it's hard to keep up. I definitely agree more with Brooks' revision of "No Silver Bullet": instead of focusing on large-scale, order-of-magnitude developments, we should focus on the small improvements that can be made. This includes higher-level languages, new debugging techniques, version control, more useful and capable IDEs, web development, and new forms of wireless and touch-based computing. If we continue to incrementally upgrade our software development tools and methodologies, we can still see great improvements over the development processes of the past.

Monday, April 11, 2011

The Mythical Man-Month, Chapters 13 - 15

The Mythical Man-Month, Chapters 13 - 15
by Frederick P. Brooks, Jr.

       In Chapter 13 of The Mythical Man-Month, Brooks talks about building a program to work: putting all the little pieces together, testing and debugging, and integrating into the larger system. The first important step is to make sure your design and specification are "bug-proof." Everything major in the project should be defined clearly and should be looked at by a third party to find errors and gaps that might be there. Some of the hardest errors to find are those where programmers of interacting systems assume different things about the code, whether it be the structure of a class or the way a message is formed. Top-down design is also very important; this procedure starts with high-level abstractions and sketches, and continues to refine the design down to the level needed by the programmers and implementers. Lastly, debugging is vital during the programming process. Whether it's done on-machine, in a batch setting, through memory dumps or snapshots, or interactively, careful attention should be paid to debugging as you go. When building a larger system, make sure that the smaller components have been debugged individually, so component-specific bugs can be ruled out.
       In Chapter 14, he addresses the issue of schedule slippage: why projects slowly get behind schedule, and tools to keep this from happening. The most important part of a programming project is its schedule. In that schedule, we have a list of milestones with concrete dates. These milestones must be specified so completely that not a single person can argue about what they mean, and must have measurable events and a specific date they must be done by. The more specific a milestone is, the harder it is to lie and say that it's finished. PERT charts are also important because they allow you to lay out the schedule, with probabilities that events will be finished by the due date. A PERT chart will also give you a concrete diagram of the dependencies between different parts of the system. Another problem is "the rug": when a small team slips behind, it won't always come to the attention of the main boss. Even though the team's manager may think they can fix the problem themselves, the slip should always be reported to the boss, who can use contingency plans to fix the issue. Lastly, the PERT chart should be used for reviews throughout the project, to find schedule slippage as soon as it happens.
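The scheduling math behind a PERT chart can be sketched in a few lines: given estimated durations and dependencies, compute each task's earliest finish time; the longest dependency chain is the critical path that bounds the whole project. The tasks and numbers below are invented for illustration, not taken from the book.

```python
# Minimal sketch of the math behind a PERT-style schedule.
# Task names and durations are invented for illustration.
durations = {"design": 2, "code": 4, "test": 3, "docs": 2}  # weeks
depends_on = {"design": [], "code": ["design"], "test": ["code"], "docs": ["design"]}

def earliest_finish(task, memo=None):
    """Earliest week `task` can finish, honoring its dependencies."""
    if memo is None:
        memo = {}
    if task not in memo:
        start = max((earliest_finish(dep, memo) for dep in depends_on[task]), default=0)
        memo[task] = start + durations[task]
    return memo[task]

# The critical path (design -> code -> test) determines the project end.
project_end = max(earliest_finish(t) for t in durations)
print(project_end)  # 9 weeks
```

A real PERT chart also attaches optimistic/pessimistic durations to each task to get completion probabilities, but the dependency arithmetic is the same.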
       In Chapter 15, Brooks emphasizes the importance of good documentation and explains the most important documentation needed for a software product. First of all, the user needs an easy-to-read description of the program, including the purpose of the software, what kind of input it takes, the format of the output, the different options the user has, exactly how to use those options, and how to make sure the program is working correctly. The user also needs documentation that helps them believe the program works correctly, including examples they can run, test cases, etc., that show different inputs and their corresponding outputs. When something goes wrong with the program, the user needs a description of how to modify and fix the issue; usually, full detail is required here, which could include a flow chart, descriptions of the algorithms used, or an explanation of the file structure of the program. He spends the rest of the chapter talking about self-documenting programs, which really just means "comments." Comments in the code let the programmer analyze the code and see the documentation at the same time, without having to look back and forth between the code and another document.



      Again, the stuff that Brooks has to say makes sense, but is kind of obsolete in our day. In his time, this information on high-level languages, interactive debuggers, and comments inline in source code must have seemed radical but extremely useful. It is interesting to read this book as a history lesson, because it accurately describes the techniques used before, and the modern-day solutions to those problems. Comments in source code definitely help, and having someone check over your design to check for gaps before implementing is definitely a good idea. I think the most important thing was the "top-down design" approach. It definitely helps to get a good picture of exactly what you want your code to do before bothering yourself with possible implementation strategies and how you will test the code.

Tuesday, April 5, 2011

The Mythical Man-Month, Chapters 10 - 12

The Mythical Man-Month, Chapters 10 - 12
by Frederick P. Brooks, Jr.

       In Chapter 10 of The Mythical Man-Month, Brooks addresses how much documentation a manager should keep for themselves, and which documents are most important. The concerns of any manager are the "what", "when", "how much", "where", and "who". In a software development setting, these correspond to the objectives and product specifications, the schedule, the budget, space allocation, and an organization chart that assigns tasks to different programmers and managers. It's important to have these documents because writing things down brings out inconsistencies and gaps in the design process. The documents also serve as a way to communicate the decisions that have been made to everyone on the team, because oral communication and emails may not have circulated to everyone. But the manager must realize that these documents cover only about 20% of the management task; the other 80% of his time is spent guiding and encouraging the programmers on the team.
       In Chapter 11, Brooks talks about the very first version of the software that is built. Usually, after some design and meetings, the programmers start working on the project and writing code. Algorithms are proposed and refined, and the team finds that the functionality works and seems ready to be written into a larger-scale program. Brooks says that teams should always "plan to throw one away." This means that when you first start programming, you should keep in mind that this first release is tentative and will get thrown away. Then you can start over with a better idea of what you want to accomplish, knowing what works and what doesn't. If you stick with that first draft, you'll end up releasing a slow, buggy, and hard-to-use system to the users, and end up having to redesign the product anyway. Brooks talks about how each system starts out "taking two steps forward and one step back," meaning that continued releases add functionality but are hindered just a bit by bug fixes. But after users get more familiar with the program, more and more bugs are found, and the system ends up taking "one step forward and one step back": no matter what functionality you add, the bug fixes end up eroding that functionality and the structure of the system. This is why you should throw away the first draft and start over, and not waste time upgrading software that is essentially already dead.
       In Chapter 12, he talks about the tools that a programming team will use, and how their selection and organization can greatly increase the efficiency of the project. First, he says that programmers shouldn't each use their own set of tools, because personal toolsets hamper communication and can cause differences in the code produced. Since each team works on different things, specialized tools per team are necessary, with a designated toolmaker who decides what tools to use and trains the team in how to use them. One important distinction is between the target machine and the vehicle machine: the target machine is what the product will run on, and the vehicle machine is the machine the development is done on. Programming libraries, source control, text editors for documentation, and the language used are all tools vital to the success of the project.



      Again, a lot of the ideas in this book are fairly outdated. In keeping with the XP tradition, I do like that Brooks proposes a smaller set of documents instead of the giant product book that he talked about in earlier chapters. This way you can spend more time programming and less time updating all of the documentation when design issues come up. I agree that the first draft of a product should usually be thrown out, and that it's good to start again. The problem is that when you've worked that hard on a project, it's hard to throw it away, and also hard to start over without doing the exact same work that you just finished. Lastly, he spends a lot of time talking about scheduling time on shared machines, which isn't as relevant today. Maybe there are still groups that do research on supercomputers, but today each person usually has their own computer to program on, and possibly their own server to test on as well.

Tuesday, March 29, 2011

The Mythical Man-Month, Chapters 7 - 9


The Mythical Man-Month, Chapters 7 - 9
by Frederick P. Brooks, Jr.

     In Chapter 7 of The Mythical Man-Month, Brooks talks about effective communication within a programming team. He gives the example of the Tower of Babel, in which the workers had leadership, materials, and sound engineering design, yet failed because they couldn't communicate with each other. In a programming project, it's important to have many types of communication, including informal communication (on the telephone, walking over to someone's cubicle), meetings (which should be daily), and documentation. Brooks suggests that the project use a "Project Workbook" as the primary home for documentation. This workbook contains all of the documentation ever produced for the project, is updated daily, and is displayed so that the programmers can see what changes were made along with a description of those changes. He also talks about different organizational roles within the project, the two most important being the producer (who handles assigning work, making sure everything is getting done, tracking, etc.) and the technical director (who makes executive-level decisions about technical aspects of the system). In small projects, both of these roles can be held by the same person. In really large projects with large teams, it's best to let the producer be in charge and keep the technical director out of management duties, while still having authority over technical decisions. In smaller teams, it's useful to let the technical director be in charge and have the producer answer to them.
     Chapter 8 talks about estimating the time to finish a programming task, and general programming productivity. Brooks has found from experience that scaling up estimates for small programming tasks is ineffective when estimating a large project. For example, if a single programmer estimates that a task will take about 2 weeks, they will almost always need 4 weeks, double the time, to finish it. This is because not all working time is spent programming: the programming portion of the task still takes the 2 weeks as estimated, but the other 2 weeks are spent on machine downtime, sickness, meetings, paperwork, setting up systems, and the like. Brooks also presents data about the relationship between program complexity and the time to complete the task. As expected, more complex problems that required interactions between program modules and between different programmers took longer. And in the last cited study, programming in a higher-level language rather than machine code made the programmers five times as productive.
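The doubling rule above amounts to scaling a pure-coding estimate by the fraction of calendar time actually spent coding. A tiny sketch, treating the 50% coding fraction as the book's anecdote rather than a universal constant:

```python
# Brooks's observation as a rule of thumb: roughly half of a programmer's
# calendar time goes to meetings, machine problems, paperwork, and so on.
# The 0.5 coding fraction is the book's anecdote, not a universal constant.
def calendar_weeks(coding_weeks, coding_fraction=0.5):
    """Convert an estimate of pure coding time into calendar time."""
    return coding_weeks / coding_fraction

print(calendar_weeks(2))  # a "2-week" task really needs 4.0 calendar weeks
```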
     Chapter 9 talks about program size, its associated cost, and how to reduce the space a program uses. The size of a program is directly related to cost: whether it was the "memory-rental" cost per month of running a system in the past, or the cost of the memory in the users' computers that the program will run on, size plays a factor. Size inherently isn't bad, but unnecessary size is. When designing a software product, space budgets should be built in: not just for the core memory that the program will use, but budgets for the disk, the cache, the stack, and everything included. It's also important to look at the features required of a section of code or module. If you specify that a module is going to be very big, you also need an exact specification of everything that the module will do; usually, you'll find some memory in use that is not necessary to meet those goals. And lastly, you need to look at the representation of your data. Often you can find things that could be stored more efficiently in a different data structure, or notice that you store the same data more than once.
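Brooks's point about representation can be shown in miniature: the same 10,000 yes/no flags stored two ways. The example is mine, not the book's, and the exact byte counts printed depend on the Python build, so treat the numbers as illustrative.

```python
import sys

# The same 10,000 yes/no flags, stored two ways. A different
# representation of identical data can be dramatically smaller.
n = 10_000
as_list = [False] * n               # one pointer-sized slot per flag
as_bits = bytearray((n + 7) // 8)   # one *bit* per flag, packed into bytes

print(sys.getsizeof(as_list))  # tens of kilobytes
print(sys.getsizeof(as_bits))  # just over a kilobyte
```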



     I agree with Brooks when he talks about estimation; usually when you estimate two weeks, you don't take into account the time for breaks, time off, meetings, and dealing with problems in the development system. When Brooks explains techniques to reduce space, it just doesn't seem as relevant today. Now that all systems have gigabytes of memory and more, it's hard to justify spending all your time making everything memory-efficient. Yes, fast algorithms on slow machines beat slow algorithms on fast machines, but we have so much computing power today that it's not as pressing. And lastly, I do not agree with the Project Workbook; it seems like too much effort to record every change when you are constantly making changes. Something like the XP method of programming first and designing along the way would be more effective.

Thursday, March 24, 2011

Extreme Programming Installed, Chapters 22 - 24

Extreme Programming Installed, Chapters 22 - 24
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

     Chapter 22 of Extreme Programming Installed is about defects and bugs in your code. The most important thing about defects is the way they are reported: whether you include an automated defect-reporting tool within the software or let the customer email them in, defects need to be quickly communicated and dealt with. There is also the issue of priority. If the defect is a minor fix, it can be written on a card and scheduled for the next iteration as if it were a regular user story. If the defect is more serious, the customer and programming team need to coordinate on priority: while a programmer is fixing a defect, they're not working on new code for the iteration, so a few small features may need to be dropped to accommodate the new time requirement. It's also really important to try to prevent future defects. If a defect makes it to the customer, that means it made it past the programmers' unit tests and also past the customer's acceptance tests. Examine your current tests and write a test that makes the defect appear. Then fix the defect, and now you know that future defects related to this one will be caught before they reach the customer.
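The discipline described above, writing a test that exposes the defect before fixing it, looks something like this in practice. The `parse_price` function and its defect are hypothetical, invented purely to illustrate the workflow:

```python
import unittest

def parse_price(text):
    """Parse a price like '1.05' into cents.
    (Hypothetical: imagine the shipped version dropped the cents.)"""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents or 0)

class PriceDefectRegression(unittest.TestCase):
    def test_reported_defect(self):
        # Written to FAIL against the buggy version first; once the fix
        # makes it pass, it guards against this defect ever coming back.
        self.assertEqual(parse_price("1.05"), 105)

    def test_whole_dollars(self):
        self.assertEqual(parse_price("3"), 300)
```

The point is the order: the failing test comes first, so the fix is verified and the defect stays fixed.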
     Chapter 23 summarizes all of the concepts of Extreme Programming that make software projects more efficient and on time. XP stresses communication between programmers, managers, and customers, and mandates that a customer be on-site for in-person communication. Testing is a heavy part of the development process, and any code that is going to be released must pass all of the unit tests, 100%. Programmers code in pairs, with one person writing the code and the other watching over their shoulder, asking questions to make sure they both understand the meaning and catching as many errors as possible. Planning should be minimal at the beginning, consisting just of taking features, converting them into user stories, and making initial estimates. XP stresses iterative design and programming, where you are constantly tracking your project and updating design decisions and coding estimates along the way.
     Chapter 24 is about how to communicate with the customer and manager about the programmers' estimates. A lot of the time, management doesn't completely understand the difficulty and complexity of the software being built. As the authors explain, the programmers are usually forced to say "I'll try," and then end up working overtime and stressing themselves out to reach the deadline. Usually, even with all the hard work and stress, the deadline still isn't met. Instead, you should sit down with your customer and managers and explain the estimates. Explain how many points your team can finish in a week, show how many points each task is estimated at, and then let them prioritize and remove some less important stories from the iteration. Although they'll probably be frustrated and mad, you now have a reachable deadline and can work toward it without stressing out the whole team and its managers.



     In all, I thought Extreme Programming Installed was a very interesting book that addresses many of the issues with the programming and software development paradigms used today. I agree that design should be done along the way, because as you work on a software project you learn more about possible implementations and make better estimates as you go. I also like the idea of open communication: waiting for an email from someone will just hold you back and keep you from continuing your work on important issues. It's a lot easier to just walk down the hallway and get a clear answer right away so you can continue your work. Extreme Programming, if adopted by more development teams, would mean more useful software on the market that meets deadlines and has fewer defects and bugs, all while allowing the programming team and managers to do their jobs efficiently, without stress, and enjoy the work along the way.

Wednesday, March 23, 2011

The Mythical Man-Month, Chapters 4 - 6

The Mythical Man-Month, Chapters 4 - 6
by Frederick P. Brooks, Jr.

     In Chapter 4 of The Mythical Man-Month, the author presents the concept of "conceptual integrity". This means that your system should follow one unifying design principle, and not be just a lump of varying functionality. Conceptual integrity is extremely important to ease of use, and it is better to leave out some functionality than to sacrifice it. It is also important to separate the "architecture" of the system from the "implementation" of the parts. The ones who ensure the integrity of the system and its design are the architects. This should be a small group of people, so as to keep the design on one track, and they should not concern themselves with how each section might be implemented. If too many of the programmers who are implementing the system get involved in the discussion of the architecture, not enough time is spent on the implementation, which means the system might not be built as efficiently. The programmers still get to take part in the creative process, because they can decide on the implementation and come up with elegant and efficient solutions to the specifications given by the architects.
     In Chapter 5, Brooks addresses the issue of the 'second-system effect'. Usually, the first system that a group of programmers designs is simple, efficient, and functional, because they have made sure to keep conceptual integrity. But when designing the second system, they try to fix all the mistakes they made in the first one. The problem is that this usually results in feature overload: even though the new system will be more functional and useful to the users, it is usually an unwieldy monster that feels bloated and inelegant. Another problem is that a lot of focus might go to refining a feature that wasn't quite right in the first iteration, even though that feature is now outdated. For example, the programmers might spend a lot of time making sure that the FIFO scheduler of the operating system is efficient and uses less memory, when they should have spent that time working on the next generation of scheduling algorithms that would be faster and more efficient.
     In Chapter 6, he talks about how to ensure conceptual integrity when you may have one hundred different programmers working on the project. The important document the architect releases is the manual; it must lay out all of the functionality that will be provided, including everything the user will see, but must not provide any of the implementation details, leaving those to the programmers. The architect can also specify the system using a formal definition, most usually described via an implementation. The problem with this is that it sometimes over-specifies how things work in the system. And defining by implementation sometimes means that developers treat errors and bugs in the system as defined, standard output, causing problems with previously written programs when you update and patch the system. Meetings are also very important, but should have a structure and a formal method for communicating and resolving disputes. Keeping a telephone log of all questions to the architect and their answers provides a history to look up whenever design issues come up again.

     I didn't like this section as much as the first three chapters. It talked a lot about conceptual integrity, system architecture and design, and communicating these design paradigms to each member of the programming group. While the ideas are good, and it is important to have the architect formally define the design so you have coherence, I think the design paradigm in XP is a much better process. Design should be minimal at the get-go, because usually it's not possible to know all the features and constraints at the beginning of the project. Yes, an architect (or customer, in XP) should define the features they want, but those things shouldn't be set in stone. Instead, they should be revisited continually throughout the development process and changed to fit estimates and progress.

Monday, March 14, 2011

Extreme Programming Installed, Chapters 19 - 21
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

Chapters 19 - 21 of Extreme Programming Installed are about steering a project and making changes to schedules and estimates over the lifecycle of the software. When you make estimates at the beginning of planning, it's not always easy to be completely accurate, because you just don't know how much time is required until you start coding. When steering your Extreme Programming project, you need to focus on projected estimates versus actual coding time, changing priorities in the code, and roadblocks that keep a programmer from completing their tasks.

According to the authors, it's important to keep tabs on and track the project so you can steer it to success. First of all, the most important thing is to get user stories done. Instead of almost completing all of the tasks, it's better to finish as many as you can and either postpone or throw out the tasks that aren't as important. You should also take the time as you're programming to improve your estimates. If a task that you estimated at 2 weeks ends up taking 3 weeks, you can use that knowledge when estimating similar tasks in the future. To keep track of these estimates and progress within the team, it's useful to have a team member whose job is to "track" the rest of the team. They are responsible for checking in with each team member on the status of their tasks, and for being there to make decisions if there is a problem with development. For example, if a team member is struggling with a problem in their code, the tracker can decide whether to assign the task to another person or change priorities.

And lastly, it's important to continually steer the project toward release. Keep track of how much work is getting done each iteration; if you can only finish 32 points in an iteration, don't plan to continue with the 40 points scheduled for the next iteration. Instead, take out some user stories and adjust the plan so the project is on track for a successful release. And always make sure you choose the most valuable user stories to complete - that way, you always have a project that could be released and be useful at any point in its lifecycle.
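The velocity rule described above - plan no more points than you actually finished last iteration, and fill the plan with the most valuable stories first - can be sketched in a few lines of Python. This is my own illustration; the story data and function names are not from the book:

```python
def plan_next_iteration(stories, last_velocity):
    """Fill the next iteration with the most valuable stories first,
    without planning more points than the team actually finished
    last iteration (its velocity)."""
    planned = []
    budget = last_velocity
    # Consider the most valuable stories first.
    for story in sorted(stories, key=lambda s: -s["value"]):
        if story["points"] <= budget:
            planned.append(story["name"])
            budget -= story["points"]
    return planned

stories = [
    {"name": "login",   "points": 20, "value": 10},
    {"name": "reports", "points": 16, "value": 8},
    {"name": "themes",  "points": 12, "value": 3},
]
# The team only finished 32 points last time, so don't plan all 48.
print(plan_next_iteration(stories, last_velocity=32))  # → ['login', 'themes']
```

The 16-point "reports" story gets postponed even though it's more valuable than "themes," because it no longer fits in the 12 points left of budget - exactly the "take out some user stories" adjustment the authors describe.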


I definitely agree with this process for updating your tasks and schedule along the way. It's not possible to always plan and design everything before you start doing the programming; it's so much easier to start programming and get a better understanding of the project and possible implementation avenues and then adjust your estimates. Being flexible makes it really easy to adjust and make sure your project is successful. And I definitely agree that when a programmer gets stuck on something that other people should step in. A lot of the time, you just need another set of eyes there that might be able to look at the problem from another perspective.

Monday, March 7, 2011

The Inmates Are Running the Asylum, Chapters 1 - 2

The Inmates Are Running the Asylum, Chapters 1 - 2
by Alan Cooper

     In the beginning of The Inmates Are Running the Asylum, Alan Cooper explains that any item crossed with a computer equals a computer. An airplane has lots of mechanical parts, including the engines, the seatbelts, and the bathrooms, but the total behavior of the system is controlled by the behavior of the computer inside. Even if everything else, mechanical or human, is working correctly, the computer can have an error and halt the progress of everything else in the system. This is because current computer systems don't put interaction with humans first. Whether it is an embedded computer system inside a submarine or airplane, or a desktop computer that you use at home, the programming and interaction were designed by a programmer. Because of this, systems fail without giving proper warning messages, are hard to use in general, or do things for reasons you can't understand. Cooper argues that the solution is "interaction design" before programming design. We need to design how the system interacts with its human users, and how it communicates and behaves with them, before we design how the system will be programmed.
     Cooper also introduces the concept of "cognitive friction." Cognitive friction results whenever the human intellect has trouble understanding the complex, constantly changing system of rules in computer systems. For example, the numeric buttons on the microwave don't have one single function; in "cook time" mode, the numbers correspond to the amount of time you want your food to be cooked, but in "cook power" mode the numbers correspond to the amount of cooking power to be applied during that time. It is interaction paradigms like this that cause cognitive friction for the average user. The problem is that the people who design this interaction are programmers themselves; they understand how complex systems work because they have studied this field, and do not think about how users will understand the interaction. They are the "apologists," the ones who fight for software and explain all the good things that come from it, and their users are the "survivors," those who learn just enough to get by but never enjoy using the system. Cooper calls for an interaction designer who is not a programmer and who represents the public and possible users. Since software today is like a "dancing bear" - it's amazing that the software can do the task at all, even though it does it very poorly - having an interaction designer ensures that systems are protected from the design of the programmers and can be made easier for average users.



     I really don't agree with a lot of things that Cooper says. I know that there are lots of errors in the programs we write, and lots of times they can be devastating and fatal. But having more interaction design in the airplane navigation system that didn't alert the pilot of the fatal course change wouldn't have solved the problem. Extensive testing is more important for errors like that, where a key facet of the underlying functionality did not give correct feedback. But I do agree that having an outside force in the construction of interactions in desktop and other low-maintenance software can be effective, because this person doesn't know the low-level details and can look at the system without bias. Still, I really don't like the way Cooper says programmers never understand real interaction. I am a programmer, but when I'm not programming, I don't view the world as a programmer. Just like anyone else, I'm not always able to understand new appliances or new things I'm exposed to. It has more to do with immersion and experience, and a user needs to learn how to use the system, because not every complex system can be made a lot easier for all users.
    

The Mythical Man-Month, Chapters 1 - 3

The Mythical Man-Month, Chapters 1 - 3
by Frederick P. Brooks, Jr.

     In the first chapter of The Mythical Man-Month, Frederick Brooks, Jr., introduces the field of programming and its inherent benefits and problems. Programming is rewarding and fun because we're building something tangible, something that users in the real world will experience. The act of creating complex interactions and interwoven parts is intellectually stimulating, as is the learning that takes place along the way. But programming large-scale products and systems usually runs into the "tar pit." Large-scale projects get sucked down and miss calendar deadlines, have to drop features, or fail completely. Even the most ambitious and insightful programming projects can fall victim to the programming tar pit.
    Brooks explains that more programming projects fail because of calendar deadlines than for any other reason. This is because estimating progress is extremely hard within a software team. Usually the team will confuse effort and time with progress, either because the team is working hard but not actually producing usable code, or because scheduling and progress are tracked poorly. And when the project gets far behind schedule, the first thing a programming manager will do is add more manpower. Brooks argues that men are not interchangeable with months, and that using the "man-month" to estimate software development is a dangerous and misleading practice. Because most large-scale systems have sequential dependencies - building or testing one part of the system depends on the progress of another worker - just adding more men isn't going to solve the problem. You will have to take time to train the new workers in the task, and the additional communication required (especially in a complex system) will actually make your task later than if you hadn't added any new programmers. The number of months required for a task depends on its sequential constraints, and you can't make that number any smaller by adding more programmers.
    The obvious solution is to do all programming tasks on smaller teams, to reduce the time needed for training and communication. But with large-scale products that might need the equivalent of 5000 man-months to complete, this is not feasible. Brooks presents a model to cut a large system into separate tasks and reduce the number of people who are in charge and have to communicate. The "surgical team" system is modeled on the interactions in an operating room: the surgeon is the chief programmer, who does most of the work. The copilot is his right-hand man, there to give advice and sometimes do some coding. There are also secretaries, language lawyers to advise on possible implementation schemes in the programming language, and editors who manage the product's documentation. In all, the surgical team paradigm can reduce the communication among 200 workers to communication among only 20 "surgeons," making the project more efficient.


     I thought the first 3 chapters were particularly interesting and insightful. These are the same kinds of things written in the Extreme Programming Installed book that we're reading as well, but this book was written in 1975, when there weren't a ton of software companies. I agree that adding more people to a team makes a project much harder, because I am always happier when I'm working on a small team or by myself. With larger teams, it's really hard to coordinate who is working on what, meeting times, and questions about code. I really like the idea that in a large-scale system we have lots of small groups that are assigned tasks. That way, each group can focus on its own task with minimal communication with other groups, and one person can be put in charge as a delegate to the other teams.

Saturday, February 26, 2011

Extreme Programming Installed, Chapters 16 - 18

Extreme Programming Installed, Chapters 16 - 18
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

     In Chapter 16 of Extreme Programming Installed, the authors summarize a list of things that you should and should not do in the Extreme Programming methodology. Mainly, they focus on the issue of designing for the project. Design should not be done all at the beginning of the project; design should be done throughout the project. Requirements change, and as you start writing code you start to understand exactly what the customer meant and how you can implement that section. Everyone should be involved in design, and there should be lots of communication throughout the process. Instead of producing giant design documents at the beginning, layout diagrams, or freezing the requirements before coding is even started, design a little bit and update as you go. Usually if you try to do everything at the beginning, it will all change and become obsolete anyway. Don't waste your time if you're just going to throw it away. And always code for "today" only, not for "tomorrow." This means that you follow the customer's priorities exactly and don't try to generalize or expand the program to make it easier to add things later. It turns out that if you design simple and easy-to-understand code then adding things later will be very easy.
     Chapter 17 focuses on the estimation of tasks, and how experience can help you with that estimation. When estimating tasks, you won't always be right at the beginning. Just give your first estimate and go with it. As you continue to program for that task, watch the amount of time that it takes to actually code the solution. Don't include time thinking, or planning, or drawing any diagrams. If you over or underestimate, learn from your mistakes so that you can better estimate tasks in the future. Once you learn to estimate the size of tasks accurately, it is really easy to decide that a new task is equal, half, or double the time of the last task that you completed.
     In Chapter 18, the authors focus on how to track programming, design, and project progress for the programmer, customer, and manager. In XP, it is important to track a number of different things: scope, which includes the number of stories total, in progress, and completed; quality, which concerns the number of tests that have passed on the code over time; and time, concerning the release schedule and the iterations that have been completed. Nothing else needs to be tracked. The success of an XP project is measured in Resources, Scope, Quality, and Time. You also need to watch your team closely and how they are acting. If they seem nervous, tense, and mad at each other, then something is wrong, and this will tip you off before you notice it in your tracking metrics.

     I definitely agree with the do's and don'ts that the XP authors put forth. When you do all of your design at the very beginning, you don't really know everything the coding will entail. In my experience, you need to sit down and start writing code. Even if that code doesn't make it into the final product, you gain a much better understanding of the steps required to finish the solution. Why write down a possible procedure for the programming and then later realize it's not feasible and you must find another way to do it? All you have done is waste the time it took to write the requirements document. Instead, try a little bit of coding to tell you whether it's possible, and write the documentation once you're sure you can do it that way. That way, you still have the documentation to keep you on the right track, but you're also sure that the approach you wrote down is feasible.

The Design of Future Things, Chapter 6

The Design of Future Things, Chapter 6
by Donald A. Norman

     The sixth chapter of The Design of Future Things is the last chapter in the book that gives design suggestions. The final topic Norman addresses is proper feedback. Feedback is essential to inform the user of what is happening with a device, whether it succeeded or not, and what the user should do next. With completely autonomous devices in the future, it is an even bigger mystery to normal consumers how a device works or why it's doing what it's doing. Proper feedback can ensure that new users understand and adapt easily to the new technology. Norman lists six reasons that feedback is important: reassurance, process reports and time estimates, learning, special circumstances, confirmation, and governing expectations.
    An issue with new technology is that the feedback is arbitrary. A red light might turn on on the surface of the device, or it may start emitting a loud beeping sound, but what does that mean? Usually the user can't figure out what these forms of feedback mean without either using the device for a long time or reading the manual. Norman, as in earlier chapters, emphasizes that feedback should come naturally from the mechanics of the device. For example, early cars didn't have power steering, so the driver could feel the road and adjust accordingly. When power steering was introduced, drivers had problems and didn't feel safe because they couldn't feel the road anymore. So artificial feedback - rumblings and bumps like those of driving before power steering - is now introduced to give the driver that feel back. This is much more effective than if car manufacturers installed special lights or beeps to try to give the same feedback.
    In the end of the chapter, Norman summarizes his design advice for the creation of future systems and machines:

  1. Provide rich, complex and natural signals.
  2. Be predictable.
  3. Provide a good conceptual model.
  4. Make the output understandable.
  5. Provide continual awareness, without annoyance.
  6. Exploit natural mappings to make interaction understandable and effective.
He finishes by explaining that most of the technology that he mentions in the book is far from being realized; the most important thing that must be overcome is the lack of intelligent communication between machines and humans, and the lack of common ground.


     While this chapter (and most of the chapters in the book) seems to cover material he already talked about, Norman does a good job summarizing his design rules for feedback and communication between devices. I agree that the beeps and lights on our devices today are pretty arbitrary and wouldn't make sense to a new user. But not everything we use has a natural form of feedback. For example, the microwave makes a little bit of noise while it's cooking, but sitting on the couch with the TV on, I wouldn't hear the microwave turn off without the signature beep. I don't think the issue is that all feedback should be natural and tied exactly to the operation that is happening; instead, we need to give users more education and get them used to the feedback we will be giving them. It isn't arbitrary if you have learned to associate a light or sound with a particular state the device is in. And a national or world standard on colors and sounds for appliances would help, because the user would only have to learn the paradigm once and it would apply to all of their devices.

Monday, February 21, 2011

The Design of Future Things, Chapter 5

The Design of Future Things, Chapter 5
by Donald A. Norman

     The fifth chapter of The Design of Future Things focuses on the nature of automation and the future of design with regard to automation. The problem with pure automation, according to Norman, is that machines try to guess human intention and emotion, and often fail. Automation works when the task is clearly defined and doesn't have a lot of variation. For example, a "smart home" that controls the temperature and lighting in the house based on feedback from a human user is successful automation, as are the automated transit cars that transport people between terminals in an airport. But, as Norman illustrates, automation does not work when the task has a lot of variation and unpredictability. In an automatic baggage-sorting system at the airport, the bags are all different sizes and weights, and the tags are in different locations. An automated system doesn't work in a situation like this, and human input is needed instead.
     Norman calls for "augmentation" instead of automation. With augmentation, intelligent systems provide help and suggestions for difficult tasks, or make boring tasks easier and faster. These systems are not intrusive because the human always has a choice whether to use the augmentation or not. We should build machines that support human activities instead of doing the activities for us, and that help us finish things more easily and quickly. We should be glad that we used the system, whether automated or augmented, instead of being stressed out by it. Future designers should look to automate things that are hard for humans to do, or that are dirty or dangerous, and should look to augment the everyday activities humans regularly perform, like personal hygiene, entertainment, and controlling the environment of their home.


     I thought this was one of the most sensible chapters Norman has written so far. In it, he talks about how machines need to complement users, not try to do everything for them. In previous chapters he seemed to talk only about cars that drove themselves, and angry, stubborn intelligent devices that did what they thought was best instead of listening to their user. I believe, like Norman, that the future of technology is assistive devices that help us do things more easily for ourselves. Instead of doing the task for us, the device lets us do the task safely and more efficiently. That way, humans always have a say in whether the task should be done, we have more control over the outcome and the procedure, and we can stop at any time, always overriding the machine.

Extreme Programming Installed, Chapters 13 - 15

Extreme Programming Installed, Chapters 13 - 15
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

     Chapters 13 - 15 of Extreme Programming Installed focus on testing and releasing your code. The most important point of Extreme Programming is that testing is done first, along the way, and at 100%. This means that before you even write any code, you first write a test for that object or class. For example, if you're writing a class that will compute the sum of a list of numbers, you can first test what happens with an empty list. The test will fail, so you write the code that makes the test pass. After that test passes, you might test what it outputs with a single number in the list. Again the test will fail, and you write the code that accounts for that case. The point is that we test first, and bit by bit as we are writing the code. And every time we add code, we make sure that 100% of the unit tests pass before we are allowed to release the code.
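The test-first rhythm described above, using the list-sum example from the chapter, might look like this with Python's unittest module (the class and method names are my own sketch, not from the book):

```python
import unittest

def list_sum(numbers):
    # This body was grown test-by-test: the empty-list test forced
    # "return 0", then the one-element test forced the summing loop.
    total = 0
    for n in numbers:
        total += n
    return total

class TestListSum(unittest.TestCase):
    def test_empty_list(self):
        # Written first, before list_sum existed; it failed, and we
        # wrote just enough code to make it pass.
        self.assertEqual(list_sum([]), 0)

    def test_one_number(self):
        # Added next - again failing first, driving a bit more code.
        self.assertEqual(list_sum([5]), 5)
```

Running `python -m unittest` on this file executes both tests; in the XP cycle you would only check code in once all of them pass.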
     When we are writing code and testing, we also make sure to code "by intention." This means that we write the code based on what we need to do, not exactly how we're going to do it. For example, computing the roots of a quadratic equation takes a couple of steps. First we compute the discriminant, then take its square root (if it's not negative), and then compute (-b ± √discriminant) / (2a). When coding by intention, we first make function stubs for each of the steps (compute discriminant, etc.). Then, in main, we use each of those functions to calculate the roots. After running the tests for the whole system, they will all fail. Only then do we worry about exactly how we compute the discriminant and all of the other details. The main point is that we programmed our intentions first.
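The quadratic-roots walkthrough above, programmed by intention, might end up looking like this (a sketch of my own; the helper names are not from the book):

```python
import math

def discriminant(a, b, c):
    # Detail filled in only after the intention-level code existed;
    # it started life as an empty stub that made the tests fail.
    return b * b - 4 * a * c

def quadratic_roots(a, b, c):
    # Intention first: this function reads as the steps themselves,
    # with the "how" pushed down into helpers.
    d = discriminant(a, b, c)
    if d < 0:
        return []  # negative discriminant: no real roots
    root = math.sqrt(d)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0 → [2.0, 1.0]
```

The top-level function never changes as the helpers get filled in, which is exactly why the intention-level code is worth writing first.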
    The last chapter in this section dealt with code releasing and version control. The most important point is that Extreme Programmers release often, usually more than once a day. They don't wait for other people to finish the work that their code may depend on, and don't even really check to see if someone else is editing the file at the same time (at least past what the versioning software will do). In practice, you will spend more time trying to avoid conflicts instead of just working and addressing conflicts if they do show up. The versioning software that you use should make committing releases easy and quick; if programmers are releasing their code less often because of the versioning software, you need to change the software and make it easier for the user.


     I think that test-first development is a great idea. Writing all your tests first helps you understand exactly what you need to accomplish in your code and helps keep you on track. But, in my experience, it is horribly boring and annoying to do. As a programmer, you want to start working on the real code, on the code that makes the product work and what customers will directly interact with. But instead you're writing a test for a class that you haven't even written yet. Is that test going to fail? Well, duh. It's hard for me to see the value in writing a test before you have even written any code. Maybe just writing it, but running the tests? It doesn't take a genius to know that they're going to fail. I think it would be much better to write some of the code and then start testing on the way. That way you're not wasting your time writing tests that you know are going to fail but instead you're writing tests to help you find problems with code you are currently working on.


    

Sunday, February 13, 2011

Extreme Programming Installed, Chapters 10 - 12

Extreme Programming Installed, Chapters 10 - 12
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

      Chapters 10 - 12 of Extreme Programming Installed make the transition from design and planning to the actual task of programming for the user stories. The first step is to have a quick design session, meant to eliminate the fear that you may not know what to do. This design session shouldn't last more than 30 minutes, and usually consists of drawing a few UML diagrams or talking about different implementation possibilities. But usually the best thing is to get started programming right away; it's hard to know exactly how anything will work, or how you will build the system, until you sit down and start programming.
     A key aspect of XP programming is that the code is owned by everyone in the programming team. Even if you created a class, anyone else can make changes to it. In this way, if a class you're using isn't exactly written the way you need it, you can change it yourself, instead of having to go through the creator. The way that this system works without a lot of conflicts is that the programmers release their code often, sometimes up to three times a day, and each time they make a change they run unit tests. These unit tests must always pass 100%, and the programmer must fix all the errors and make all tests pass before they release. In this way, conflicts between programmers changing the same file can be found and addressed quickly. Another aspect of XP programming is that we program exactly for the specifications, and as simply as possible. We implement the minimum number of methods, in the simplest way possible, commenting the code and naming functions and variables to make it easy for anyone to understand the code. In this way, it makes it much easier to come back and add new methods or edit the way a function returns data. This also means that the programming team must have a coding style standard for indentation, naming, capitalization, etc to make the code easier to read.
      The biggest change from traditional software development in XP is "pair programming." In pair programming, all code is written by a pair of programmers working at the same computer. When two programmers work together, they end up producing more code, don't get tired as fast, and two people understand the code rather than just one. There are two roles in this paradigm: driver and partner. The driver is the one typing the code at the computer, and holds most of the general algorithm in their mind. They are responsible for keeping the partner engaged, explaining the code as they go. The partner is responsible for watching the code for typographical errors, making sure the driver is following the algorithm correctly, and making the driver clarify and fix things that are hard to understand. Driver and partner should switch roles often so they don't get bored or stale. Using pair programming, the programmers work much more efficiently, get a lot more done, and stay engaged more easily.

     The idea of collective code ownership seems like an extremely good idea to me. When you know that you can always fix all the problems that are in the way of your code development and make other pieces of code work for you, you can finish your code much faster. The only problem is minimizing conflict between intentions and other programmers' code. While they say that you can manage this by testing and releasing all the time, it still seems like two programmers could change the same thing back and forth without agreeing on a solution they can both use. Pair programming also sounds awesome; I know that when I'm working on code I have problems staying engaged and always working. With another person there, they could fix my mistakes much earlier and keep me on track.

Saturday, February 12, 2011

The Design of Future Things, Chapter 4

The Design of Future Things, Chapter 4
by Donald A. Norman

     The fourth chapter of The Design of Future Things is about machines making decisions and taking control back from their human users. Newer technologies have more and more "automation" options that take the difficult or boring tasks away from the human user and hand them to the machine. Norman argues that technology has mostly been under human control - while things are automated to make life easier, the human user has always had control over the operations, including stopping, starting, and making changes to the task the machine is doing. But in today's world, automation has taken over tasks that previously required human input, which can be a dangerous thing. Intelligent devices are useful and effective when they have well-specified tasks, or in a setting where the people who control and use the devices are specialized and educated. But when these intelligent devices are used in the home or the car, by average citizens, it can be dangerous. The average human doesn't know how their car works, or how it might decide how far to stay behind the car in front of it. Because of this, when the automation fails, the user may not notice or know what to do without the machine's help, causing an accident.
     Norman argues that automation of our future machines must be all or nothing; either the task is completely manual, or it is fully automated and reliable. Even though partial automation has reduced accidents and made our lives easier, the transition from automation to manual control causes more dangerous accidents than before. When a system is usually automated, the user will not be paying close attention and will not have good situation awareness. Then, when there's a problem, they're distracted and can't react quickly enough. Norman believes that full automation is coming, but that the road from manual to fully automated systems will be hard traveling.

     In a way, I agree with Norman's view of automated systems - when you get used to how something is just "done" for you by the machine, you aren't watching for errors or accidents that could occur. When I'm cooking something in the microwave, I don't watch to make sure it is cooking correctly, with my hand on the power cord ready to unplug it if there's a fire. Instead, I trust the automated cooking, and I might walk away for a few minutes. Then, if something catches on fire, I won't be in the room, and it could cause my whole house to burn down. Because we trust the automation so much, the average user isn't going to sit and watch the machine the whole time. Although I think full automation is scary (trusting a car completely to drive you somewhere), it isn't implausible. I just think the best users for those kinds of machines are the ones who are born and grow up with them. It will be hard for current drivers to trust a new machine, but users who grew up with the technology will be the most comfortable with it - and that applies to really any kind of new technology.

Tuesday, February 8, 2011

The Design of Future Things, Chapter 3

The Design of Future Things, Chapter 3
by Donald A. Norman

     In this chapter, Norman talks about "natural interaction" and how our machines should interact with us in ways that arise naturally from their operation. For example, a kettle of boiling water can be heard as it gets hotter and the steam slowly makes its way out. Then, when the water boils, the air is forced through a small hole, producing a whistling sound; this makes sense to the user because boiling water releases steam. But when you're using a microwave or the dishwasher, a loud beeping noise isn't a natural form of communication. The arbitrary beep isn't really related to the natural act of heating up food, and unless you have experience with the different tones of the machines, you wouldn't know which appliance had beeped. Another concept Norman addresses, which goes hand in hand with natural interaction, is "affordances." An affordance is a way that we can interact with an object in the world; an object "affords" an interaction because it makes sense to us in some subconscious way. For example, a doorknob "affords" turning, and a button "affords" pushing; in this way, we know exactly how to physically interact with a machine or object even if we haven't seen it before. Norman suggests that future machines should not only have natural ways to interact and communicate with us but should also have natural affordances, so they make sense to use. Machines can then give us information by interacting with us physically; if we're going too fast, the steering wheel in a car can push back at us or tighten the seat belts.
    Towards the end of the chapter, Norman talks about the perceptions that humans and machines have of each other. With the new suggestive systems that are always trying to guess what we're thinking or predict our actions, we come to treat the machine's actions as predictable. If we assume that a machine is going to act in a way that reflects our interests, we could be wrong whenever it has predicted incorrectly - and this could be dangerous to humans. Norman says that machines should be predictable, because humans will never always act predictably. Instead of trying to guess what we want and doing that, the machine should follow a set course and always let us know what is happening through a "playbook." The playbook should explain how the machine is working and why it made the decisions it made. It could be presented as a video showing the steps of a process while it's happening, or through natural interactions and sounds coming from the operation of the machine.


     For once, I actually kinda agree with Norman on the first part of this chapter. The beeps that come from our machines are arbitrary and don't match the operation that is happening. But when I can hear the water moving and washing the dishes in the dishwasher or the clothes in the washer, I know what is happening and how it's working, and that's how I judge when the operation will be done. I don't agree with his idea of a car that has physical feedback. Maybe in the older days, when that kind of feedback was normal, but today people have already gotten used to the interactions of technology. We shouldn't reinvent the computer to be more natural, because that would be confusing for those who grew up using the keyboard and mouse. And this applies to many different technologies - even if the interaction or the sounds aren't natural, that doesn't mean we need to make them that way, especially if most of the population is already used to the product's current interaction paradigm. The only time natural interaction should be used in a new system is when the product is completely new and no one in the population has ever used it before. For example, if jet packs became common and I was using one for the first time, I wouldn't want it to beep at me arbitrarily. Does that mean I'm about to fall out of the sky, or just that I'm doing a good job piloting? I want it to tell me in some natural way if there's danger. But as I said, this only applies to completely new technologies where there isn't already a learned interaction paradigm. With existing systems, you're just going to confuse people and make them mad if you change the way they interact with their machines.

Extreme Programming Installed, Chapters 7 - 9

Extreme Programming Installed, Chapters 7 - 9
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

     In Chapters 7 - 9 of Extreme Programming Installed, the authors address release planning and the way user stories are estimated and scheduled in the Extreme Programming paradigm. Small, frequent releases are important because they not only give the customer a working product sooner but also allow for customer feedback throughout the development process. Each release shouldn't be just a demo of what the product could possibly do in the future; it should be a real product that the customer can start using on a regular basis. The authors also give examples of large-scale projects that can't seem to be broken into small releases, and show ways that incremental pieces can be delivered. For example, if you are tasked with building a "Distributed Manufacturing Control System" - microcomputers that talk over a distributed network and control each machine in a factory - it would seem that you need the whole system working before anything could be released. But you could build and program the microcomputer for a single machine and let it communicate mechanically with the existing legacy system. Then you could incrementally add a microcomputer to each machine in the factory, making the system more efficient along the way to the final system.
     When planning releases, it is important that the Customer, as defined in the first few chapters, is in charge of choosing the user stories that will be completed in a release. As before, the Customer presents user stories to the Programmers, who look at each story, ask for clarification, and estimate in points the time it will take to finish. Then, based on the number of points the team can finish together in a week, the Customer picks the most important user stories that can be completed by the release date. After this, the Programmers each sign up for user stories, making sure they don't pick too many and overshoot their capacity. An important note is that each Programmer should sign up for a user story alone or with a partner, not split the tasks of one story among several Programmers. This isn't to say that another Programmer isn't allowed to help you if you're stuck; it's so that tasks in a user story aren't forgotten and the story left unfinished.
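The selection step described above can be sketched in a few lines of code. This is my own illustration rather than anything from the book, and the story names, priorities, points, and velocity are all hypothetical: given the team's velocity in points, the Customer takes the most important stories that still fit.

```python
# Hypothetical sketch of XP release planning: given the team's velocity
# (points finished per iteration), the Customer picks the most important
# user stories that still fit in the release.

def plan_release(stories, velocity):
    """stories: list of (name, priority, points); priority 1 = most important.
    Returns the names of the stories chosen for this release."""
    chosen, budget = [], velocity
    for name, priority, points in sorted(stories, key=lambda s: s[1]):
        if points <= budget:  # story still fits in the remaining capacity
            chosen.append(name)
            budget -= points
    return chosen

stories = [
    ("login screen", 1, 2),
    ("search", 2, 2),
    ("report export", 3, 3),
    ("admin dashboard", 4, 5),
]
print(plan_release(stories, velocity=5))  # → ['login screen', 'search']
```

Sign-up would then happen per story: one Programmer (or a pair) takes "login screen" whole, rather than its tasks being split across the team.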

     After working at a major company this summer, I learned a lot about Extreme Programming because we used the system (or a variation of SCRUM, really) at the company. While we didn't have actual note cards for user stories, we had an online system that let us add user stories, define tasks for those stories, and assign a point estimate to them. Like in these three chapters, we would have release or iteration planning meetings where we would introduce each user story and vote on the number of points we thought it would take. When we reached a consensus, the estimate was recorded and a programmer signed up for the story. From experience, this system works EXTREMELY well. We rarely misestimated how long a task would take, so whenever we got to the end of an iteration or release, only a few features hadn't made it in. And because we were working on incremental releases of a new product, most of the things that didn't make it were minor bug fixes concerning strings or something. Also, having a "Customer" in charge of choosing the most important user stories for an iteration is extremely useful in giving the programmers direction.

Sunday, January 30, 2011

The Design of Future Things, Chapter 2

The Design of Future Things, Chapter 2
by Donald A. Norman

     In Chapter 2 of The Design of Future Things, Norman delves into the psychology between humans and the technology we create. As our machines and technology become more sophisticated, the interaction between humans and those technologies becomes more important. Norman divides the human brain's processing into three categories: visceral, or instincts, such as when we recoil from a hot stove; behavioral, which includes basic motor skills, tasks, and learned procedures; and reflective, which includes self-image, higher reasoning, and higher consciousness. Our technologies today serve to take away some of the necessary processing, and we interact with them at these levels. Our car, for example, may take away the visceral reaction to a bump in the road by cushioning the ride with shocks. We interact with the car behaviorally by turning the wheel and pressing the pedals. But our car does not communicate with us reflectively - that is, we can't reach a conscious decision together with our car.
    The reason communication between humans and machines isn't satisfying or completely possible is that machines and humans have no common ground. Machines may have sophisticated reasoning systems and sensors built into them, but they can never compare to the complex psychological system that humans use for making decisions and communicating. So the future of smart refrigerators that caution you against eating unhealthy food, or the car that programs a scenic route for you, is not yet within reach. Norman explains that systems that just "do" or just "demand" will never interact successfully with humans. Instead, we need our machines to make suggestions to us, in a more conversation-like manner, and to explain themselves.


     While Norman's points make sense, I don't think that machines need to learn to be polite and explain all their actions to us. One of the benefits of technology is that it does the things we need it to do, and we don't have to understand how it works. I don't see our technology making executive decisions for us and not allowing us to eat an egg, for example. If our machines are going to tell us anything at all, it needs to be a suggestion, and they need to understand that humans are always the authority. In this, I agree with Norman. But I wonder if we really need all of our technology to start suggesting things. I have never used a media recommender that gives suggestions based on what music or movies you already like, and I don't think I need a fridge that tells me I'm drinking too much or a car that yells at me for going too fast. While it would be nice to have automated cars and other automated things, I don't want the reflective side automated. I want to make all the choices about where we're going and the route, and let the car do all the visceral and behavioral things like steering and braking.

Extreme Programming Installed, Chapters 4 - 6

Extreme Programming Installed, Chapters 4 - 6
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

      In Chapters 4 - 6 of Extreme Programming Installed, the authors present an idea that is integral to Extreme Programming and Agile development practices: the user story. A user story is a description of something the user should be able to do, usually called a feature. Each user story represents some programming work to be done, and should specify the inputs to the procedure, the correct outputs, or the options and actions available to the user. When the design is expressed as user stories, the product grows through incremental updates, with new features added each time. The benefit is that there is always a working product to show to a customer.
     At the beginning of the planning process, the customers write all the user stories on index cards and give them to the programmers. The programmers look at each one, ask for clarification if needed, split a user story into multiple ones if it's too large, and estimate how long it will take to do. The estimation is done in "points," each of which represents a "perfect engineering week" - the amount of work you could get done if you were allowed to program with no interruptions for a week. So if a user story is given 2 points, it should take about 2 perfect weeks to finish.
     The customer is responsible for specifying acceptance tests for each user story; this way, the programmer knows exactly what functionality must be there and knows when the programming for the story is done. In Extreme Programming, testing is not a phase done at the very end of the development process. Instead, automated tests are built and constantly run, so that errors are caught right after a programmer finishes coding a section, when it's easiest to remember what changes were made and where the error might be.
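As a sketch of what such a Customer-specified, always-runnable acceptance test could look like - the story, the `order_total` function, and the 8% tax rate are hypothetical examples of mine, not from the book:

```python
# Hypothetical user story: "the cart shows the order total including 8% tax."
# The Customer supplies the expected numbers; the automated test runs after
# every change, so a regression is caught as soon as it is introduced.

def order_total(prices, tax_rate=0.08):
    """Sum the item prices and apply tax, rounded to cents."""
    return round(sum(prices) * (1 + tax_rate), 2)

def test_order_total():
    # Acceptance criteria as written by the Customer:
    assert order_total([10.00, 5.00]) == 16.20  # 15.00 plus 8% tax
    assert order_total([]) == 0.00              # an empty cart costs nothing
    assert order_total([19.99]) == 21.59        # a single item

test_order_total()
print("acceptance tests passed")
```

When the test passes, the programmer knows the story is done; when a later change breaks it, the failure points straight at the code just touched.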


     I really liked this section because it perfectly describes the way my group worked when I interned this summer. We made user stories and tasks for the user stories (online, though, not with note cards) and we estimated points for each user story. This really helped keep the group organized and let us all know exactly what work we needed to do and the time frame for getting it done. And although writing automated tests and tools can be time-consuming - some programmers may have that as their only job - I know from experience that it's worth finding errors as soon as they're made. I can't tell you how many times I've forgotten how I coded something in a school project and had no idea what to change to fix it.

Sunday, January 23, 2011

Extreme Programming Installed, Chapters 1 - 3

Extreme Programming Installed, Chapters 1 - 3
by Ron Jeffries, Ann Anderson, and Chet Hendrickson

      In the first three chapters of Extreme Programming Installed, three personas in the programming process are introduced: the Customer, the Programmer, and the Manager. Each of these people has a specific job and duty, but the core idea of Extreme Programming is communication. The Customer is in charge of communicating the specifications for the software and of always being available to everyone. The Programmer is in charge of implementing the specifications that the Customer provides and of going to the Customer for clarification on requirements. Lastly, the Manager is in charge of doing anything possible to make the Customer's and Programmers' jobs easier and faster, and of fostering communication between members of the team. Everyone on the team is equal and working towards the same goal.
      The Extreme Programming paradigm emphasizes easy communication. The customer is not an abstract person that the product is being developed for, but a physical person who is (preferably) on-site and heavily involved in the development process. The Customer writes stories about features that are needed, and the Programmer implements those stories. Open communication allows obstacles to be overcome quickly because the programmer can get immediate feedback from the Customer - which is much easier if the Customer is already on-site.

      I felt that a lot of the ideas in the first 3 chapters of Extreme Programming Installed were very valid. This past summer I interned at a major software company, and we used a variation of Extreme Programming. What made collaboration, programming, and communication so easy was the feeling of equality. Everyone was equal, whether they were the software manager, project manager, programmer, intern, or customer representative. That way, when there was any ambiguity or question, a person could easily get an answer instead of guessing at the code, as the authors describe in Chapter 3. And the emphasis on face-to-face communication instead of email or phone calls is also correct; when it's not possible to walk to a person's office and ask a question, most of the time that communication will not happen, leading to problems in the development process.

Saturday, January 22, 2011

The Design of Future Things: Chapter 1

The Design of Future Things, Chapter 1
by Donald A. Norman


       The first chapter of The Design of Future Things presents Norman's basic argument for the entire book: machines and future technology must get better at communicating with humans, and must learn to understand their own limitations and when to relinquish control. Norman argues that true "artificial intelligence" is not possible in the near future. Current systems are not intelligent; they are instead a bank of reactions that designers have pre-programmed for the outcomes they anticipated. But it is not possible for us to program for every outcome - we will always forget at least one. Norman says that instead of programming each possible situation, we need to program our machines and technology to listen to us and respond with better communication, acting in a "symbiotic relationship." If machines can recognize what they're good at but also recognize their limitations, then we can take advantage of technological advances without the risk of being controlled or overruled by the decisions our creations make for us.

      The first chapter of this book was very interesting because it raises some provocative questions about the growth of technology, specifically: "What happens when technology thinks that it's smarter than us?" We now have new versions of traditional technology - cars that can sense other vehicles, washing machines that can detect the size of a load, recommendation systems that claim to know our preferences - so what keeps these technologies from overruling the decisions that we make? I agree with Norman that we need to place limits on the authority of our technological devices, having them "recognize" that if a human turns a feature off or makes a decision, the human must know best. The problem is that human interactions are so subtle, and so different between cultures, situations, and people, that it is impossible to program this in accurately. There is no way to completely cover all your bases, and it is also impossible to program a device that can make decisions in a human way. While Norman agrees with this and says such innovations are many years away, it is hard for me to imagine such a time at all, no matter how far in the future.

Tuesday, January 18, 2011

Introduction!


What is your name?
Aaron Loveall

What will you be doing after graduation?
Working for Cisco Systems in Richardson, TX

List your computing interests (HCI, information retrieval, databases, etc.)
I'm definitely interested in HCI, haptic and touch screen systems, and gesture and motion control. I'm also interested in mobile platforms, entertainment and media distribution, and gaming.

List your computing strengths (a language, focus area, etc.)
Extremely fluent in Java and C++. I also have LOTS of experience coding for Android phones and some experience in iPhone development. I am very good at debugging code, and can usually figure out a solution to a problem by sitting and coding in large blocks of time.

What was your favorite computer science project that you worked on and why?
My favorite computer science project was the final project for CHI. We worked on an Android app used to review flash cards; you could add, edit, and delete flashcards on a remote website that communicated with the Android phone, and it had an easy touch-screen swiping interface for viewing the cards. It gave me a lot of insight into how the phone worked, and definitely added to my Android coding experience.

What was your least favorite and why?
My least favorite CS project was the final project for CSCE 441: Computer Graphics. It involved taking motion capture data and interpolating it using a system that we were just given, with no explanation. It wasn't that I didn't like the subject of the programming - I actually did, a lot - but we were handed a system with 10,000+ lines of code that didn't make any sense. It would have taken weeks to go through the code and understand how it worked, so I just didn't do the project.

What do you see as the top tech. development of the last 5 years and why?
The top technical development of the last 5 years was taking the "computer" and all of its typical features and putting them in our pockets. The iPhone and Android devices now have most of the features that personal computers had 5 - 10 years ago, and are even faster and more intuitive to use than before. These devices have so much power and potential with their app stores that a normal person could get by with just one of them (for email, calling, internet browsing, videos, and music) and wouldn't even need a real computer.

Provide some insight into your management/coding styles. This could include your preferred coding method, how you use line breaks, what time of day you work best, or any other relevant programming-related facts
I code best by myself, and I like to have a lot of control over the organization and structure of the code. I comment a lot and make sure my code is always well-formatted and legible, because I am a little OCD and like things to be neat. I also have a hard time sitting down and coding for just a short period of time; if I'm in the right mood, I get most of my work done by programming for 5+ hours straight, as long as I'm in the right environment (it can't be too quiet, but things have to be happening around me).

Make sure to include a picture of yourself: