Wednesday, December 14, 2011

This post concludes the trilogy on Issue Driven Project Management, or IDPM (see Part I: Make No Assumptions and Part II: A Technical Review for Your Thoughts). This time around, we were given only three commands to implement and thus less time to complete the project; nevertheless, that did not stop the project from being a challenging one. Parts I and II established that IDPM was difficult enough to put into practice as it was. Here, though, we faced the added challenge of building three new features into the code base of the team whose system we, Team Pichu, had reviewed.
For the project, the three commands we had to implement were set-baseline, monitor-power, and monitor-goal (the project specifications describe them in more detail). The set-baseline command takes a given date and stores the power consumed during each of the 24 hours of that day. The monitor-power command continuously outputs the current power consumption for a given source at a specified interval in seconds. The last command, monitor-goal, outputs the current power and states whether or not the source is meeting its power conservation goal, as defined by a user-specified goal and a prior call to set-baseline. Fortunately, we were able to implement all three commands successfully, though at the cost of a late submission on my part.
Many issues arose during development, including the use of two Java classes that were new to us: Timer and TimerTask. A Timer object may start and stop multiple TimerTask objects, running each one once or at specified intervals. Before I could work on monitor-goal, our team had to make sure that monitor-power was working first, since monitor-goal builds upon it. Oddly, at each interval, five or more copies of the same power value would show up. Ultimately, we learned to use Timer's schedule method instead of scheduleAtFixedRate and not to call TimerTask's run method ourselves, since Timer's schedule method already invokes run.
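Here is a minimal sketch of that lesson, assuming a hypothetical fetchCurrentPower() stand-in for the actual WattDepot query; it illustrates the pattern, not our team's actual code:

```java
import java.util.Timer;
import java.util.TimerTask;

public class MonitorPowerSketch {
  public static void main(String[] args) {
    final int intervalSeconds = 10; // the user-specified interval
    TimerTask task = new TimerTask() {
      @Override
      public void run() {
        // The real command would query the WattDepot client here.
        System.out.println("Current power: " + fetchCurrentPower() + " W");
      }
    };
    // schedule() invokes run() for us at every interval, so there is no
    // need to call task.run() manually - and no need for
    // scheduleAtFixedRate(), which tries to "catch up" on delayed runs.
    new Timer().schedule(task, 0, intervalSeconds * 1000L);
  }

  // Hypothetical stand-in for the WattDepot client call.
  private static double fetchCurrentPower() {
    return 1000 * Math.random();
  }
}
```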
The monitor-goal command took me a lot longer to implement than expected. One challenge was implementing the check for each hour without hardcoding all 24 if-else blocks (and a single stray whitespace character had me debugging for another hour). In the end, I went with the if-else blocks only because I was pressed for time; with more time to think about it, I could have implemented it differently. Because final projects for other classes took up much of my time, working on Version 2 was a lot more difficult than it should have been. Initially, I had thought I could improve upon the original code base and fix a few bugs in the error handling, but since that wasn't a priority and time was running out, I had to drop it as an invalid issue. It is rather surprising that we weren't required to fix the original code base created by the team whose system we reviewed; I suspect that industry best practice is the opposite.
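For what it's worth, here is a sketch of the loop-free alternative: store the baseline in a 24-element array indexed by the hour of the day, so one lookup replaces the 24 if-else blocks. The baseline values and goal percentage below are made up for illustration:

```java
import java.util.Arrays;
import java.util.Calendar;

public class MonitorGoalSketch {
  public static void main(String[] args) {
    // Hypothetical baseline: power consumed during each hour of the
    // baseline day, as stored by a prior set-baseline call.
    double[] baseline = new double[24];
    Arrays.fill(baseline, 1000.0);

    double goalPercent = 5.0;    // user-specified conservation goal
    double currentPower = 900.0; // would come from the WattDepot client

    // One array lookup instead of 24 if-else blocks.
    int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
    double target = baseline[hour] * (1 - goalPercent / 100.0);
    System.out.println(currentPower <= target
        ? "Meeting the power conservation goal"
        : "Not meeting the power conservation goal");
  }
}
```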
Group communication among all members of the team, however, was much, much better this time around. We all got back to each other within a day, or right away, even if only to say that we couldn't work on the project at the moment. Yet while communication was a lot better, I learned that it alone wasn't enough to move the project along at an appropriate pace. Motivation, physical and emotional drain, and time management are also strong factors that can affect the overall quality of a software project. Overall, despite the costs we incurred, I believe the quality of our implementation of new functionality on an existing system was quite decent. It could have been better, but then again, software engineering is a full-time job, and as full-time students, there had to be a tradeoff. In relation to the Three Prime Directives of Software Engineering, as elaborated on in Part II, the system we first reviewed and then improved upon definitely accomplishes a useful task and is easy enough for the user to download, install, and use. And although the code structure could be improved to streamline the addition of new features, it was moderately simple for a developer to enhance the system.
Marifel Barbasa -- Software Engineering Blog -- Fall 2011
Dr. Philip M. Johnson -- Information & Computer Sciences Course 314
University of Hawaii at Manoa -- Honolulu, HI
Friday, December 2, 2011
A Technical Review for Your Thoughts
One lesson learned in software engineering is that a technical review is among the most effective methods of project inspection. Let's face it: when things are written down, they're easier to remember and easier to understand. A walkthrough - briefly going over code or a problem with a teammate - often leaves much unsaid and undone, and doesn't provide full coverage of the system at hand.
The concepts of Issue Driven Project Management (IDPM) and Continuous Integration (CI) were introduced in the "Make No Assumptions" blog post. To recap, IDPM is a practice by which multiple developers can work efficiently and effectively by associating tasks with issues, keeping tabs on the statuses of those issues, and letting all project team members view every issue so they know exactly who is working on what and whom to go to with questions about particular sections of the project. CI uses a tool such as the Jenkins server to automatically verify that the system builds and passes its tests at all times, and it requires active, consistent participation from all team members throughout the project's lifespan.
IDPM and CI play key roles in this technical review of a system built to the same project requirements my team, Team Pichu, had worked under. (For details about Team Pichu's project, please refer to my prior blog post.) The project specifications are outlined on this webpage, under A24. To summarize, the task was to develop a command line interface program for understanding various aspects of energy and power consumption in the Hale Aloha Towers (freshman dorms) on the University of Hawaii at Manoa campus. The program creates a client to the WattDepot server and queries the data that the server accumulates from meters in the Hale Aloha Towers. WattDepot was discussed in this blog post. This technical review addresses the Three Prime Directives of Open Source Software Engineering, which appear in bold as section headers in the rest of this post and were discussed in an earlier blog post reviewing the Logisim software.
Review Question 1: Does the system accomplish a useful task?
Yes. Using every possible tower (Ilima, Lehua, Lokelani, Mokihana) and some lounges, I exercised every key piece of functionality listed on Team Teams's home page: the commands current-power, daily-energy, energy-since, and rank-towers. In addition, the necessary support commands, help and quit, have been implemented. The system definitely accomplishes the useful task of reporting energy and power consumption in the Hale Aloha Towers residence halls on the University of Hawaii at Manoa campus.
Review Question 2: Can an external user successfully install and use the system?
The home page, introduced above, provides a clear description of what the system is supposed to accomplish; however, the bold-faced commands are not explicitly identified as commands, their usage is not given, and there is no sample input or output. The User Guide wiki page, on the other hand, provides more detail and is concise enough that no user should have trouble downloading, installing, and executing the system. It also provides a link to the downloadable distribution, which was previously downloaded for Review Question 1, and the distribution does indeed provide an executable jar file in the top-level directory by the name of hale-aloha-cli-teams, which follows the proper naming convention for this project. The zip file distribution also carries major and minor version numbers stamped with the date and time at which the zip file was created.
Testing the system under valid inputs had already been accomplished for Review Question 1. To reiterate, here are the commands used for the test: current-power, daily-energy, energy-since, rank-towers, help, and quit. The towers tested were Ilima, Lehua, Lokelani, and Mokihana; the two lounges tested for each tower were A and E. The date used for daily-energy and energy-since was 2011-11-30, while the start and end dates used for rank-towers were 2011-11-30 and 2011-12-01. The system responded well to the valid inputs provided by printing the data requested, but it could have handled invalid inputs better by prompting the user with a message (the program only did this sometimes). For example, it failed to handle the empty string "", after which no ">" cursor would show up, yet entering valid commands was still possible. The result was the same (a missing cursor) whenever the number of arguments was less than required, and nothing would print out at all if the number of arguments was more than required (though the cursor would at least be there to prompt the user again). Nevertheless, overall, yes, an external user can successfully install and use the system (downloading, unzipping, installing, and executing the system all take less than a minute, in fact).
Review Question 3: Can an external developer successfully understand and enhance the system?
The Developer Guide wiki page does indeed provide clear and comprehensive instructions on building the system and even includes tips on IDPM, as well as the team's experience using it. Appearance-wise, it just would have been nice if the commands for steps 2.1 and 2.2 were placed on the traditional light-gray background. Quality assurance tools such as Checkstyle, PMD, and FindBugs were used and are mentioned in the Developer Guide, among other useful standards, such as using a specific format (a link is provided) in Eclipse. Since the IDPM and Development Guidelines are given in great detail, a new developer can certainly ensure that any code they write will adhere to those standards. And indeed, the guide provides a link to the Jenkins CI server associated with this project. In addition, JavaDoc generation is explained clearly in a subsection of its own on the Developer Guide.
After checking out the sources from SVN using the https URL indicated on this webpage, JavaDoc documentation was successfully generated from the command line by running the command ant -f javadoc.build.xml. The JavaDocs are certainly well written and informative, and they provide a good understanding of Team Teams's architecture and the structuring of individual components. The names of the components clearly indicate their underlying purposes, and the system appears to be designed to support information hiding. The only missing JavaDoc comments are those for the default constructors of each class and for the package-private fields of the Processor class; then again, those fields are explained by in-line comments in the source code, and default constructors don't necessarily exist explicitly in the source code.
Next, the system was verified to build from source without errors by running the command ant -f verify.build.xml. Coverage information was also successfully generated using the command ant -f jacoco.build.xml. The JaCoCo coverage report showed that no tests were implemented for the Main and Processor packages, but that 97% of the instructions and 82% of the branches were covered in the Command package. This is great, since the Command package is the meat of the project, but it would have been nice if the team had provided JUnit test cases for the Main and Processor classes as well (although they do have a manual test class for the Processor class).
Unfortunately, whenever I executed the class files from Eclipse, the error java.lang.UnsupportedClassVersionError would show up; this error typically means the class files were compiled with a newer Java version than the one running them, so I'm not sure whether the fault lies with my Eclipse setup or with the files in question. As for visual inspection of the test classes, their test cases appear to cover nearly all Processor functionality, and there are multiple (five or more) assert statements in each JUnit test class. In conclusion, the current set of test cases appears sufficient to prevent a new developer from making enhancements that would break pre-existing code.
As for the source code, coding standards appear to be followed, and appropriate in-line comments as well as detailed JavaDocs are provided. The code is fairly easy to understand, and the amount of commenting is just right.
From the Issues page associated with this project, it is clear which of the three developers worked on what, so an external developer can use this page to determine the best person to ask about a specific issue. As for the distribution of work, two developers appeared to do equal amounts of work, each roughly half again as much as the third developer.
Lastly, the Jenkins CI server associated with this project was examined. Apart from the known outages of the WattDepot server from November 21st to the 24th, there were no build failures. The project began on November 10th and ended on the 29th; overall, it was worked on in a consistent fashion, with only the occasional day skipped (meaning the project never went for more than one to two days without being worked on). Moreover, examination of the Updates page shows that approximately 14 out of 49 revisions were not associated with an issue, meaning that only roughly 71% were; the true percentage is even lower, because quite a few of the revisions were not commits but edits to the wiki pages. This is not good, since at least 90% of commits should be associated with issues.
In conclusion, a new external developer could understand the current hale-aloha-cli-teams system with ease, and could therefore enhance it in the future, because of the well-written test cases, JavaDocs, in-line comments, and User/Developer Guide wiki pages. Furthermore, overall, Team Teams successfully accomplished the Three Prime Directives of Open Source Software Engineering.
Tuesday, November 29, 2011
Make No Assumptions: Software Engineering is Not Software Engineering Unless You've Been Through IDPM
I had not truly gauged the full impact of software engineering until now. This project is multiple belts up from the white-belt WattDepot katas we previously tackled: it combines our basic skills with WattDepot, our fleeting glimpse of Continuous Integration (CI), and our whirlwind introduction to Issue Driven Project Management (IDPM). And the most lethal ingredient of them all is teamwork, as implied by CI and IDPM. Lethal, but unavoidable.

When working with others on a project, CI makes things a bit easier by assuring team members that the system they are developing is in working condition at all times. Build verification is automated by a CI server such as Jenkins, which triggers an automatic verify of the system whenever a team member commits changes to a Subversion (SVN) repository, such as one hosted by Google Project Hosting. CI also requires team members to commit regularly - at least once every two days, say. Meanwhile, IDPM is designed to keep all team members on task at all times by associating each task with a specific issue; ideally, no issue should take longer than a few days to complete. Successful IDPM requires frequent group interaction, always having another issue at hand after completing one, and general knowledge of where the project is headed at all times. In our case, Google Project Hosting (along with SVN and Jenkins) was used to drive IDPM; there, it is easy to see which tasks are Accepted, Started, or Done (among other statuses) by opening all issues under Grid view.
The project, which can be found at this website, was to create a command line interface program for understanding various aspects of energy use in the Hale Aloha Towers (freshman dorms) of the University of Hawaii at Manoa. My team consisted of me and two other members; as in the real world, we did not get to choose whom to work with. We had to choose a name for our team and, after mulling over a brainstorm-generated list, settled on "Pichu" (not necessarily after the Pokemon of the same name). We exchanged phone numbers and Skype usernames and had the project broken down into the important issues at hand, all within the first day the project was assigned to us.
As mentioned above, the dynamic trio of Google Project Hosting, SVN, and Jenkins was used to drive IDPM. Before each commit I made, I ran verify on the system (except for a few times when I was positive the program would pass - but alas, lesson learned: never assume). Then, after a commit, I would check the associated Jenkins job to make sure that it had built the system successfully. To address IDPM, after the disastrous surprise in-progress technical review, I made sure to associate each and every commit (known as a revision in Google Project Hosting) with the issues it addressed. To take it a step further, when I updated the issues themselves, I added links to the revisions that handled them.
Our team originally had the issues all planned out from the beginning, and as we finished our main issues, any new issues we thought up were simply added to the existing pool. Because of IDPM and constant communication overall, we were able to gauge what everyone else was doing and hence finish the project on time. The basic program functionality comprises six commands; a description of each can be found on Team Pichu's home page. There were four main components to the program: the Main class, the Processor class, the classes that implement the Command interface, and the JUnit test classes. Main contains the driver for the program, establishes the connection to the WattDepot server, and creates an instance of the Processor class for each command or invalid input entered by the user. Processor takes the input, parses it, and determines which Command-type class to call based upon the input; if an error arises with the input, Processor returns the appropriate error message to Main. Each Command-type class must implement the Command interface's run, getOutput, and getHelp methods; in addition, each Command-type class should have a default constructor (alongside constructors with parameters, if any). If a Command-type class takes one or more parameters to its constructor, then that constructor should throw an IllegalArgumentException on bad input, and a checkArgs method should also be implemented. Lastly, each Command-type class should have an associated JUnit test class (Main and Processor have test classes as well). A sketch of this structure appears below.
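To make that structure concrete, here is a hedged sketch of the Command interface and one Command-type class; the exact signatures are assumptions based on the description above, not Team Pichu's actual source, and each type would live in its own file:

```java
// Command.java - the contract every Command-type class fulfills.
interface Command {
  void run(String... args); // query WattDepot and prepare the result
  String getOutput();       // result text for Main to print
  String getHelp();         // usage line shown by the help command
}

// CurrentPower.java - one Command-type class; a constructor that takes
// parameters validates them, as described above.
class CurrentPower implements Command {
  private final String tower;
  private String output = "";

  public CurrentPower(String tower) {
    if (tower == null || tower.isEmpty()) {
      throw new IllegalArgumentException("current-power requires a tower name");
    }
    this.tower = tower;
  }

  @Override
  public void run(String... args) {
    // The real implementation queries the WattDepot client; stubbed here.
    output = tower + "'s current power: 1234 W";
  }

  @Override
  public String getOutput() {
    return output;
  }

  @Override
  public String getHelp() {
    return "current-power [tower] : prints the tower's current power";
  }
}
```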
Unfortunately, we were not able to implement Java Reflection, so most of the code is hardcoded. For example, if a Command-type class is added to the Command package, then changes to Processor and Help need to be made, and a corresponding JUnit test class needs to be created in the Test package. Hence, decreasing the number of places where code needs to be updated whenever a new command or feature is added would be a worthwhile future improvement (a sketch of the idea follows below). Overall, this experience has been completely meaningful and worthwhile because of the real-world experience of working with others in real time to produce a single, smooth, functioning program. Communication, which could certainly have been better, was definitely the key element of success, along with consistently working on the project over a long period of time. I felt that communication and consistency were both quite challenging when one fell out of sync with the other: without constant feedback through communication, it becomes much harder to gauge what each team member should be working on. Another critical learning experience was what to do in the unexpected event of some greater force negatively impacting the project: the WattDepot server crashed with a hard disk failure - its first in about four years. Luckily, our group was able to step up to the plate and align our project with the newfound situation. In conclusion, I feel that IDPM is much more difficult than it first seemed; moreover, the overall quality of our software could have been better and less hardcoded.
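As a rough illustration of that improvement, a reflection-based Processor could derive the class name from the user's command and instantiate it, so adding a new command would require no Processor changes at all. The package name and naming convention here are assumptions (this builds on the Command interface sketched earlier):

```java
// ReflectiveProcessor.java - a sketch of reflection-based command lookup.
class ReflectiveProcessor {
  Command lookup(String commandName) throws Exception {
    // Convert "current-power" to "CurrentPower".
    StringBuilder simpleName = new StringBuilder();
    for (String part : commandName.split("-")) {
      simpleName.append(Character.toUpperCase(part.charAt(0)))
                .append(part.substring(1));
    }
    // Hypothetical package; the real one would match the project layout.
    Class<?> cls = Class.forName("halealohacli.command." + simpleName);
    return (Command) cls.getDeclaredConstructor().newInstance();
  }
}
```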
Wednesday, November 9, 2011
Energy Drained is Energy Gained
WattDepot: A one-stop shop for all your energy and power needs
Complementing the previous blog entry on energy in Hawaii, I recently gained firsthand experience in manipulating energy data output, all from the comfort of my own home, using the WattDepot API. WattDepot is a web service that allows a host computer to connect as a client to an existing web server. This web server collects energy and power usage data recorded by meters and saves the information in a database, which the WattDepot client can access. The meters I was concerned with are located in the Hale Aloha Towers (freshman dorms) on the University of Hawaii at Manoa campus. The sheer proximity of the meters only made this task of working with live energy and power data more relevant and meaningful. Not to say that it wasn't stressful.
Enter the WattDepot katas:
Kata 1: SourceListing
The task was to print out the name of each source of data (where the meters were installed) alongside its description. This kata was absolutely deceiving, only because it was so straightforward given the wattdepot-simpleapp to start us off. A simple copy-and-paste from the simple application, with minor tweaks to the output format, made this kata breeze by in about 5 minutes.
Kata 2: SourceLatency

This seemed fairly straightforward: print out the sources and their latency values (in seconds). Another simple copy-and-paste from wattdepot-simpleapp got me off to a quick start. Wonderful - it printed out the latency values and the sources just fine. But not so wonderful - it didn't print them in ascending order of latency. I ended up taking the longest time on this kata, around 4 or 5 hours (not including breaks), because of time spent searching for a suitable Java Collections class that would allow duplicate keys to map to different values. I finally figured out that Java doesn't have any built-in library that supports this simple concept. While Googling, I discovered the Guava libraries, created and maintained by Google employees, who use them as their core libraries for Java-based projects. It took less than a minute to download the jar file and import it into my WattDepot project in Eclipse. Now I could work with the Multimap interface, which allows duplicate keys to map to more than one value. Unfortunately, printing out the keys and values of a Multimap in the format required by this kata was not a task I wished to partake in, as it seemed to be more trouble than it was worth; I felt the same way about writing my own Collections class. In the end, it was the first proper Collections package I had written, CompactDiscCollection, that gave me the inspiration to use a TreeSet of String objects, where each String value was a concatenation of the latency and the source name. Yes, I would call this hardcoding, as it relied on the fact that my latency values were two digits long, but it was quite the shortcut, alleviating much stress.
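Here is a minimal sketch of that TreeSet shortcut, with made-up sources and latencies; zero-padding the latency keeps lexicographic order aligned with numeric order, which is exactly why it breaks once latencies exceed two digits:

```java
import java.util.TreeSet;

public class SourceLatencySketch {
  public static void main(String[] args) {
    TreeSet<String> byLatency = new TreeSet<String>();
    // Each entry is "latency + source name"; two-digit zero-padding
    // makes String ordering match ascending latency.
    byLatency.add(String.format("%02d %s", 12, "Ilima-04-lounge"));
    byLatency.add(String.format("%02d %s", 7, "Lehua-11-lounge"));
    byLatency.add(String.format("%02d %s", 45, "Mokihana-02-lounge"));
    for (String entry : byLatency) {
      System.out.println(entry); // prints in ascending latency order
    }
  }
}
```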
Kata 3: SourceHierarchy
The task was to output a text-based visual representation of the hierarchy of sources and their subsources. I probably spent the second-least amount of time on this kata, about 1 to 2 hours. Browsing the relevant topic on the class Google Group definitely helped. Like others before me, I wasn't sure whether or not to include certain sources as top-level sources, since some of them were already listed as subsources of other sources. Ultimately, I didn't list them, to avoid redundancy in the output. The time-consuming (but not difficult) part of this kata was parsing all the information returned by certain WattDepot methods. But learning a little more about java.util.regex.Pattern was quite rewarding.
Kata 4: EnergyYesterday

Here, the task was to retrieve yesterday's energy consumption data for each source. I believe this kata took me longer to complete than the previous one. Since it took such a while, I decided to leave in the code I originally wrote to acquire yesterday's year, month, and date (commented out, of course). To my surprise, I hadn't read the Calendar API closely enough, because there already exist two methods that can acquire yesterday's date. Setting the start and end timestamps to 00:00:00.000 AM and 11:59:59.999 PM was a simple manipulation of Calendar objects, but getting to know how these objects mix with XMLGregorianCalendar objects and how timestamps actually work was quite a bit to tackle all at once. What frustrated me most was trying to figure out which of the 5 existing Tstamp.makeTimestamp() methods from the WattDepot API to use.
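A sketch of the timestamp setup described above, using plain java.util.Calendar (the conversion to XMLGregorianCalendar via WattDepot's Tstamp utility is omitted):

```java
import java.util.Calendar;

public class EnergyYesterdaySketch {
  public static void main(String[] args) {
    // Start of yesterday: 00:00:00.000.
    Calendar start = Calendar.getInstance();
    start.add(Calendar.DAY_OF_MONTH, -1); // rolls the date back one day
    start.set(Calendar.HOUR_OF_DAY, 0);
    start.set(Calendar.MINUTE, 0);
    start.set(Calendar.SECOND, 0);
    start.set(Calendar.MILLISECOND, 0);

    // End of yesterday: 11:59:59.999 PM.
    Calendar end = (Calendar) start.clone();
    end.set(Calendar.HOUR_OF_DAY, 23);
    end.set(Calendar.MINUTE, 59);
    end.set(Calendar.SECOND, 59);
    end.set(Calendar.MILLISECOND, 999);

    System.out.println(start.getTime() + " .. " + end.getTime());
  }
}
```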
Kata 5: HighestRecordedPowerYesterday

This kata was quite a change from the previous one, as we were tasked with calculating the highest power (not energy this time) recorded the day before for each source. The class Google Group and the kata specification helped me determine that 15-minute time intervals, for a total of 96 queries against the database, would provide more accurate data. But there were 64 sources, so with each source querying the database 96 times, it took quite a while to produce the output, which was not at all fun and friendly. I had to move to a place with a faster Internet connection and wait a few minutes before acquiring decent output (not without red error messages from a catch clause due to miscellaneous errors). I believe I took about 2 hours on this. This kata was definitely more trouble than fun.
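The scan itself reduces to a simple loop over the 96 intervals; this sketch assumes a hypothetical queryPower() standing in for the WattDepot call:

```java
import java.util.Calendar;

public class HighestPowerSketch {
  public static void main(String[] args) {
    // Start at the beginning of yesterday.
    Calendar t = Calendar.getInstance();
    t.add(Calendar.DAY_OF_MONTH, -1);
    t.set(Calendar.HOUR_OF_DAY, 0);
    t.set(Calendar.MINUTE, 0);
    t.set(Calendar.SECOND, 0);
    t.set(Calendar.MILLISECOND, 0);

    // 96 fifteen-minute intervals cover all of yesterday.
    double highest = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < 96; i++) {
      highest = Math.max(highest, queryPower("Ilima", t));
      t.add(Calendar.MINUTE, 15);
    }
    System.out.println("Highest recorded power yesterday: " + highest + " W");
  }

  // Hypothetical stand-in for the per-interval WattDepot query.
  private static double queryPower(String source, Calendar when) {
    return 1000 * Math.random();
  }
}
```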
Kata 6: MondayAverageEnergy

The last task was to compute and print each source's average energy consumption over the two previous Mondays. It was simple enough to copy over the contents of Kata 4 and begin computing the offset required to reach one Monday ago, relative to the current date. After that minor calculation, adding another 7 days to the offset did the trick for two Mondays ago. This probably took about an hour to complete. As with Kata 5, I had more trouble acquiring the data here, perhaps because I was querying data from over a week ago.
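The offset calculation might look something like this sketch (my actual arithmetic may have differed):

```java
import java.util.Calendar;

public class MondayOffsetSketch {
  public static void main(String[] args) {
    // Offset from today back to the most recent past Monday.
    Calendar oneMondayAgo = Calendar.getInstance();
    int offset =
        (oneMondayAgo.get(Calendar.DAY_OF_WEEK) - Calendar.MONDAY + 7) % 7;
    if (offset == 0) {
      offset = 7; // today is Monday, so use last Monday
    }
    oneMondayAgo.add(Calendar.DAY_OF_MONTH, -offset);

    // Adding another 7 days to the offset reaches two Mondays ago.
    Calendar twoMondaysAgo = (Calendar) oneMondayAgo.clone();
    twoMondaysAgo.add(Calendar.DAY_OF_MONTH, -7);

    System.out.println(oneMondayAgo.getTime());
    System.out.println(twoMondaysAgo.getTime());
  }
}
```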
Conclusion

To be honest, the title of this blog post is the first thing that popped into my mind. Yay, something catchy. Yay, it rhymes. But after getting this far into the post, I realize the title is not without purpose, for indeed, energy drained (into these katas) is energy gained (I feel much, much more knowledgeable about Java APIs and programming lessons in general). I learned that the folks at Google are even more awesome than I knew, because they've recognized the deficiencies in the Java Collections API and shared their extensions to it with the world. I have also learned that no matter how many different types of collections are created by others in the world of open source, you will usually find that a certain promising-looking collection just doesn't meet your needs exactly as you wish. And I practiced the art of lazy programming by pressing String objects and Java's TreeSet into service instead of going through the trouble of writing a collection class to suit my needs. Hardcoding was my friend during this time of stress, as I was pressed beyond reasonable allowable time as it was. Regardless, I have never been better off. Energy data manipulation is just plain cool - and quite useful for the future.

Tuesday, November 1, 2011
Small Steps for Clean Energy in Hawaii
Growing up in Hawaii, I was always warned by my parents to stop playing video games or to turn off electric devices because the cost of electricity was so high. But it wasn't until today, thanks to these screencasts, that I discovered just how high that cost is: roughly three times the mainland's, at $0.30 per kilowatt-hour (kWh). Leaving these devices on for most of the day is certainly not a smart idea, but it's quite difficult to break the habit when one is a computer science major. I realized, though, that if I want to play a part in the Hawaii Clean Energy Initiative, there's no better time to start than now.
The Hawaii Clean Energy Initiative was signed by then-Governor Linda Lingle in January 2008 as a partnership between the State of Hawaii and the U.S. Department of Energy. It requires full participation and support from all Hawaii residents if, together, we are to achieve 70% clean energy by the year 2030. Here, 70% clean energy means that 40% of the energy projected for 2030 is generated from renewable resources and 30% of it is eliminated through conservation and other efficiency measures.
Yet 40% renewable energy is just the start. What makes Hawaii an even better place to live, apart from its unique geography, climate, and mix of people, is that, unlike any other state, it has the potential to generate all of its power from renewable resources; it has almost every source of renewable energy available, including wind, wave, solar, and geothermal. Renewable resources are generally more cost-effective for Hawaii, and because of its small size compared to other states, Hawaii's energy needs are modest enough that an all-renewable energy supply is possible.
Negative consequences loom ahead if we do not clean up our act now. Hawaii still generates most of its energy from imported oil, which, as we know, is extremely expensive, and the price is only likely to climb higher as the years go by. Furthermore, Hawaii currently generates its energy inefficiently: each island has its own source or sources of energy, and none is connected to another. These unconnected energy grids aren't what we'd call teamwork. If we don't do something about this, not just the prices of gas and electricity but also those of water, food, clothing, and more would go up.
In conclusion, if Hawaii is to remain the dear place we call home, we must take it upon ourselves to conserve energy, as much as we can, on a daily basis. Nothing big was ever achieved in a giant leap. It's the small steps that we take each day that truly count in the end.
Tuesday, October 25, 2011
The Art of Shining within Steel Confines
Freedom in America doesn't come cheap. You still have to follow the laws within society, which are as strict as steel, in order to be considered a good and civilized citizen. The same goes in the software engineering world. Most of you will find yourselves in a business that is not run by you. There is little freedom to code in whatever way you desire or to say no if a software review is requested of you when you have so many other matters of importance to attend to. Midway through this semester, I've learned that I've missed out on so much over the past two years. I thought that if I followed the norms and did all that was required of me in my classes, I'd be on the road to success. Apparently, that thinking had always been far off-key. The following questions and my answers to them reveal that shining within steel confines is an art, not a sequential computer program as I had thought it to be all along.
1) What is the difference between standards and feedback? What are the many ways in which you receive feedback in this class?

Standards and feedback are vital to the three prime directives of open source software. Standards are required to allow people to work together in a way that minimizes unnecessary surprises and technical issues. With standards in place, people can worry less about doing the right thing and focus more on the quality of their communication with others. Feedback, on the other hand, is the key to improvement. Success in the software engineering world is not an end goal but constant improvement, and with constant feedback, it can be achieved. While standards provide structure, feedback shapes personal and professional growth. In this class we receive feedback from partners or group members, other class members, the professor, the designated mailing list, automated quality assurance tools such as Checkstyle or FindBugs, and the outside online community.
2) List the four ways in which your professional persona is assessed in class and elaborate on each of them.

Your professional persona is how the world views you as a professional - a person with valued thoughts and valued skills. Here are the four ways in which professional persona is assessed in class:
1. professional portfolio - A place you can boast all about yourself (and only yourself). Do so in a professional manner, of course. This means no going into endless anecdotes about your personal life. Instead, highlight your industry-relevant skills and achievements. Detailing projects you have worked on is valuable, as it could give potential employers deeper insight into what you're actually capable of.
2. online engineering log - A blazing hub for your professional thoughts on industry-relevant tools and matters. This time, the focus is not on yourself but on those tools and matters. Write for the world, but keep in mind that your target audience is potential employers. They would use this log to assess your ability to communicate effectively through writing.
3. participation in professional social networks - A portal into valuable unending knowledge. This allows potential employers to witness firsthand how you contribute to industry-relevant discussions and how much you care about colleagues and others in the business. Furthermore, this will contribute to your personal and professional growth, giving you a broad range of views that you simply cannot obtain from just working with one or a few companies.
4. participation in open source projects - A concrete showcase of your expertise. Anyone could say they could write code, but the question in most employers' minds is, How well could you write code? Open source projects allow employers to examine actual code you've written and shared with the world. It also gives them an idea of how well you assimilate into a group mold: Do you follow your own standards or the standards of the existing code?
3) Name one way, outside of class, that you are encouraged to enhance your professional persona. How would this benefit you?

Participation in technical societies and activities further enhances your professional persona. Such technical societies include IEEE, ACM, SWE, and Honolulu Coders. Technical activities include chatting on IRC, competing in TopCoder competitions, and practicing information security skills on wargaming websites such as SmashTheStack.org. Participating in any of these is a remarkable benefit because you gain skills and knowledge that you would otherwise not get from reading a textbook or taking a class. Such participation gives you a lot more to talk about during job interviews. In addition, if you win a technical competition, you can add that to your resume and professional portfolio.
4) What are the three questions to ask yourself when conducting a software review? Why is each of these important?

A software review is a vehicle for further growth of technical skills and knowledge. The ability to assess others' coding faults is crucial and helps you better assess your own code. In addition, the code you're reviewing may have functionality very different from your own, even if the two of you are working on the same product. Therefore, in the long run, it's always best to understand at least the key functionality of the code you're assessing, as well as where its vulnerabilities may lie, so as not to repeat the same mistakes and to gain a better understanding of the product as a whole. When conducting a decent software review, here are the three questions to ask yourself:
1. Is my review revealing problems?
Importance: A software review takes time away from writing more code. Why conduct one if it won't accomplish its purpose? That purpose is to reveal problems with someone else's code - problems they easily skip over or normally can't find on their own. On the job, time is money. In general, make sure what you do is relevant and achieves its purpose. Otherwise, you're not helping the business, the other software engineer, the customers, or yourself.
2. Is my review increasing understanding?
Importance: What good is a software review when everything you've written sounds like gibberish to its target audience - the software engineer who wrote the code? All your efforts to improve the system go to waste if you don't take a little extra time to enhance that engineer's understanding of their own code. In the process, you will feel better for having helped someone else and for gaining deeper insight into the product under development.
3. Is my review doing (1) and (2) efficiently?
Importance: Yes, take a little extra time and effort - but not a whole lot! The key to "time is money" in a business is balance. There should always be a balance between the components of your role on a software development team. What good is a software review that takes two weeks to complete when a summary report of new updates to the system is required each week? Use good judgment, common sense, and past mistakes to gauge how long a review of a particular piece of code should take.
5) Why do we do things "the professor's way" in this class?

Part of the answer lies in the introduction to this post: there is little room for freedom when software engineers must conform to standards and follow norms. But of course, without that conformity, chaos would ensue - as we all know, understanding and debugging another person's code is quite different from doing so with your own. For the purposes of this class, one example of "the professor's way" is our use of the Eclipse IDE, which goes hand in hand with our use of Java. Eclipse is used mainly because it runs on multiple OS platforms, it is free and open source, and it integrates easily with other tools (such as Ant, JUnit, and SVN). We code in Java, even though there are so many other useful languages, because it is one of the most supported languages today, with a complete API and a smorgasbord of tools at its disposal. I can very well imagine that if we were allowed to use other IDEs or languages in this class, we would get less work accomplished and acquire software engineering skills and knowledge at a much slower rate. In the real world, the resulting chaos and destruction would be tenfold - much more serious and irreversible. Nevertheless, there's an art to shining within steel confines: follow their way and never take it to scorn. If you learn what you can outside the edges, you're paving the highway and calling it your own. In time, others will recognize you for it.
1) What is the difference between standards and feedback? What are the many ways in which you receive feedback in this class?
Standards and feedback are vital for the three prime directives of open source software. Standards are required to allow people to work together in a way that would eliminate unnecessary unexpected events and technical issues. With standards in place, people can worry less about doing the right thing and focus more on the quality of their communication with others. Feedback, on the other hand, is the key to improvement. Success in the software engineering world is not an end goal but constant improvement. With constant feedback, success can be achieved. While standards provide structure, feedback shapes personal and professional growth. Ways in which we receive feedback in class are from partners or group members, other class members, the professor, the designated mailing list, automated quality assurance tools such as Checkstyle or FindBugs, and the outside online community.
2) List the four ways in which your professional persona is assessed in class and elaborate on each of them.
Your professional persona is how the world views you as a professional - a person with valued thoughts and valued skills. Here are the four ways in which professional persona is assessed in class:
1. professional portfolio - A place you can boast all about yourself (and only yourself). Do so in a professional manner, of course. This means no going into endless anecdotes about your personal life. Instead, highlight your industry-relevant skills and achievements. Detailing projects you have worked on is valuable, as it could give potential employers deeper insight into what you're actually capable of.
2. online engineering log - A blazing hub for your professional thoughts on industry-relevant tools and matters. This time, the focus is not on yourself but on those tools and matters. Write for the world, but keep in mind that your target audience is potential employers. They would use this log to assess your ability to communicate effectively through writing.
3. participation in professional social networks - A portal into valuable unending knowledge. This allows potential employers to witness firsthand how you contribute to industry-relevant discussions and how much you care about colleagues and others in the business. Furthermore, this will contribute to your personal and professional growth, giving you a broad range of views that you simply cannot obtain from just working with one or a few companies.
4. participation in open source projects - A concrete showcase of your expertise. Anyone can say they can write code, but the question in most employers' minds is: how well? Open source projects allow employers to examine actual code you've written and shared with the world. They also give employers an idea of how well you assimilate into a group mold: do you follow your own standards or the standards of the existing code?
3) Name one way, outside of class, that you are encouraged to enhance your professional persona. How would this benefit you?
Participation in technical societies and activities will only further enhance your professional persona. Such technical societies include IEEE, ACM, SWE, and Honolulu Coders. Technical activities include chatting on IRC, competing in TopCoder competitions, and practicing information security skills on wargaming websites such as SmashTheStack.org. Participating in any of these and more is a remarkable benefit because you gain skills and knowledge that you would otherwise not gain from reading a textbook or taking a class. Such participation also gives you a lot more to talk about during job interviews. And if you win a technical competition, you can add that to your resume and professional portfolio.
4) What are the three questions to ask yourself when conducting a software review? Why is each of these important?
A software review is a vehicle for further growth of technical skills and knowledge. The ability to assess others' coding faults is crucial and helps you better assess your own code. In addition, the code you're reviewing may implement functionality very different from your own, even if the two of you are working on the same product. Therefore, it's always best to understand at least the key functionality of the code you're assessing, as well as where its vulnerabilities may lie, so that you avoid repeating the same mistakes and gain a better understanding of the product as a whole. When conducting a "decent" software review, here are the three questions to ask yourself:
1. Is my review revealing problems?
Importance: A software review takes time away from writing more code, so why conduct one if it won't accomplish its purpose? That purpose is to reveal problems with someone else's code - problems the author easily skips over or normally can't find on their own. On the job, time is money. In general, make sure what you do is relevant and achieves its purpose; otherwise, you're helping neither the business, the other software engineer, the customers, nor yourself.
2. Is my review increasing understanding?
Importance: What good is a software review when everything you've written sounds like gibberish to its target audience - the software engineer who wrote the code? All your efforts to improve the system go to waste if you don't take a little extra time and effort to enhance that engineer's understanding of their own code. In the process, you help yourself as well: the satisfaction of helping someone else, and the deeper insight you gain into the product under development.
3. Is my review doing (1) and (2) efficiently?
Importance: Yes, take a little extra time and effort - but not a whole lot! The key to "time is money" in a business is balance. There should always be a balance between every component of your role on a software development team. What good is a software review that takes two weeks to complete when a summary report of new updates to the system is required each week? Use good judgment, common sense, and lessons from past mistakes to gauge how long a review of a particular piece of code should take.
5) Give some examples of why the saying, "It's [the professor's] way or the highway," is quite prevalent in this class and relate it to the real world.
Part of the answer lies in the introduction to this post: there is little room for freedom when software engineers must conform to standards and follow norms. But of course, without such conformity, chaos would ensue - as we all know, understanding and debugging another's code is quite different from doing so with your own. For the purposes of this class, one example of "the professor's way" is our use of the Eclipse IDE, which goes hand-in-hand with our use of Java. Eclipse is used mainly because it runs on multiple OS platforms, it is free and open source, and it integrates easily with other tools (such as Ant, JUnit, and SVN). We code in Java, even though there are so many other useful languages, because it is one of the most supported languages today, with a complete API and a smorgasbord of tools at its disposal. I can well imagine what would happen if we were allowed to use other IDEs or languages in this class - we would certainly get less work accomplished and acquire software engineering skills and knowledge at a much slower rate. In the real world, the chaos and destruction would be tenfold, and much more serious and irreversible. Nevertheless, there's an art to shining within steel confines: follow their way and never take it to scorn. If you learn what you can outside the edges, you're paving the highway and calling it your own. In time, others will recognize you for it.
Thursday, October 20, 2011
Submersion into Subversion
These days, when we hear brand names such as Apple, Microsoft, and Google, we think of one software, one hardware, one product. We hardly stop to think of what lies past the name, past the "one." In fact, a multitude of people make a product, not just one company - and they make it in time for its launch date, continuing to provide updates and support afterwards. Let's focus on the software for now. How is it possible not to transform into some wild green-eyed lost soul when facing millions of lines of code, written not just by you but by others as well? The answer lies in configuration management.
Configuration management tracks the state, or configuration, of a system at any point in time, so that changes made to the system are logged and different versions of the system, old or new, can be used for different purposes. It aims to solve the issues that arise when multiple programmers work on a single software product simultaneously. Even simple compilation or verification issues can take hours or days of debugging before the quick fix becomes apparent; with a configuration management tool, much of this can be avoided by simply checking out the previous version of the system, from before the offending changes were made.
Google Project Hosting and SmartSVN for Mac (TortoiseSVN for Windows) make an excellent duo for amateur developers to kick off with configuration management. Add Robocode or another working project into the mix, along with classmates or friends as committers, and you can host your very own project in no time. Including easily accessible User and Developer Guides is standard practice and lets others know that you intend to treat them and your work on a professional level. The repository checkout URL on Google Project Hosting appears under the Source tab, and a Subversion client (SmartSVN or TortoiseSVN) gives the host and committers commit and add access. The Changes link under the Source tab and the Updates link under the Project Home tab display any changes made to the site.
Overall, this was quite a refreshing learning activity and was none too difficult at all. The only minor hurdle was the initial attempt to get my project files uploaded to the trunk directory of my robocode-bma-[nameofrobot] project. Otherwise, brief communication with my classmates and watching the screencasts made this an easy dive into configuration management.
Tuesday, October 11, 2011
A Prime Example of What It Takes to Not Win at the Robocode Tournament
1. Overview
We have all been recently promoted to a belt of the next higher level and have now been asked to compete in the once-per-course Robocode Tournament. Am I strong enough? Have I trained enough? Do I have what it takes? Well, it was time to take one step at a time and see for myself.
To start off, I used the Robocode Katas Assignment as a foundation upon which to build my competitive robot. Notably, I made use of HitWallEvent from Position02, movement from Position04 and Position05, and firing tactics from Boom03 and Boom04. Of course, bunching all this code up together in a single robot is not ideal. So I then focused my efforts on beating one sample robot at a time.
Had I more time to work on this, my main objective would have been to win the Robocode Tournament. Extra time, however, was exactly what I lacked, so my main objective was set to the minimal requirement: to defeat the eight sample robots, namely SittingDuck, Walls, RamFire, SpinBot, Crazy, Fire, Corners, and Tracker.
2. Design
A. Movement
My robot's basic movement is to head for the center of the battlefield at the start of each round, so as to maximize its chances of getting closer to the enemy robot - the closer it is, the greater its firepower. The only other time my robot moves is when it dodges (moves ahead by 100 pixels), which happens only if two conditions are met: its distance from the enemy is greater than 200 pixels and its energy is less than 40.
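A minimal sketch of that movement logic - the class name, the goTo helper, and its exact math are my own after-the-fact stand-ins, not the code as submitted:

    import robocode.Robot;
    import static robocode.util.Utils.normalRelativeAngleDegrees;

    public class SketchBot extends Robot {
      @Override
      public void run() {
        // Head for the center of the battlefield at the start of each round.
        goTo(getBattleFieldWidth() / 2, getBattleFieldHeight() / 2);
      }

      // Dodge (move ahead 100 pixels) only when both conditions hold:
      // the enemy is far away and my energy is running low.
      void dodge(double enemyDistance) {
        if (enemyDistance > 200 && getEnergy() < 40) {
          ahead(100);
        }
      }

      // A common Robocode idiom: turn toward (x, y), then drive there.
      // Robocode headings are measured in degrees, clockwise from north.
      void goTo(double x, double y) {
        double dx = x - getX();
        double dy = y - getY();
        double angle = Math.toDegrees(Math.atan2(dx, dy));
        turnRight(normalRelativeAngleDegrees(angle - getHeading()));
        ahead(Math.hypot(dx, dy));
      }
    }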
B. Targeting
In the run method, the radar turns continuously inside the while (true) loop (but only if the robot hasn't chosen a target yet). The gun turn is calculated with a simple equation in the trackTactic method: my robot's heading, minus its gun's heading, plus the enemy's bearing. The gun should turn toward the enemy only if it is not already pointing at it. For some reason, however, my robot still turns its gun at times, even though it has been firing continuously at an enemy standing in the same spot.
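In Robocode calls, that equation translates almost directly; this sketch assumes an onScannedRobot handler, which may not match my actual structure. One plausible culprit for the spurious gun turns is a missing angle normalization - without clamping the turn to [-180, 180), a raw difference like 350 degrees sends the gun the long way around:

    import robocode.ScannedRobotEvent;
    import static robocode.util.Utils.normalRelativeAngleDegrees;

    // Inside a robocode.Robot subclass:
    public void onScannedRobot(ScannedRobotEvent e) {
      // heading - gun heading + enemy bearing = how far the gun must turn
      double gunTurn = getHeading() - getGunHeading() + e.getBearing();
      gunTurn = normalRelativeAngleDegrees(gunTurn); // clamp to [-180, 180)
      if (Math.abs(gunTurn) > 1) { // turn only if not already on target
        turnGunRight(gunTurn);
      }
    }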
C. Firing
An enemy robot is fired at as soon as it is detected. My robot's firepower has strengths 1, 2, and 3: the closer the enemy, the stronger the firepower, and the farther the enemy, the weaker the firepower (close shots are more likely to land, so they are worth more energy).
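A sketch of the distance-to-firepower mapping; only the 300-pixel cutoff appears anywhere in this post (under Tracker, below), so the second threshold is a made-up placeholder:

    // Inside a robocode.Robot subclass: pick firepower from distance -
    // full power up close, minimum power from afar.
    double firePower(double distance) {
      if (distance <= 300) {
        return 3;
      } else if (distance <= 600) { // placeholder threshold
        return 2;
      }
      return 1;
    }

From the scan handler, this would be used as fire(firePower(e.getDistance())).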
Beating one robot at a time:
R1. SittingDuck
A no-brainer. This robot doesn't do anything, so there's no strategy required in beating it.
R2. Fire
My code for beating Fire certainly isn't the most efficient implementation, but it works. Fire shaped the logic of my onHitByBullet method, which is called whenever a HitByBulletEvent occurs - that is, whenever my robot is hit by a bullet. The bullet's bearing is its direction in degrees relative to my robot's heading. Basically, if the robot is not facing the bullet dead on, it just moves forward by a few pixels; but if the bullet hits the robot in the face, it turns right by a few degrees before moving forward.
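Roughly, in code - the exact angles and distances were "a few degrees" and "a few pixels," so the 10, 30, and 50 below are stand-ins:

    import robocode.HitByBulletEvent;

    // Inside a robocode.Robot subclass:
    public void onHitByBullet(HitByBulletEvent e) {
      // e.getBearing(): the bullet's direction in degrees, relative to my heading
      if (Math.abs(e.getBearing()) < 10) {
        // Hit head-on: veer right a little before advancing.
        turnRight(30);
      }
      ahead(50); // step out of the shooter's line of fire
    }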
R3. RamFire
While my strategy against the Fire sample robot centers on the onHitByBullet method, my strategy against RamFire centers on the onHitRobot method, since hitting the enemy robot is exactly what RamFire does. I tried firing twice with a maximum power of 3, and to my surprise, it beat RamFire a few times. Not satisfied with two lines of code, I then had my robot fire only if its energy was greater than or equal to the enemy's, and back away otherwise. This tactic did not work well at all: the extra time spent moving backwards let RamFire advance by the same amount, corner my robot against a wall, and beat it to a pulp every time. In the end, I stuck with the two lines of code that beat RamFire at least sometimes instead of never.
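For the record, those two surviving lines would look something like this:

    import robocode.HitRobotEvent;

    // Inside a robocode.Robot subclass:
    public void onHitRobot(HitRobotEvent e) {
      fire(3); // fire twice at maximum power on contact
      fire(3);
    }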
R4. Crazy
Crazy moves around a whole lot, which looks evasive but isn't an especially effective tactic, since my robot was able to win against it most of the time. I used a combination of two robot katas (namely, Boom03 and Boom04) to win against Crazy. The overall strategy: when Crazy is far away, it's still worth shooting at it, but with minimal firepower, since at that range my robot missed like crazy. Stronger firepower was the way to go when Crazy came nearer - the only chance for redemption. Even though the miss rate was still high, getting in a few good hits was worth it.
R5. Corners
The tactics that Corners uses are simple but not quite strategic enough: pick any of the four corners, stay there, and shoot at the enemy. To beat Corners, I focused on the onHitByBullet method, since my robot seemed to be getting hit quite a bit. I noticed, however, that Corners only uses a firepower of 1. So instead of dodging all the time whenever my robot got hit by a bullet, I only made it dodge if its energy was less than 50 (half of the initial energy, which is 100) or if the bullet that hit it had a firepower greater than 1. Again, this is not the best strategy, but it beat Corners on more than one occasion.
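The refined handler, replacing the dodge-on-every-hit logic sketched earlier under Fire - the numbers come straight from the paragraph above:

    import robocode.HitByBulletEvent;

    // Inside a robocode.Robot subclass:
    public void onHitByBullet(HitByBulletEvent e) {
      // Dodge only when weakened or when the incoming shot was heavy.
      if (getEnergy() < 50 || e.getPower() > 1) {
        ahead(100);
      }
    }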
R6. Tracker
Unfortunately, my onHitByBullet method worked for Corners but not for Tracker. Tracker follows the enemy and fires with the maximum firepower of 3 at all times, and it always ended up close to my robot. This meant that even if my robot had more energy and fired at Tracker with maximum firepower, Tracker would win, because my robot would begin to dodge once its energy dropped below 50 - a tactic that made it lose every time. So in the end, I created a boolean variable called "dodge," initially false and set to true only under certain conditions. In Tracker's case, dodge becomes true only if the distance between Tracker and my robot is greater than 200 pixels and my robot's energy is less than 40. But what really got my robot to beat Tracker was relaxing the requirement for the maximum firepower of 3: by allowing it whenever the distance to the enemy was at most 300 pixels instead of 100, my robot gained the upper hand on Tracker.
R7. Walls
I tried what I could against Walls, including an onBulletMissed method that increments a count of my robot's missed bullets. When that count rose above a certain number, the onScannedRobot method would trigger a move to a corner of the map. Unfortunately, before my robot ever got close enough to block Walls within its path, it usually became an open target. I ended up deleting everything to do with Walls. At this point, I don't think my robot can ever beat Walls.
R8. SpinBot
At this point, my robot had enough basic skills to beat SpinBot on rare occasions. The spawning location at the start of a round was one determining factor, as was my robot's heading relative to SpinBot's overall position as it travels in circles. But in short, SpinBot was the victor.
3. Results
I used the two JUnit test classes for Corners and Fire to run 100 battles against each sample robot and calculated what percentage of the battles my robot won.
Sample robots that my robot can reliably beat:
- SittingDuck (100/100, 100%)
- Crazy (75/100, 75%)
- Fire (85/100, 85%)
- Corners (97/100, 97%)
- Tracker (85/100, 85%)
- RamFire (74/100, 74%)
Sample robots that my robot cannot reliably beat:
- Walls (0/100, 0%)
- SpinBot (7/100, 7%)
The detailed explanation of why my design worked for the majority and yet didn't quite work for some is located under the "Design" section, entitled "Beating one robot at a time."
In order to do better, I would improve my design by examining even more cases and setting more boolean variables or running counts on certain items, such as the number of times its bullets miss or the number of times it gets hit by a bullet. If the robot is in a certain state, then a particular tactic would be used instead.
4. Testing
I wrote two acceptance tests, two behavioral tests, and two unit tests. The two acceptance tests simply check whether my robot beats Corners and Fire. The two behavioral tests exercise my robot's movement and firing (there wasn't much strategy to targeting, so I left that part out). Lastly, the two unit tests check whether my robot has dodged at least once in an entire battle and whether it has fired twice upon colliding with an enemy robot. Both JaCoCo and JUnit run successfully.
5. Lessons Learned
In the future, instead of focusing all my energy on coding for one-on-one matches, I would develop strategies for matches with more than two robots. As for software engineering, this project taught me that it is helpful to write about what you do as you go instead of writing it all up after you finish; this also minimizes error. From a software development perspective, I would definitely give myself more time to work on the project.
Wednesday, September 28, 2011
Manual Automation Katas
Early in the 20th century, America moved from hands-on factory labor to newer and better automated processes - a much-needed match for America's growing population and, therefore, its growing demand for supplies. The same concept applies to software: as software packages grew in lines of code, the need arose for automated processes. Enter build tools such as Make and build systems such as Apache Ant. Apache Ant, or Ant (an acronym for "another neat tool"), is a build system that automates the compilation, execution, testing, and documentation of Java and C/C++ applications. Ant works in conjunction with Ivy, which downloads and maintains the libraries required for software projects. Ant is supposedly easy to learn, especially if the "kata" principle - the repeated practice of form - from martial arts is applied.
1st Kata: Ant Hello World
If there's one thing a software developer will never say goodbye to, it's hello. <echo> is to Ant as System.out.println() is to Java. Hello World . . . Hello, My First Ant Build Script. A cinch, a cinch, yes it was.
2nd Kata: Ant Immutable Properties
It's fitting that Ant, being part dependency manager, has immutable properties (defined by <property> tags), as this kata demonstrates: I defined two properties with the same name, assigned them different values (first "1" and then "2"), and printed out the property's value. Any guesses as to which value was "echoed" to screen? The first, "1" - once an Ant property is set, later definitions are silently ignored.
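The whole kata fits in three lines of build script (target scaffolding omitted):

    <property name="foo" value="1"/>
    <property name="foo" value="2"/> <!-- silently ignored -->
    <echo message="${foo}"/>         <!-- prints: 1 -->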
3rd Kata: Ant Dependencies
Ant's dependency management operates in a very straightforward manner: a <target> executes only once its "depends" attribute is fulfilled, and if there is no "depends" attribute, the <target> is free to run immediately. <project> is to Ant as "class" is to Java, and its "default" attribute specifies the main <target> that a build file should run. This dependency management was demonstrated by implementing the following (a build script along these lines appears after the list):
*foo should depend upon bar.
*bar should depend upon baz and elmo in that order.
*baz depends upon qux.
*qux depends upon elmo.
*elmo has no dependencies.
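Something like this reproduces the kata (the actual kata file may differ in names and layout):

    <project name="dependencies-kata" default="foo">
      <target name="elmo"/>
      <target name="qux" depends="elmo"/>
      <target name="baz" depends="qux"/>
      <target name="bar" depends="baz, elmo"/>
      <target name="foo" depends="bar"/>
    </project>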
Before running the build script, I had to guess in what order the targets were called, given that "foo" is the default target. I guessed the following: elmo, qux, baz, bar, foo. Right? Yes. Why? foo can't run without bar, and bar can't run without baz and elmo, but baz can't run without qux, which relies on elmo. elmo is the only one without dependencies, so it executes first. qux runs next, allowing baz to run after it. Now that baz and elmo have been executed, bar runs. Lastly, foo runs (it had been waiting on bar this whole time).
The second task was to change the script and make elmo depend upon bar. I had an idea of what would happen, but I wasn't sure what would print out. Wonderfully enough, Ant knew exactly what went wrong: a circular dependency, since bar would end up depending upon itself.
4th Kata: Hello Ant Compilation
A HelloAnt.java file that prints "Hello Ant" to screen? Easy. Now getting the build file to succeed was all a matter of examining the existing Ant code and modeling my "compile" <target> after it. With the use of the <mkdir> and <javac> tags, I was able to successfully automate the compilation of a Java file.
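The heart of such a "compile" target is just those two tags - the property names here are assumptions, not necessarily the kata's:

    <target name="compile">
      <mkdir dir="${build.dir}/classes"/>
      <javac srcdir="${src.dir}" destdir="${build.dir}/classes"/>
    </target>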
5th Kata: Hello Ant Execution
This kata was the trickiest so far; I couldn't find an example in the Ant build files I downloaded as part of the Ant package, so I had to search for this one. There were a few unsuccessful attempts (using Terminal on a Mac), but I was able to determine that "classname" (the name of the Java class, excluding the .class extension) and "classpath" (the location of the Java class file) were the only <java> attributes needed. I had success when HelloAnt.java was in the default package, but after moving it to the edu.hawaii.ics314 package, it simply would not run. Thanks to Jordan Takayama's blog, I found out that the "classname" attribute should be set to "edu.hawaii.ics314.HelloAnt" and the "classpath" attribute to the "classes" directory. Lastly, this was the first kata to make use of the <import> tag (the "compile" build script had to be imported), which is very similar to the #include directive of C/C++.
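Put together, the working target boils down to something like this - the target name and build.dir property are my guesses, while the classname and classpath values are the ones described above:

    <target name="run" depends="compile">
      <java classname="edu.hawaii.ics314.HelloAnt" classpath="${build.dir}/classes"/>
    </target>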
6th Kata: Hello Ant Documentation
Generating JavaDocs by entering a single command - it doesn't get easier than that. However, before I could get this to work, I had to create an "overview.html" file (in the "src" folder but outside of the package) and a "package.html" file (in the package). I then made sure that any properties used in the <javadoc> tag were specified in my build system before I ran the script file to generate the JavaDocs.
7th Kata: Cleaning Hello Ant
This was pretty straightforward. Again, I modeled my "clean" target after already-existing Ant code. A simple <delete> tag did the trick, deleting the "build" directory whenever "clean" is listed as a dependency.
8th Kata: Packaging Hello Ant
This kata was extremely frustrating at first because I could not, for the life of me, get my zip file to unzip into a single directory. I kept trying it my way, ignoring the existing code in the hope that a few lines of my own would work. In the end, I had to study the existing code (from PMJ-DaCruzer's dist.build.xml file) and model mine after it. I learned that it is perhaps impossible to unzip files into a single folder without first copying them into a directory within a temporary directory.
Figure 1: One simple command (ant -f dist.helloant.build.xml) for distribution
In conclusion, Apache Ant is a very powerful tool, designed to make the distribution of a Java project more user-friendly and efficient. Build systems help us automate distribution, proving to be a bit troublesome when one attempts to go against the flow, but once one learns to adapt to it, the raging river becomes easier to navigate.
Tuesday, September 20, 2011
Mixing Martial Arts with Robots and Coding
So how does one mix martial arts with programming? It's simple: you take the general principles from one and apply them to the other. The general principle in question that is of particular interest to programmers is called the "coding kata." In martial arts, basic form is practiced over and over in simple, repetitive movements called katas. It's no wonder martial arts can appear as graceful as professional ballet and figure skating. This same principle of katas can be applied to coding and is especially useful when one is learning to program in a new language or learning the workings of an open source project.
And the robots come marching in . . . in the form of the renowned Robocode open source project, initiated by former IBM employee Mathew Nelson. With Robocode, one can code a robot in anywhere from under a minute to days on end, depending upon how competitive the robot is intended to be. Nelson's premise for Robocode is that coding games in Java can be fun and efficient. For me, it wasn't - at least, not at first.
Two years ago, I wanted to join a Robocode competition but didn't know how to begin, since I was a Java newbie. If only I had learned about the coding kata back then - perhaps I would have been able to code at least one simple robot. Indeed, this coding kata is a life-changer.
Luckily, I was given the chance to work on Robocode again this year. The best way to build a competitive robot is definitely to start from the basics, as the Karate Kid films show. I learned, to my surprise, that coding robots was not as difficult as I had made it out to be. It was fun learning bits and pieces of trigonometry all over again. What caught my attention was that robots are like little kids - they won't know when to get out of the way of other robots, just as little kids won't know to look left and right before crossing a street.
Overall, I feel good about the assignment (having implemented all behaviors; I just don't feel too good about the commenting) - it was a treat, in fact, and didn't feel like homework. Coding robots is fun, especially when you're learning from others as well, all practicing the same coding form in order to achieve greater things, such as competitive robots, later down the road.
Tuesday, August 30, 2011
FizzBuzz Java Implementation Analysis in Eclipse
It took me approximately 25 minutes to implement (with good citizenship) and verify (line-by-line) the FizzBuzz program. Summary: Rough implementation - 10 minutes; Code-polishing - 15 minutes
Before our first lecture for Software Engineering, I had not programmed in Java for months. Given the task of implementing FizzBuzz in Java, I took about 10 minutes to code it off the top of my head (and partially from what I could remember from last week) and run it, with an at-a-glance "passed" as verification of correct output. At this 10-minute mark, however, I knew I had yet to properly verify my code, figure out how to place it in the proper Java package (edu.hawaii.ics314, in this case), and comment it for good programmer citizenship's sake. With fellow classmate Jason Yeo's help, I discovered the one simple step I had always been missing when placing a Java source file in a custom package: specifying a name in the "Package" text field upon creation of a new Java class. As can be seen in the following figure, I had multiple packages before I made the right one (and later got rid of the others):
Figure 1: Multiple packages before creating the right one
Commenting each subsection of the code took only a few minutes. It was the line-by-line verification of the output that made up the bulk of the 15 extra minutes I spent polishing my code. Each of the "FizzBuzz" and "Buzz" outputs was a given, since multiples of 15 and 5 are hard to miss. To verify the "Fizz" multiples of 3, I mentally added each number's digits and checked that the sum was divisible by 3. In the near future, I shall learn to use JUnit to test my code instead of this error-prone, line-by-line human verification.
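Since Figure 3 below is an image, here is the canonical shape of the program for reference - not necessarily my code line for line:

    public class FizzBuzz {
      public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
          if (i % 15 == 0) { // multiple of both 3 and 5
            System.out.println("FizzBuzz");
          } else if (i % 5 == 0) {
            System.out.println("Buzz");
          } else if (i % 3 == 0) {
            System.out.println("Fizz");
          } else {
            System.out.println(i);
          }
        }
      }
    }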
In conclusion, I learned that writing the code itself is a downhill slide. It's the uphill climb - the polishing of code and learning of new concepts to polish it even better - that takes more time, but in the end, it matters most.
Figure 2: The proper package, carefully-constructed comments, and line-by-line verification
Figure 3: My Java Implementation of the FizzBuzz Program
Sunday, August 28, 2011
Logisim: An Essential for Every ICS 331 Student
An Assessment of Open Source Software against the Three Prime Directives for Open Source Software Engineering
Overview
Logisim is exactly as its name implies: a simulation of logic - of digital logic circuits, to be more specific. Its main function is to serve as a user-friendly interface for designing, building, and simulating digital logic circuits. More information can be found at its SourceForge.net website and Project Home website. In reference to Dr. Philip Johnson's Three Prime Directives, my assessment of Logisim is as follows.
Prime Directive #1: The system successfully accomplishes a useful task.
In less than five minutes, I was able to discover Logisim's most basic function without reading any tutorials by replicating an XOR circuit consisting entirely of NAND gates, as shown in the image below. (For reference, one standard four-gate construction: letting N = NAND(A, B), we have XOR(A, B) = NAND(NAND(A, N), NAND(B, N)); my circuit may have differed.) After accessing the "Help" menu, I discovered a feature that builds circuits automatically from truth-table input - a useful function for building especially complex circuits. This program also features hierarchical circuit design, or the ability to use saved, already-existing circuits as portions of other circuits. In short, this system, without a doubt, successfully accomplishes a useful task (and more).
Figure 2: The "Help" Documentation
Prime Directive #2: An external user can successfully install and use the system.
It took about a minute or so to download, install, and run the program (the latter accomplished by a simple double-click on the OR gate icon labeled "Logisim"). For download and installation support, straightforward and concise "Getting Logisim" directions are found on the Project Home website, upon clicking the "Download" link on the left panel.
As mentioned above under the first prime directive, the interface is simple enough in design that the average user can construct a circuit in under five minutes. Click once (but do not hold down) on a logic component icon, such as a gate; move the mouse pointer over to the dotted area; position the icon; and place it down by clicking once more. Hovering the mouse pointer over the blue markings behind a gate, for example, creates a momentary green circle around the specific marking. This circle indicates a point from which the user can click, hold, and drag a solid line from one icon to another. The entire contents of the "Help" documentation for Logisim can be found offline via the "Help" toolbar within the program or online at the developer's website. In short, this system is extremely easy to successfully install and use.
Prime Directive #3: An external developer can successfully understand and enhance the system.
Bugs, feature requests, and support requests can be reported using the "Tracker" menu on Logisim's SourceForge.net toolbar. The News information under the "Develop" menu includes version release notes; the 2.7.1 entry introduces a mailing list for development updates, which anyone can subscribe to. There is also a separate Logisim Developers' mailing list and Google Group found on the "Support" menu.
Features, bug fixes, and design changes are highlighted on Logisim's Project Home website under "Release History," and Carl Burch, the original developer, also provides his e-mail address in the "Comments" section in case the SourceForge.net resources do not serve a user's needs.
At Logisim's "Develop" page on SourceForge.net, clicking the "browse code" link under "Repositories" displays a page that shows the file/folder name, the number of revisions made to that file/folder, the "Age" (stating the elapsed time since the last revision), the "Author" (who made the last revision), and the last log comment/entry. One way to access the source files is by clicking on the "Download GNU tarball" link. Another way to access them is by decompressing the jar file included with the initial download of the Logisim software.
In regards to the Java source code, I feel that there is a significant lack of comments in each of the ten sample files I opened. Most of the comments are something along the lines of "access methods" or short three- to five-word one-liners; only one of the files actually had a few comments within a method itself. This style of commenting is in no way beneficial to me or, as I can imagine, to anyone else. Hence, despite the fact that some pretty solid support and documentation exists for the Logisim open source software, it does not meet the last prime directive, because its source files are not well-outlined and well-commented. Last semester, I took ICS 331 ("Logic Design & Microprocessors") with Dr. Curtis Ikehara, and about each week he would assign us a breadboard lab - understanding and building circuits was a huge portion of the class. If only I had begun to think sooner about delving into the world of open source software, I would have saved myself unnecessary eraser markings and trouble. Nevertheless, I still recommend that every ICS 331 student take advantage of this software as soon as humanly possible.
Wednesday, August 24, 2011
Blog Title: "Geek Inertia, You're Despicable"
I spent the greater part of two hours trying to devise a creative title. Satisfied with the results? Yes, close enough. This title personifies my lifelong stance on learning about IT. Thanks, Daffy.
~ Proud [Warning: "Computer" is Inferred] Geek