Jotting #11: Apple MacBook Pro – Still Cheap?

2008-03-01

The perennial question: Should I buy an Apple or a Windows-based laptop? The question came up recently for me since my old systems were getting a tad too old, and I also happened to come across this blog (a bit outdated of course by now, see below).

Some basic facts as they are available today (March 2008) in the table below. I chose a Dell since it is widely available and a very common and representative competitor.

Category | MacBook Pro | Dell XPS M1530
Processor | 2.4 GHz Core 2 Duo (?, 3 MB L2 cache) | 2.4 GHz Core 2 Duo T7700 (800 MHz FSB, 4 MB L2 cache)
Memory | 2 GB DDR2 SDRAM @ 667 MHz | 3 GB DDR2 SDRAM @ 667 MHz
Display size | 15.4 in | ditto
Graphics card | GeForce 8600M GT, 256 MB SDRAM | ditto
Resolution | 1440 x 900 | 1280 x 800
Hard drive | 200 GB @ 5400 rpm | 320 GB @ 5400 rpm
OS | Mac OS X v10.5 Leopard | MS Windows Vista Home Premium
Price (£) | 1299 | 926

The Dell version has been customised slightly by choosing a faster matching processor.

This is the best match in terms of hardware components I could find. They are quite close: the Dell scores better in the memory and hard-drive specs, the Apple in the resolution.

Hence, we are down to Mac OS X vs Windows Vista, the various bits of pre-installed software, the quality of the components (hard to judge from the outside) and the very personal matter of taste.

Currently, I personally just can’t get myself to pay a 40% premium for the Apple. One problem for me is that Apple seems to update its hardware specs only infrequently while others constantly upgrade or adjust prices (downwards). And so the Apple laptop starts to look more and more expensive over time. I will keep looking and comparing …


Jotting #10: Branching Models and all that

2007-11-21

Proper software configuration management (SCM) is often treated like an unloved child in software projects. I am not talking just about committing code into a repository, but about creating reproducible releases, merging code between code lines and all those things that are sometimes boring but necessary to provide proper control over your team’s coding efforts.

Let me tell you what we did in our current project; I learned a lot during it, mainly since I had to manage the releases most of the time. In the end, the whole process is less scary than I thought; I have even become quite relaxed about it, and merging code is now merely a bit boring.

I must stress that you must adapt your own release process to fit your circumstances.

A few words on our software and its installation, as it has some bearing on our choice of branching model. The code is a Java Web Start application, written using Swing, connecting to enterprise beans on a server. This implies that whenever a user starts the application he is forced to work with the most recently installed version; i.e., there is only ever one release in production.

Here is our process to release (assuming we are currently at production version 2.2.1 and use the Linux convention for numbering):

  1. A set of features is defined for the next release.
  2. When the features are implemented a release branch is created and named (e.g., release-2.4).
  3. Part of the team, the release team, completes the new release code, incl. final configuration, acceptance testing, release notes, etc.
  4. The release team releases a candidate for acceptance testing on a test server (tagged 2.4.0rc1).
  5. Bugs in acceptance testing are fixed and a new candidate (tagged 2.4.0rc2) is released. This continues until the code is accepted.
  6. The code is released (tagged 2.4.0).
  7. Any bugs that surface in the released code are fixed on the release branch, repeating steps 4-6, and the fixed version is released (tagged 2.4.1, 2.4.2, …)
  8. After each release (candidate), the code changes are merged back into the development line (merges are tagged appropriately).
  9. The development team starts to work on the next set of features on the development line, repeating steps 1-9 for the next release (2.6 or 3.0).
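The “Linux convention” mentioned above means that even minor numbers (2.4) denote release lines while odd ones (2.5) mark the development line. That rule can be sketched in a few lines of Java; the class and method names here are my own illustration, not part of our actual build scripts:

```java
// Classifies a version string according to the Linux-style numbering
// convention: even minor numbers (2.4.x) belong to release branches,
// odd minor numbers (2.5.x) to the development line.
public class VersionLine {

    public static boolean isReleaseLine(String version) {
        String[] parts = version.split("\\.");
        int minor = Integer.parseInt(parts[1]);
        return minor % 2 == 0; // even minor -> stable release branch
    }

    public static void main(String[] args) {
        System.out.println("2.4.1 on a release line? " + isReleaseLine("2.4.1")); // true
        System.out.println("2.5 on a release line?   " + isReleaseLine("2.5"));   // false
    }
}
```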

(If I have time I may add a picture of this process.)

What are the advantages that we obtained from this process?

  • We have a clear, easily understandable and reproducible release process.
  • There is no significant code freeze period when preparing a release.
  • The process allows the team to allocate time efficiently and in parallel.
  • The process is quite agile and flexible; there is minimal burden on developers as many can continue working as if unaware of the release process.
  • The code can be placed under continuous integration at all stages.
  • We can reproduce production releases at any time quickly.
  • Even during the testing process for a new release (e.g., 2.4.0rc2), we can release an emergency fix for the current production version (e.g., 2.2.1 to 2.2.2) without major upheaval.

It is also obvious that our installation allows us to choose this branching model since we never have more than three versions out there: the current production (e.g., 2.2.1), the current release candidate (e.g., 2.4.0rc2) and the development line (named 2.5).

If your circumstances are different (e.g., customers paid for different feature sets), you will have to come up with a different branching model that fits your needs.

Some recommendations:

  • Think early about the branching model and release process suitable for your project; at least no later than when the first feature set is complete
  • Learn and use some of the branching patterns (see references)
  • Merge early and often (before the deltas become too large and are hard to merge)

Don’t be afraid of branching and merging; once you understand the process, its limitations and benefits, everything becomes much easier.

References:

PS

Eric Raymond has started a page on version control systems. Worth keeping an eye on.


Jotting #9: To be checked or Not to be checked – That’s the Java Exception

2007-11-13

The question of whether you should use checked or unchecked exceptions in Java always arouses some emotions (see this recent example). Other languages don’t offer that “choice”, so this is a purely Java problem. Elliotte Rusty Harold posted his rules on the problem, introducing so-called external and internal exceptions, some time ago in July. I think these are too complicated and unnecessary, but I promised myself to write about it.

Let me come out with my point of view right away: avoid checked exceptions; stay away from them. They are not worth it. OK, so that’s out. But some clarifications are in order:

  • I do not argue against declaring exceptions in a method’s signature:
    • In Java you can list both checked and unchecked exceptions in the method signature.
    • You should document the most important contract violations that the method will reject.
    • Better still, document the contract in the spirit of Bertrand Meyer’s Object-Oriented Software Construction.
  • I am not asking for rather obvious requirements, such as arguments not being null, to be documented in detail.
  • I argue against the rule that checked exceptions have to be declared in every method of the calling stack unless it catches the exception.

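To illustrate the first two bullets: an unchecked exception can be listed in the throws clause and the javadoc as pure documentation of the contract, without forcing callers into try/catch. A minimal sketch; the Account class and its contract are my own invented example:

```java
// An unchecked exception can still appear in the signature and javadoc,
// documenting the contract without burdening every caller with try/catch.
public class Account {

    private long balance;

    public Account(long balance) { this.balance = balance; }

    /**
     * Withdraws the given amount.
     * Pre-condition: 0 <= amount <= balance.
     *
     * @throws IllegalArgumentException if the pre-condition is violated
     */
    public void withdraw(long amount) throws IllegalArgumentException {
        if (amount < 0 || amount > balance) {
            throw new IllegalArgumentException("pre-condition violated: amount=" + amount);
        }
        balance -= amount;
    }

    public long balance() { return balance; }
}
```

Note that callers compile without any try/catch even though the exception is declared; the declaration is documentation, not compulsion.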
What is the main purpose of an exception? It’s to indicate a contract violation (I think Bertrand Meyer’s language is the most appropriate to use).

Does the method raising the exception care about the context in which it is called? No, the method provides its own context: its contract. The rules of ERH about internal and external exceptions make no sense whatsoever in this model. Whether I can write to a file (object) or not, whether the provided argument is of the wrong type or not, whether the object’s state is within the method’s control or not, is irrelevant and is not the question that is being asked.

The method asks a much simpler and more general question: Do you obey my pre-conditions? It does not care what the intent of the caller is, what the calling context or the semantics are. Only the programmer may know that. No, the method just promises to fulfil its side of the contract (post-conditions) if the pre-conditions are fulfilled.

If a file instance is passed as an argument, the file is always outside the control of the called method. If it cannot write to the file because the file doesn’t exist or is write-protected, the method doesn’t care. It just says: You’re not fulfilling the pre-conditions of my contract; here is my exceptional response.
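The file example might look roughly like this, assuming an invented ReportWriter class: the method checks only its pre-condition and answers a violation with an unchecked exception, regardless of why the caller handed it an unusable file:

```java
import java.io.File;

// The method does not classify the failure as "internal" or "external";
// it simply rejects a call that breaks its pre-condition.
public class ReportWriter {

    /** Pre-condition: target exists and is writable. */
    public void writeTo(File target) {
        if (!target.exists() || !target.canWrite()) {
            throw new IllegalArgumentException(
                "pre-condition violated: " + target + " is not a writable file");
        }
        // ... actual writing elided ...
    }
}
```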

The whole distinction between external and internal exceptions thus no longer makes any sense (and you could always argue ERH’s examples in different ways depending on context). So we have established that any distinction between checked and unchecked exceptions is based on some arbitrary classification of the caller that cannot be maintained inside the method throwing the exception. (You can easily check this for any other model; the checked-unchecked cases are always based on such classifications.)

So we are left with the question of whether the compiler-enforced rule on checked exceptions provides any benefit, and we have to ask:

  • Is it beneficial that every calling method repeats the exception in its signature?
  • In a proper object-oriented design, who carries the information about the problem?
  • Where is the exception being handled and resolved?

The first question should remind you immediately of DRY: don’t repeat yourself. The answer to the second is obviously the exception itself, not the method signature. And the final one is more based on the experience that, at least in applications with a user-interface, it is standard practice to throw the exception back to the user rather than to resolve the problem programmatically. There are some exceptions (no pun intended) like in message-based applications where the queue may try to re-send the message a few times and finally succeed, adding some robustness, but ultimately the problem is thrown back to the user (or left on the dead-letter queue!).

Developers of embedded systems could tell you how hard it can be to resolve exceptions adequately to prevent the system from crashing. Sometimes they just reset the system and log the fault; sometimes they just skip the faulty data and continue (and interpolate, for example, when taking sensor readings). (You may want to read more on this issue and exception design & patterns in the recent series of IEEE Software articles by R Wirfs-Brock.)

In the end we are left with the conclusion that Java’s checked exceptions were an experiment with good intentions, but one that ultimately failed: they provide no significant benefit and instead gave rise to a new anti-pattern, the silently caught exception a la catch(Exception e){ /* do nothing */ }.
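For completeness, here are the anti-pattern and the usual honest alternative (wrapping the checked exception in an unchecked one and letting it propagate) side by side; SwallowDemo is an invented illustration, not code from any real project:

```java
import java.io.IOException;

public class SwallowDemo {

    // The anti-pattern: a checked exception silently swallowed just to
    // satisfy the compiler. The failure disappears without a trace.
    static void swallow() {
        try {
            throw new IOException("disk failure");
        } catch (IOException e) {
            /* do nothing */
        }
    }

    // The honest alternative: wrap the checked exception in an unchecked
    // one and let it propagate to wherever it is reported to the user.
    static void rethrow() {
        try {
            throw new IOException("disk failure");
        } catch (IOException e) {
            throw new RuntimeException("could not write report", e);
        }
    }
}
```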


Jotting #8: Voting and Requirements

2007-10-08

Recently, I attended a requirements course presented by Ian Alexander. When talking about prioritising requirements, he mentioned voting as a possible mechanism, and we did a little exercise/experiment. The outcome wasn’t quite what I had expected.

We used a simple business problem throughout the course, so by the time we did this exercise we were all reasonably familiar with this little case (or at least we thought so, being all software engineers). We voted on several requirements, and predictably some showed a very clear pattern: we all agreed that those requirements were essential (i.e. should go into the first release) or thought of them as luxury (don’t do them).

However, on some requirements the votes were much less decisive. In Alexander’s opinion this should be a clear warning sign that different assumptions are lurking behind the votes and that it would be worthwhile to dig deeper, since more requirements may be hidden in those tacit assumptions.

I had never before considered using voting in requirements gathering, but this little exercise proved a nice eye-opener. I still have to work out how it links to my previous jotting.


Jotting #7: Groups and Wisdom

2007-10-08

A softer topic for a change. I am currently reading James Surowiecki’s The Wisdom of Crowds, a very interesting book about why many non-experts can beat the experts not just once but consistently. It puts some humility into all of us who consider ourselves experts.

Especially chapter 9, Committees, Juries, and Teams: The Columbia Disaster and How Small Groups Can Be Made To Work, got me thinking. We all participate in groups, ad hoc in meetings or longer-term in teams, yet we have this prejudice that group work is often inefficient: the design-by-committee stereotype. Hence it is worthwhile to quote Surowiecki’s conclusions from this chapter:

[firstly] … group decisions are not inherently inefficient.  … [secondly] there is no point in making small groups part of a leadership structure if you do not give the group a method of aggregating the opinions of its members. If small groups are included in the decision-making process, then they should be allowed to make decisions. If an organisation sets up teams and then uses them for purely advisory purposes, it loses the true advantage that a team has, namely collective wisdom.

(It is important to understand under which conditions and for which problems crowds can be wise.)

This is strong stuff, and not just fancy thinking: it has been tested in various experiments. This is social science at its most exciting and worthwhile.


Addendum: miniSPA 2007 – Sessions I didn’t attend …

2007-08-01

Of course, I couldn’t attend all sessions as they were held in two parallel blocks; a colleague and I split up to cover both tracks (maybe he will write about them as well).

But others did and blogged about them.

Serious JavaScript

Test Driven Development with JMock2

Design by Contract in Java


Jotting #6: miniSPA 2007 – Scoping Game

2007-07-29

In this session we played a little game using a pot of several million pounds (fake, of course) and some features, to decide whether and when to implement a re-usable feature across several similar products.

Note: I don’t like the term re-usable very much. Either something is usable or not. If it is a usable API and I can inject system-specific dependencies, it’s still not re-usable but simply more usable and well refactored.

In many ways, this is a variation on portfolio management and by adding the necessary input parameters (likely costs and revenue streams, uncertainties and probabilities) it becomes mainly a mathematical exercise to work out the numbers. In the end, you still need to make a proper decision which products you want to advance and that often requires some gut instinct. At least the exercise can weed out the worst choices.

Nice exercise, but I don’t think I took much home from it.