Cool Solutions

What is enough testing?



By:

May 9, 2006 7:12 am


There is always the question: more focus on new features, or more focus on testing and making the product more stable? And the most difficult question: what is enough testing?

Today I was reading an article from Ed Foster on this subject; interesting reading.


Categories: Uncategorized

Disclaimer: This content is not supported by Novell. It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test it thoroughly before using it in a production environment.

5 Comments

  1. By:Flyingguy

    Excellent Question!

    Well, when it comes to an e-mail system, can there really BE too much testing? If you want to find the stuff that breaks it, you only have to read through the forums. People do the strangest things with GW: some things extend the functionality, some things break it, and some things make it just plain act weird.

    The last little fiasco should raise some flags, like, uhmm, “Did anyone test this?” before it went out the door. As a developer, I know how hard it is to test my own stuff.

    Besides a “Wide Area Beta Test,” perhaps the next best thing is to get people, under **serious** non-disclosure, to send you entire systems and then put them through the wringer, following the “exact” instructions you provide, and see if things blow up.

    Don’t test it on a fully patched, very carefully tuned server; test it on a typical server, perhaps one that is a couple of patch revisions back, on some pretty vanilla hardware.

    I have LOTS of customers that can’t afford high-end hardware, but instead opt for a clone box with genuine Intel motherboards, 3Com NIC cards, and whatever IDE drive flavor of the week happens to come around at their favorite vendor. These systems typically cost less than $1000.00, as opposed to a juiced-up Dell or HP department-level server that, by the time you get done adding all the parts, bits, and pieces, ends up costing $5000.00 plus!

    Have your typical in-the-trenches tech do an upgrade the quick-and-dirty way: jump-start the system with the DB schema files, install the new agents, and leave the rest to do manually, later, since very few people have the patience to sit there and watch the system take over an HOUR to copy the distribution directory as it plods through doing whatever it does.

    Just a thought.

  2. By:Tadd Moore

    I think the solution is a lot simpler than this. There was a time when Novell proudly “ate its own dog food.” I think there are a few simple rules that can be established from this principle.

    1. If you’ve documented a “feature” in a product, it had better work – fully and without caveat – on each platform you support, exactly as described. Let your customers decide on a feature based on its description, not its quality.

    2. If you expect other people to buy and use your software and its features, you should lead by example and use it yourselves – all of it, not just selective parts. Novell’s IS&T NOC, while impressive, is a PR disaster in that it uses virtually none of the products Novell ships to perform those duties.

    In reference to the smaller customers mentioned earlier, I certainly question whether people worried about the cost of server hardware can justify the cost of most Novell products…that said, integration testing should complement the “eat your own dog food” testing Novell should be doing anyway. The newest GroupWise beta RC should be put into production, running on the same server as an instance of ZDM, etc.

    Customers have to cram as much software onto a single server as possible – it makes sense to do so…Novell should be testing for these scenarios. If we have to dedicate a server to each service we provide, we might as well just start building Windows servers…

  3. By:Ron van Herk

    Yes Tadd, I think you are right. Eating our own dog food is what gives us the real-life feedback that is needed to increase product quality.
    If I look at my own environment, I am running SLED on one of my machines and I use the latest GroupWise client on the other, so we do eat our own dog food to some extent. I agree we should make more use of this principle; why would we encourage customers to implement a product if we haven’t implemented it in our own environment?
    Novell, however, isn’t using every feature of the product in its environment, just like most of our customers won’t use every feature of the product. We are also not using all the platforms we support for our products. Starting the roll-out of our own products internally just before shipment should be a standard, but it doesn’t replace the testing that needs to happen in the test lab. This, however, gets us back to the question: what is enough testing ;-)

  4. The mantra has been upgraded.

    No longer do we “eat our own dogfood” – we prefer to “drink our own champagne”.

    In all seriousness – Novell internal adoption provides a lot of early and vocal feedback on our products – from Open Enterprise Server to ZENworks to Identity Manager – and everything in between.

    One thing I personally miss is the ‘Novell IS&T beige papers’.

  5. By:Tadd Moore

    That’s a funny analogy, except that it’s true. The amount of champagne one can consume and remain coherent varies wildly from person to person…interestingly, so does the perceived level of quality amongst Novell products (grin).

    Ron – To elaborate on my theme above ever so slightly, what is most annoying as a customer is to attempt to implement a feature only to find it is broken (implying that we read the docs, searched the knowledgebase, spoke to support, and found that we did nothing “wrong” or “unsupported”). Too often these days, we find issues like these without looking very hard. We ask ourselves the question “How can anyone have tested this?” at an increasingly rapid pace.

    I genuinely believe that most of the issues we find like this aren’t due to someone’s incompetence, but are due to oversight/negligence. Meaning, not every feature was considered ‘test-worthy’ perhaps…implying there were testing assumptions that said “Nobody will use it this way.” If that’s true, it shouldn’t be a feature. Otherwise, it needs to be tested.

    Insofar as Novell is not only testing each documented feature but also using those features itself, I think the vast majority of issues like this will be resolved before we (the customers) ever see them. That is what constitutes enough testing.

    Most of your customers are pretty reasonable (I mean, come on…). We understand that software is increasingly complex and that bugs will be found…we just don’t want to feel like we’re the ones doing the product testing (which is what happens when we find a terribly obvious defect that should never have seen the light of day).

    It’s a heck of a challenge, but it’s the good fight, and you have the respect of my entire team for taking it on.

