
Novell Cool Solutions

Monthly Archives: June 2006

GWIA Testing (part 2)



June 29, 2006 3:39 pm

In an earlier post I described how you can take a MIME.822 file from a received internet mail and drop it back into the GWIA.  In this post I want to tell you the kinds of things we look for when we get these.

A common issue is bad boundary definitions separating the parts of a multipart mail.  Normally, near the top of the message you’ll see a line like this:

Content-Type: multipart/mixed; boundary="=__Part6B4800F3.0__="

This basically means ‘there are many parts to this mail, and they are separated by this boundary: =__Part6B4800F3.0__=’.

Every time a new part is needed (attachment, graphic, etc.) the boundary is inserted, preceded by 2 hyphens (--), so:

--=__Part6B4800F3.0__=

We can use this as many times as we want and, in fact, we can declare new boundary definitions inside other boundaries.

When we are done with a boundary definition, or want to end the mail, we use 2 trailing hyphens, so:

--=__Part6B4800F3.0__=--

All of this is governed by RFCs, the internet’s rules: RFC 2821 and 2822 for basic SMTP, and RFC 2045 through 2049 for MIME. So, what problems have I seen with these?
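To make the structure concrete, here is a minimal sketch using Python’s standard email library (the boundary value is just the example string from above; this is an illustration, not anything GWIA-specific):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a two-part message with an explicit boundary
# (the boundary value is the example string from above).
msg = MIMEMultipart("mixed", boundary="=__Part6B4800F3.0__=")
msg["Subject"] = "boundary demo"
msg.attach(MIMEText("first part"))
msg.attach(MIMEText("second part"))

raw = msg.as_string()
# Each part is introduced by the boundary preceded by two hyphens...
assert raw.count("--=__Part6B4800F3.0__=\n") == 2
# ...and the mail ends with the boundary carrying two trailing hyphens.
assert raw.rstrip().endswith("--=__Part6B4800F3.0__=--")
```

Printing `raw` shows the boundary lines exactly as they appear in a MIME.822 file.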

I have seen mails with boundaries that are too long; the RFC allows up to 70 characters, so boundaries that exceed this violate the RFC.  Doing a global search and replace on the boundary will allow the mail to be processed – any string can be used as long as it is unique within that mail.
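That search-and-replace fix could be sketched like this (a hypothetical helper, not GWIA code): find the declared boundary, and if it breaks the 70-character limit, replace it everywhere with a short unique string:

```python
import re
import uuid

def fix_long_boundary(raw: str, max_len: int = 70) -> str:
    """If the declared boundary exceeds the RFC limit, replace it
    everywhere with a short unique string so the mail can be reprocessed."""
    m = re.search(r'boundary="?([^";\r\n]+)"?', raw)
    if not m:
        return raw
    boundary = m.group(1)
    if len(boundary) <= max_len:
        return raw
    # Any replacement string works, as long as it is unique in the mail.
    new = "=_fix_" + uuid.uuid4().hex
    return raw.replace(boundary, new)

# Hypothetical message with an 80-character boundary (too long)
long_b = "X" * 80
mail = (f'Content-Type: multipart/mixed; boundary="{long_b}"\n\n'
        f'--{long_b}\nbody\n--{long_b}--\n')
fixed = fix_long_boundary(mail)
assert long_b not in fixed
```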

I have seen mailers either not using the final boundary or missing the last two hyphens, so GWIA thinks there is more to come in the message. I have seen mailers putting content after the final boundary, so it doesn’t appear in the client.  The thing these all have in common is that they are not RFC compliant.
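Those two violations are easy to spot mechanically. Here is a rough heuristic checker (a sketch, not a full MIME parser) that flags a missing close delimiter and content trailing after it:

```python
def check_final_boundary(raw: str, boundary: str) -> list:
    """Flag the two violations described above - a rough heuristic,
    not a full MIME parser."""
    problems = []
    close = "--" + boundary + "--"
    if close not in raw:
        problems.append("final boundary missing or lacks its two trailing hyphens")
    elif raw.split(close, 1)[1].strip():
        problems.append("content after the final boundary (clients will not show it)")
    return problems

b = "=__Part6B4800F3.0__="
ok = "--" + b + "\npart one\n--" + b + "--\n"
no_close = "--" + b + "\npart one\n--" + b + "\n"  # trailing hyphens missing
assert check_final_boundary(ok, b) == []
assert check_final_boundary(no_close, b) != []
```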

Does anyone else have good examples?  Messages that they are having problems with?



Categories: Uncategorized

order and chaos



June 29, 2006 3:10 pm

versions

we got a lot of feedback after we announced the availability of Designer 1.2 RC1 and RC2. most of the comments came from customers who were confused that we released a 1.2 version as an update to a 2.0 M2 milestone. here is the reasoning behind the version order and chaos:

Designer releases iteratively. this means at the beginning of each major release development cycle we define what the scope for that major release is. a major release is 1.0 or 2.0, etc. this initial scope is just rough idea gathering and goal setting, to say something like: in Designer 2.0 we want to see Version Control and Staging as the two major new feature areas, and we want to improve our overall user experience, etc.
then we break down this rough scope into milestones. this is where the iterations come in. so we broke Designer 2.0 into 7 milestones, where M7 will be 2.0. we then get loaded at the beginning of each milestone and scope that milestone out in detail. usually, besides our rough goals, we get A LOT of additional feedback from our customers, which we try to absorb into one of the milestones. then we implement the milestone and get loaded for the next. we roughly fix about 500 bugs per milestone and get about 100 enhancements in.

now what happened with 2.0 M3 and 1.2:

at the time we scoped out Designer 2.0, there were no official plans for another official Designer release before 2.0. that’s why we made the decision to start working on something called 2.0 at the beginning of this calendar year. but then, as things evolved, it turned out that we had to release another supported release of Designer together with IDM 3.0.1 (SP1). now since we develop iteratively, we don’t branch our code base (at least we try not to branch if at all possible). so we basically had to put SP1-specific changes into our running 2.0 trunk. looking at what’s new in SP1, it really would not make sense to release a Designer 2.0 just for Spitfire SP1, and we would not have any of our 2.0 main features in by then anyway. so we decided to do a marketing version 1.2 and release 2.0 M3 as 1.2 to the public, so that it makes sense to the end customer who got 1.1 with IDM 3.0 and will now get 1.2 with IDM 3.0.1.

we are thinking of including a build number in the future so you could see what our “real” version is versus the marketing version. really, don’t put too much weight into the marketing version. it’s just a name.



Categories: Uncategorized

Ratings – part 2

By: coolguys

June 29, 2006 2:32 pm

I blogged earlier about the ratings system.

Here is your chance to decide on the look and feel of the site.

Which style do you prefer:
[Image: SNAG-0256.png]

Currently the bars are enabled – I’ll turn on the stars tomorrow to show the difference.

Just leave your comments below.

Written at: Draper, UT



Categories: Uncategorized

bug=bad. many bugs=bad product?



June 29, 2006 1:36 pm

in my last post i stated that we fixed over 900 bugs since designer 2.0m2. after posting, i realized that i probably painted an inaccurate picture of what 2.0m2 was and of the overall product quality. at least for people who are not familiar with the way we use bugs to manage the designer project, it may be difficult to correctly interpret my statement. designer is high quality, extremely customer focused, and constantly evolving.

most of novell engineering uses bugzilla as a development process, problem and enhancement tracking tool. bugzilla holds records – referred to as bugs – that are assigned to a responsible person and categorized into products and feature areas per product. each of these records (or bugs) has a severity and a priority assigned. the severity is set by the reporting party and is meant to indicate how important the record is, whereas the priority is assigned internally and determines the attention (importance) the record gets.

the severity field divides the records into two different groups: problems and enhancement requests. if a record’s severity is set to “enhancement” it describes a non-existing but desired functionality. all other levels assume a problem and indicate the importance of the problem.

now what makes the difference between an enhancement request and a bug? at first this seems obvious, but very often it is not: designer does not currently provide any team enablement features like version control. a record (bug?) which asks for team enablement and version control would thus be given a severity of “enhancement”. if designer were going to destroy data during a deploy, a record (bug?) reporting this issue would probably be assigned a severity level of “major” or “critical”. but there is a lot of space between these two examples. imagine designer behaves in a way that is unexpected (wrong?) for you. if you reported this and requested a change of behavior, would that be an enhancement request or a problem report? difficult to say.

we in the designer team use bugs for release planning, too. whoever has an idea enters a bug. often the severity level doesn’t even matter. then, when we get loaded to do our next milestone, we pull up all the open bugs and prioritize them. this way we make sure no idea is forgotten. you as a customer can do the same. our bugzilla system is open to you. have an idea or a problem? go to bugzilla.novell.com and tell us about it.

back to my original post: over 900 bug fixes does not mean we had 900 critical problems we fixed. it means that we worked down a queue of over 900 records in our bugzilla database that describe our current release. of the over 900 bugs, approx. 110 were marked as enhancement requests right away. this leaves us with over 790 records. most of these are bugs reported by our internal test team. they try every day to break our code and report every success they have. this way we make sure what we ship is of shipping quality. so 2.0m2 was not a milestone that had over 900 problems; rather, 1.2 (2.0m3) is a release with loads of new features, many, many adaptations requested by you, and many, many, many problems fixed before they were even able to hit you, the customer, because they were found by our internal test team.



Categories: Uncategorized

ZENworks Design Series :: Application Packaging (Part 3)

By: coolguys

June 29, 2006 11:55 am

Now I want to hear from you. So far we’ve covered (from a pretty high level) using online resources such as AppDeploy and MSI Wisdom to get the packaging job done… and we all know there is a wealth of information out there ready to be shared.

I’d like you to now share some of your ‘hidden’ resources that we can take a look at in greater detail. The community is watching.

Cheers.



Categories: Uncategorized

Cool Blogs post ‘ratings’

By: coolguys

June 29, 2006 11:42 am

We added the ability to rate each and every post.

The idea is to let you give some feedback on the quality of the blog article – rather than ‘I agree’ or ‘I disagree’.

[Image: ratings-1.png]


Under each post title you will see a ratings bar – with some information on the votes so far. The example above shows that Ken Muir’s post on GroupWise security had three votes with an average rating of 3.33.

To vote, move your mouse over the rating bar, select the score for the post, and click. Remember the weighting is from 1 through 5, with a higher number representing a better rating.

[Image: ratings-2.png]

Once you click, the magic of Cool Blogs comes to life and your vote is registered. It’s all shiny new Ajax – so no need to refresh.

[Image: ratings-3.png]

We will be using these ratings to show the ‘best of Cool Blogs’ and the ‘Top Ranked Bloggers’ in the near future.

Let me know how you find these ratings.

Written at: Draper, UT



Categories: Uncategorized

More Global Momentum on ODF



June 29, 2006 9:12 am

The ODF Alliance, of which Novell is a founding member, put out a press release today highlighting global developments in support of ODF. It cites recent decisions by governments in Belgium, France and Denmark to promote use of ODF as a standard for document exchange in those countries. The ODF Alliance also announced nice progress …



Categories: Expert Views, PR Blog

ZENworks Design Series :: Middle Tier Fault Tolerance

By: coolguys

June 28, 2006 8:04 pm

This question has been coming up a lot lately… again!! So I thought I would address this here and see what the masses are dealing with.

What is the best way to build a fault tolerant Middle Tier Server system that also addresses load balancing and redundancy?

First things first. If you have thousands of users, and they are all pointing to a central Middle Tier Server farm, then you need to front the farm with an L4 switch (if possible), or wrap Microsoft Load Balancing Services around them (if you are running the Middle Tier in IIS on Windows 2000 Server or Windows Server 2003). I highly suggest an investment in an L4 switch, simply because it is more reliable than Microsoft Load Balancing Services or any other software-based load balancing service. You can also use DNS Round Robin, but this sucks when one of the nodes goes down… DNS Round Robin will still attempt to connect users to that node. In addition, by using an L4 switch you are able to introduce load balancing to the equation as well… very nice!!

This all being said, a fault tolerant Middle Tier Server system is not fault tolerant unless you are pointing each of the Middle Tier Servers to multiple eDirectory sources. I suggest you point each of the Middle Tier Servers to 2 or 3 eDirectory servers so that in the event one of the eDirectory servers goes down, you can still authenticate your users and get access to the ZENworks infrastructure. You can easily configure this using NSAdmin – the administrative interface for the Middle Tier Server software.

People have also asked whether or not the Middle Tier Server runs in a cluster. This is NOT supported. The main issue is that when a node fails over, the connections are not moved to the new node. This means that the service can fail over, the connections would be cleared, and the next time the launcher refreshes the users will be prompted to log back in to the Middle Tier Server. Stinky.

Lastly, I want to introduce a number of factors that govern the scalability of the Middle Tier Server. You need to consider all of these things when trying to figure out how many Middle Tier Servers you will need for the number of users you support. So… remember this:

  • Speed of the processor
  • Multi-processor server(s)
  • Physical memory
  • Speed of the NIC
  • Speed of the LAN/WAN
  • Staggered login times (let’s be real… everyone doesn’t log in at the same time)
  • Staggered launcher refresh intervals (use this setting in large scale environments)
  • Frequency of distributions (applications, policies, etc.)
  • Whether or not applications are being force cached (if they are, then the content goes through the Middle Tier Server)
  • Whether or not you are accessing application data via CIFS or NCP
  • Where are your identities stored? Way back when ZENworks for Desktops 4 was introduced, I ran a pile of tests in the Super Lab over and over again. Each time we would move services around to find the optimal placement. Obviously, when you are running eDirectory, Active Directory, and application file services (the ZENworks application content) on the same server, that server becomes seriously taxed under heavy load. In large environments, ensure that eDirectory and Active Directory are running on dedicated servers, and place your data/content on a file server that is NOT one of the directory servers. With that said, keep this in mind:
    • Location of eDirectory and connection speed
    • Location of Active Directory and connection speed
  • Are policies being delivered using ZENworks or Active Directory?

In closing, keep these factors in mind along with the recommendations on how to make your Middle Tier Server infrastructure fault tolerant and you will be all set. Design your stuff in your lab, think through how many people you are going to support, what you are delivering, where you are delivering it to, and how you will be delivering it. Follow these rules of thumb and you should sleep well.

Comments, suggestions, experience, etc… it’s all welcome!!  :)

Cheers.



Categories: Uncategorized

GroupWise FTF followup



June 28, 2006 3:44 pm

I started responding to the comments in the other thread, but when my response got longer than the original post I decided to just create a new post.

Where to start? I think the first place is the Beta status of the FTFs – technically they are Betas which is why they contain Beta strings and appear in the Beta patches section, not the public patches section.  They are considered Beta for the reasons in my next paragraph.

How much testing goes into an FTF? Historically it’s been very limited.  I assume lots of you know about the engineering process but for those that don’t let me tell you what happens (at least in GW at Novell).  When we ship a product we ‘branch’ the code.  That is to say we create a new project in our code control system.  One branch will become the new product in development and the other branch is for the SP to the product we just shipped.  The only code changes going into the SP branch are fixes to issues that customers report – none of the code for new features in a future version go into the SP branch.
Often these changes are single-line, or even single-character, changes and, therefore, are considered relatively low risk.  I should say this is not always the case – I don’t want to deceive you.
So, back to the testing question – the changes are sanity checked, the developer tests the change, the NTS employee (me) tests the change, and often the reporting customer tests the change.  If all goes well, then the change is committed to the code control system.  So, the fix is tested, but we typically do not perform a full top-down regression test.  Time between FTF updates does not permit this.  The other thing that I need to say is that FTFs are interchangeable – the backend is just a set of NLMs/RPMs that you can swap out at will – if the FTF doesn’t work, just swap it out.  I have never seen corruption issues caused by FTFs in my 7+ years on GW, if that allays anyone’s fears.
Periodically we will release a build of the SP branch as FTFs.  In the past we released just the files that changed, à la NetWare, but that got way too complicated to test and be sure that something else wasn’t broken by the build mismatches.  We, therefore, decided to release each component as a set even if some files hadn’t changed.  More recently we decided to release all components simultaneously, so that issues around shared files didn’t crop up (GWENNx in particular).
Next I need to address the supportability of these patches.  I can only speak for GW, and maybe Ron wants to pipe up for ZEN policy, but the GW team will NEVER refuse you support for a system running FTFs.  I have worked in both of Novell’s GW support centers and this is the case in both – if a GW engineer tells you differently, then point them in my direction.  On the other hand, we also don’t really expect customers to go rolling out FTF code enterprise-wide just because it’s available – we expect some personal responsibility on the part of our customers, whether the rollout is an FTF or a fully fledged support pack.  My personal recommendation is very much ‘if it ain’t broke, don’t fix it’ – i.e., if you aren’t suffering from one of the problems fixed in the FTF, then don’t install it.

Lastly, time between FTF releases and full patch releases.  We normally have a general idea of when we are targeting an SP release, and it’s normally timed to coincide with the Consolidated Support Pack release.  This means that there are many months between SPs and we know that in advance – that’s why we release the FTFs – to provide relief in the short term.  OK, off home now – sleep well :)



Categories: Uncategorized

Time for hands-on evaluation



June 28, 2006 1:17 pm

You’ve been reading about it, and maybe even writing about it. But if you haven’t had an opportunity to take it for a spin, here’s your chance. Preview versions of SUSE Linux Enterprise 10 products, for both the server and desktop, are now available for download and evaluation. The news brief is here, and the software …



Categories: Expert Views, General, PR Blog


© 2014 Novell