1.7 Multithreaded Programming

Multithreading is common in multiclient distributed applications. Typically, a client-server NLM uses a different thread group for each client to which it provides service. This allows the NLM to service multiple clients concurrently. In addition, NLM applications often establish specialized threads, such as display, input, and communication threads. These threads can be used, for example, to accept commands from the server console, receive incoming requests, and send outgoing replies.

An efficient way to handle multiple clients is for the NLM to create a new thread for each client it services. As the NLM receives client requests, it creates a new thread to process each request. Then, after the request is serviced, the thread runs to the end of its initial procedure and is terminated.

There are cases where the method mentioned above would not be efficient. For example, if you are servicing 250 users and have 250 threads with 8 KB stacks, then just the stacks of these threads take up 2,000 KB of memory. In this case you might want to establish a pool of threads to handle multiple clients. As the NLM receives client requests, it selects a free thread to process the request. After the thread processes the request, it returns to the pool of free threads.

Using multiple threads has many advantages. It allows you to:

Simplify code through modularization
By separating processes into threads, programs become easier to read, maintain, and update. Multithreading also relieves the developer of having to write task-switching logic.
Increase throughput
By dividing the NLM into multiple threads, you can reduce the amount of time the CPU remains idle. Instead of blocking during I/O requests, the OS switches control to another thread and more fully utilizes the CPU.
Enhance response time
Because the server is able to switch between threads (thus preventing a single thread from monopolizing the CPU), clients receive faster replies to their requests. A lengthy I/O-intensive request from one workstation does not preclude the completion of a smaller request from another workstation.
Develop multiple contexts through thread grouping
A thread group consists of one or more threads, as defined by the programmer. Threads in the same thread group share the same context, such as the CWD and current connection. This provides the programmer with shortcuts, such as the ability to use the CWD instead of specifying the full pathname.

The advantages of a multithreaded application are increased performance and efficiency. In a multithreaded application, processes get more equal time to use system resources. Additionally, any thread can process separately from other threads. For example, a process can update files in the background while a foreground process produces data that needs to be written to those files. Similarly, one thread can be used to interact with a client process while performing complex, time-consuming computations in the background.

The following is a simple example showing the creation of multiple threads:

Creating Multiple Threads

  #include <stdio.h>
  #include <stdlib.h>
  #include <conio.h>     /* kbhit */
  #include <process.h>   /* BeginThread, BeginThreadGroup, ThreadSwitch */

  #ifndef TRUE
  # define TRUE  1
  # define FALSE 0
  #endif

  int   getOut = FALSE;
  int   twoOut = FALSE;
  int   threeOut = FALSE;
  int   fourOut = FALSE;

  /* Thread entry points receive the argument passed to BeginThread */
  void ThreadTwo(void *data);
  void ThreadThree(void *data);
  void ThreadFour(void *data);

  main()
  {
     BeginThreadGroup(ThreadThree, NULL, 0, NULL);
     ThreadSwitch();

     BeginThread(ThreadTwo, NULL, 0, NULL);
     ThreadSwitch();

     while (!kbhit())
        printf("Thread One.\n");
     getOut = TRUE;

     /* allow all threads to clean up before the NLM exits */
     while (!(twoOut && threeOut && fourOut))
        ThreadSwitch();
  }

  void ThreadTwo(void *data)
  {
     while (!getOut)
     {
        printf("           Thread Two\n");
        ThreadSwitch();
     }
     twoOut = TRUE;
  }

  void ThreadThree(void *data)
  {
     BeginThread(ThreadFour, NULL, 0, "THREAD FOUR");
     while (!getOut)
     {
        printf("In Thread Three\n");
        ThreadSwitch();
     }
     threeOut = TRUE;
  }

  void ThreadFour(void *data)
  {
     while (!getOut)
     {
        printf("            %s\n", (char *) data);
        ThreadSwitch();
     }
     fourOut = TRUE;
  }
  

See Threads Concepts for more information about threads, thread groups, and the context that the NetWare API maintains. Section 1.8, Context, also discusses context issues.

1.7.1 Shared Memory

For true reentrant NLM programming functionality, use the Libraries for C (LibC). CLib does not truly support this functionality, at least not without considerable resource leaks. For example, before NetWare 4.11 (when CLib started automatically cleaning up allocated semaphores), an abend occurred if you failed to manually deallocate semaphores. In addition, whenever CLib is unloaded, it almost always reports unfreed memory resources.

However, LibC was designed to support reentrant NLM programming and the use of the REENTRANT flag. LibC supports this functionality mainly because of its context architecture and its ability to track every entry into the library.

Shared memory allows multiple threads to communicate. To share memory among threads in the same NLM, use a global or static pointer to a single block of memory. The following example uses a global pointer to share memory among thread groups in the same NLM:

Using a Global Pointer to Share Memory among Thread Groups

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <process.h>

  int SharedMemoryFlag = 0;
  char *SharedMemory;

  void ThreadGroup2(void *data)
  {
     while (!SharedMemoryFlag)
        ThreadSwitch();
     strcpy (SharedMemory,
        "ThreadGroup2 has accessed shared memory.");
     SharedMemoryFlag = 0;
  }

  main()
  {
     /* Start the second thread group */
     if (BeginThreadGroup (ThreadGroup2, NULL, 0, NULL) == EFAILURE)
     {
        printf ("BeginThreadGroup failed.\n");
        exit(0);
     }

     /* Allocate the memory to be shared. Note that SharedMemory
      * could have been defined as an array, if desired. */
     SharedMemory = malloc (100);
     if (SharedMemory == NULL)
     {
        printf ("Could not allocate memory.\n");
        exit(0);
     }

     /* Store a string in the allocated memory and print it */
     strcpy (SharedMemory, "Main ThreadGroup has accessed shared memory.");
     printf ("%s\n", SharedMemory);

     /* Let ThreadGroup2 know it is OK to access the memory */
     SharedMemoryFlag = 1;

     /* Wait for ThreadGroup2 to access the memory */
     while (SharedMemoryFlag)
        ThreadSwitch();

     /* Print the message stored by ThreadGroup2 */
     printf ("%s\n", SharedMemory);
     free (SharedMemory);
  }
  

If you want to use shared memory with multiple NLM applications, write a function that passes the memory address pointer among the modules. The following example shows the first NLM setting up values to share with the second NLM:

Setting up Values to Share Memory with Another NLM

  /* ------------------ FIRST NLM --------------------- *
   * This NLM must be loaded first. Its .LNK file       *
   * exports the two shared values, SharedMemoryFlag    *
   * and SharedMemory.                                  *
   * -------------------------------------------------- */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <process.h>

  int SharedMemoryFlag = 0;
  char *SharedMemory;

  main()
  {
     /* Allocate the memory to be shared. Note that
      * SharedMemory could have been defined as an array,
      * if desired. */
     SharedMemory = malloc (100);
     if (SharedMemory == NULL)
     {
        printf ("Could not allocate memory.\n");
        exit(0);
     }

     /* Store a string in the allocated memory and print it */
     strcpy (SharedMemory,
        "The main NLM has accessed shared memory.");
     printf ("%s\n", SharedMemory);

     /* Let the other NLM know it is OK to access the
        memory */
     SharedMemoryFlag = 1;

     /* Wait for the other NLM to access the memory */
     while (SharedMemoryFlag)
        ThreadSwitch();

     /* Print the message stored by the other NLM */
     printf ("%s\n", SharedMemory);
     free (SharedMemory);
  }
  

The following directive file should be used to link the first NLM:

  form novell nlm 'Example of Shared Memory between NLM applications'
  name     nlm1
  file     prelude,nlm1
  import   @clib.imp
  export   SharedMemoryFlag, SharedMemory
  

The following example shows the second NLM using the shared values set up by the first NLM:

A Second NLM Sharing Memory with the First

  /* ------------------ SECOND NLM -------------------- *
   * This NLM must be loaded after the first. Its       *
   * .LNK file imports the two shared values.           *
   * -------------------------------------------------- */
  #include <string.h>
  #include <process.h>

  extern int SharedMemoryFlag;
  extern char *SharedMemory;

  main()
  {
     while (!SharedMemoryFlag)
        ThreadSwitch();
     strcpy (SharedMemory,
        "The second NLM has accessed shared memory.");
     SharedMemoryFlag = 0;
  }
  

The following directive file should be used to link the second NLM:

  form novell nlm 'Example of Shared Memory between NLM applications'
  name     nlm2
  file     prelude,nlm2
  import   @clib.imp, SharedMemoryFlag, SharedMemory
  

NOTE:For the NetWare® 4.x, 5.x, and 6.x OS, NLM applications can share memory only with modules that are loaded in the same domain (address space for NetWare 5.x and 6.x). For example, NLMs loaded into the OS address space can share memory among themselves, as can NLMs loaded into the same protected address space. However, NLMs in the OS address space cannot share memory with NLMs loaded into a protected address space.

1.7.2 Thread Termination

Programming successfully for the thread termination process, especially CLIB threads, is discussed in Advanced NLM Tasks within the NDK: NLM Development Concepts, Tools, and Functions documentation.

1.7.3 Relinquishing Control

The NetWare 5.x and 6.x OS offers optional preemption for applications written to be preemptable and marked preemptive with MPKXDC.EXE. However, with previous versions of the NetWare OS that do not timeslice or preempt thread execution, the responsibility of relinquishing control falls to the thread itself. To relinquish control of the processor in nonpreempting NetWare versions, a thread can do one of the following:

Call a function that can relinquish control
For example, if a thread calls printf, it can relinquish control because printf writes to a device. However, this method should not be relied on in a program that must guarantee that control is relinquished.

Functions that might block are identified in the function descriptions.

Call ThreadSwitch
ThreadSwitch passes control of the CPU to the OS, which then passes control to the next thread in the run queue. The calling thread is placed at the end of the run queue.
Call delay or ThreadSwitchWithDelay
These functions suspend thread execution for a specified time (in milliseconds). ThreadSwitchWithDelay can be used with the NetWare 4.x, 5.x, and 6.x OS, and it has also been added to the 3.11 version of CLIB.

IMPORTANT:Threads that do busy waiting in NetWare 4.x, 5.x, and 6.x need to allow low-priority threads to run. For this reason, these threads should call ThreadSwitchWithDelay instead of ThreadSwitch. Low-priority threads can run only when there are no threads waiting on the RunList, and ThreadSwitch places the threads that call it on the RunList. ThreadSwitchWithDelay places the threads that call it on the DelayedList.

Call SuspendThread
The SuspendThread function puts a thread to sleep until it is awakened.

NOTE:A sleeping thread can be awakened only by calling ResumeThread from another thread.

Call ThreadSwitchLowPriority
The ThreadSwitchLowPriority function suspends thread execution and places the thread in the Low-Priority Queue. This function can be used with the NetWare 4.x, 5.x, and 6.x OS, but it has not been added to the 3.11 version of CLIB.
Wait on an event
The OS automatically puts to sleep any threads waiting on events. For example, if a thread waits in a semaphore queue, it relinquishes control.

IMPORTANT:Do not use this method when waiting to read a file from a disk. If the file is stored in cache memory, the thread does not have to wait and does not relinquish control.

One side effect of failing to relinquish control is that incoming client requests are still received by the server, but the packets cannot be processed. Thus, without acknowledgment of the request, the client connection eventually times out.