
Best practice for providing kernel core dumps to support incidents

This document (7010056) is provided subject to the disclaimer at the end of this document.


SUSE Linux Enterprise Server 10
SUSE Linux Enterprise Server 11


After a system crash, a kernel core dump was written. The dump should be uploaded to SUSE Technical Services for further investigation.


First, it needs to be clarified what information a kernel core dump can and cannot provide.
A kernel core dump is an extract of the system's memory at the time the dump is taken.
The information in a core dump is helpful for seeing the system state and environment at that moment.
That also limits the usability of a core dump.
A core dump is for investigating kernel crashes. It is usually not helpful for investigating system lockups due to other reasons, e.g. blocked I/O or resource limits.
It also normally cannot be used to investigate performance issues or application crashes on the system.

Next, it should be considered which information within the system memory is important for the investigation.
Since it is a kernel crash that is to be analyzed, the memory occupied by the kernel should be in the dump.
Under certain circumstances, it is also useful to have the memory occupied by userspace processes in the dump.
Mostly irrelevant are allocated but unused memory pages, memory pages of the filesystem cache, and unallocated pages.
Usually, the useful memory pages are only a small fraction of the whole memory.
Therefore, the information that is not useful should already be filtered out when the dump is taken.
This can be done by setting the dump level in:


Please read the chapter:

Dump Filtering and Compression

in the kdump manpage, available via:

man 7 kdump

in order to choose the right number for the dump level.
Dump level 15 should be fine for most situations.
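On SLES, the dump level is configured in the kdump sysconfig file; a minimal fragment, assuming the standard /etc/sysconfig/kdump location and the KDUMP_DUMPLEVEL variable described in the kdump manpage (verify both against your release):

```shell
# /etc/sysconfig/kdump (assumed SLES location; see 'man 7 kdump')
# Dump level 15 = 1+2+4+8: exclude zero pages, page-cache pages,
# private cache pages and user-space pages; free pages are kept.
KDUMP_DUMPLEVEL="15"
```

After changing the value, the kdump service needs to be restarted so the new level is used for the next crash.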

If filtering cannot be done directly while the dump is taken, it is also possible to filter the created core afterwards.
Please refer to TID:

for further details.
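Post-filtering an existing core is typically done with the makedumpfile utility, where -d sets the dump level and -c compresses the remaining pages. A hedged sketch (the vmcore path below is an assumption; the command is only echoed here so the sketch is harmless to run as-is):

```shell
# Sketch: re-filter and compress an already written vmcore with
# makedumpfile. The input path is an assumption; adjust it to your
# crash directory before running the real command.
vmcore=/var/crash/2012-01-25/vmcore
echo "makedumpfile -c -d 15 $vmcore ${vmcore}.filtered"
# Remove the 'echo' above to actually run makedumpfile.
```
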

A filtered and compressed dump is usually not bigger than a few GB and can be uploaded to the Novell FTP servers.
These are for EMEA:

for US:

For uploading, an FTP client needs to be used.
The login is done with the username anonymous.


When prompted for a password, just press Enter.
For example, uploading a file via the native Linux ftp client would look like this:

> ftp
Connected to support
220 Welcome to Powered by SUSE Linux. All transfers are logged. An ftp over SSL version of this service is available from It uses a DigiCert High Assurance EV Root CA for secured downloads. Download the root CA from
Passive Ports are 30000-35000                                         

Name ( anonymous

331 Anonymous login ok, send your complete email address as your password


230 Anonymous access granted, restrictions apply
Remote system type is UNIX.
Using binary mode to transfer files.

ftp> cd in

250 CWD command successful

ftp> put test.txt

local: test.txt remote: test.txt
229 Entering Extended Passive Mode (|||27150|)
150 Opening BINARY mode data connection for test.txt
27      454.60 KiB/s    00:00 ETA
226 Transfer complete
27 bytes sent in 00:00 (1.86 KiB/s)

ftp> quit

221 Goodbye.

In the above example, YOURNAME is replaced by the username you are logged in with on your system.
When uploading a core dump to the FTP server, please always make sure that it is uploaded in binary mode.
An upload in ASCII mode will corrupt the dump and make it unusable for investigation.
Afterwards, please inform your support contact about the filename and location of the core dump file.
The upload speed to the FTP servers can vary depending on the network load and the load of the FTP server.
A core dump of 10 GB usually takes about 2 hours to upload.
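A corrupted transfer (for example, an accidental ASCII-mode upload) can be detected by recording a checksum before uploading and passing it to the support contact. A minimal sketch, using a tiny stand-in file instead of a real multi-GB dump:

```shell
# Record a checksum of the dump before uploading, so the receiving
# side can verify the transfer was not corrupted. 'vmcore.demo' is a
# tiny stand-in file, not a real core dump.
printf 'stand-in core dump data' > vmcore.demo
md5sum vmcore.demo | tee vmcore.demo.md5
```

The same md5sum command run on the uploaded copy must produce an identical checksum.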

When uploading larger files, it is advisable to split them into smaller chunks that upload faster.
On SLES, you can use the split utility for this purpose, e.g.:

split -b 2G test123 test123

will split the 10 GB dump test123 into chunks of 2 GB with the suffixes aa, ab, etc., e.g.:

ls -lh test123a*
-rw-r--r-- 1 root root 2,0G 26. Jan 11:31 test123aa
-rw-r--r-- 1 root root 2,0G 26. Jan 11:31 test123ab
-rw-r--r-- 1 root root 2,0G 26. Jan 11:31 test123ac
-rw-r--r-- 1 root root 2,0G 26. Jan 11:31 test123ad
-rw-r--r-- 1 root root 2,0G 26. Jan 11:31 test123ae
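The chunks can be rejoined losslessly with cat on the receiving side. A self-contained sketch with the sizes scaled down (1 MiB demo file, 256 KiB chunks) so it can be tried safely:

```shell
# Create a 1 MiB demo "dump", split it into 256 KiB chunks, then
# reassemble the chunks and verify nothing was lost.
head -c 1048576 /dev/urandom > demo_core
split -b 256K demo_core demo_core          # -> demo_coreaa .. demo_coread
cat demo_corea* > demo_core.rejoined       # receiving side: rejoin chunks
cmp demo_core demo_core.rejoined && echo "chunks reassembled identically"
```

The shell glob demo_corea* expands in sorted order (aa, ab, ...), which is exactly the order split wrote the chunks in, so cat reproduces the original byte-for-byte.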

There is also the possibility to provide the dump at a local download location and ask the support contact to download it from there.
Finally, if the dump is really big and there are too many problems uploading it, the dump can be sent via mail.
The address is:

Maxfeldstr. 5
90409 Nuernberg

Please replace SUPPORTCONTACT with the name of the support person that is processing your request.


This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID:7010056
  • Creation Date:25-JAN-12
  • Modified Date:22-JUL-14
    • SUSE Linux Enterprise Server
