SAP ASE

BackUp #

DUMP Syntax #

Makes a backup copy of the entire database, including the transaction log, in a form that can be read in with load database. Dumps and loads are performed through Backup Server.

The target platform need not be the same as the source platform where the dump database operation occurred. However, there are restrictions on cross-platform dump and load combinations when compression or a remote Backup Server is involved. See Restrictions for dump database.

DUMP DATABASE #

dump database <database_name> [cumulative] [ using config = <config_name> ]
    to emory::[compression=<compression_level>]::<stripe_device>
    [
        [ stripe on emory::[compression=<compression_level>]::<stripe_device> ]
        ...
    ]
    [ with {
        blocksize = <number_bytes>,
        verify [= header | full],
        notify = {client | operator_console}
        }
    ]

DUMP LOGS #

dump tran <database_name> [ using config = <config_name> ]
    [
        to emory::[compression=<compression_level>]::<stripe_device>
        [ stripe on emory::[compression=<compression_level>]::<stripe_device> ]
        ...
    ]
    [ with {
        blocksize = <number_bytes>,
        verify [= header | full],
        notify = {client | operator_console}
        }
    ]

DUMP Parameters #

  • config = <config_name> - reads the specified dump configuration and performs the dump operation using the specified values. You cannot specify a stripe directory as a command parameter if you use a dump configuration; the SAP ASE server creates the dump files in the stripe directory specified by the dump configuration. The dump files are named using this convention: <database_name>.<dump_type>.<date-timestamp>.<stripeID>. Explicitly specified command parameters override the parameter values specified by the dump configuration.
  • cumulative - specifies that the backup you create is a cumulative incremental dump.
  • database_name - is the name of the database from which you are copying data. The database name can be specified as a literal, a local variable, or a stored procedure parameter.
  • to emory::[compression=<compression_level>]::<stripe_device> - is the device to which to copy the data.
  • compression_level - is the compression level specified for the current stripe.
  • blocksize = <number_bytes> - overrides the default block size for a dump device. The block size must be at least one database page (2048 bytes for most systems) and must be an exact multiple of the database page size. For optimal performance, specify the blocksize as a power of 2, for example, 65536, 131072, or 262144.
  • with verify[= header | full] - allows the Backup Server to perform a minimal header or structural row check on the data pages as they are being copied to the archives. No structural checks are currently done on GAM, OAM, allocation pages, indexes, text, or log pages. The only other check verifies that the page number on a page matches the page number in its page header. Any faults found are reported in the Backup Server error log.
  • stripe on <stripe_device> - is an additional dump device. The Backup Server splits the database into approximately equal portions and sends each portion to a different device. Dumps are made concurrently on all devices, reducing the time required to make a dump and requiring fewer volume changes during the dump. Each stripe starts two sybmultbuf processes when doing local dumps.
  • notify = {client | operator_console} - overrides the default message destination.
    • On operating systems that offer an operator terminal feature, volume change messages are always sent to the operator terminal on the machine on which Backup Server is running. Use client to route other Backup Server messages to the terminal session that initiated the dump database.
    • On operating systems that do not offer an operator terminal feature, such as UNIX, messages are sent to the client that initiated the dump database. Use operator_console to route messages to the terminal on which Backup Server is running.
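Several of these parameters can be combined in a single command. The sketch below is illustrative only (the stripe device paths are hypothetical placeholders): it dumps across two explicit stripes with a 64 KB block size, header verification, and messages routed to the operator console.

```sql
-- Illustrative sketch: two explicit stripes, 64 KB blocks, messages to the
-- operator console. The stripe device paths are hypothetical placeholders.
dump database D04
    to emory::compression=1::/dumps/D04.stripe0
    stripe on emory::compression=1::/dumps/D04.stripe1
    with blocksize = 65536,
         verify = header,
         notify = operator_console
go
```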

Using iSQL statements #

Full #

1> dump database D04 using config = emory_S2_C1
2> go
Backup Server: 4.172.1.4: The value of 'allocated pages threshold' has been set to 40%.
Backup Server session id is: 66. Use this value when executing the 'sp_volchanged' system stored procedure after
fulfilling any volume change request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device: 'emory::compression=1::D04.DB.20200521.120708.000::000'
Backup Server: 4.132.1.1: Attempting to open byte stream device: 'emory::compression=1::D04.DB.20200521.120708.001::001'
Backup Server: 6.28.1.1: Dumpfile name 'D04201420AA6C    ' section number 1 mounted on byte stream
'emory::compression=1::D04.DB.20200521.120708.000::000'
Backup Server: 4.166.1.1: Using dbiosize of 262144 bytes for device
emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.165.1.1: Using iocount of 1 for device emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.166.1.2: Using zonesize of 262144 bytes for device
emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.166.1.3: Using blocksize of 16384 bytes for device
emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.165.1.2: Using numzones of 3 for device emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.165.1.3: Using archcnt of 2 for device emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.165.1.4: Using dbdevcnt of 2 for device emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 4.166.1.4: Using pagesize of 16384 bytes for device
emory::compression=1::D04.DB.20200521.120708.000::000.
Backup Server: 6.28.1.1: Dumpfile name 'D04201420AA6C    ' section number 1 mounted on byte stream
'emory::compression=1::D04.DB.20200521.120708.001::001'
Backup Server: 4.166.1.1: Using dbiosize of 262144 bytes for device
emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.165.1.1: Using iocount of 1 for device emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.166.1.2: Using zonesize of 262144 bytes for device
emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.166.1.3: Using blocksize of 16384 bytes for device
emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.165.1.2: Using numzones of 3 for device emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.165.1.3: Using archcnt of 2 for device emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.165.1.4: Using dbdevcnt of 2 for device emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.166.1.4: Using pagesize of 16384 bytes for device
emory::compression=1::D04.DB.20200521.120708.001::001.
Backup Server: 4.188.1.1: Database D04: 665498 kilobytes (1%) DUMPED.
Backup Server: 4.188.1.1: Database D04: 1279354 kilobytes (2%) DUMPED.
Backup Server: 4.188.1.1: Database D04: 1931660 kilobytes (3%) DUMPED.
.....
.....
.....
Backup Server: 4.188.1.1: Database D04: 56470024 kilobytes (51%) DUMPED.
Backup Server: 4.188.1.1: Database D04: 57111270 kilobytes (52%) DUMPED.
Backup Server: 4.188.1.1: Database D04: 57743380 kilobytes (52%) DUMPED.
Backup Server: 3.43.1.1: Dump phase number 1 completed.
Backup Server: 3.43.1.1: Dump phase number 2 completed.
Backup Server: 3.43.1.1: Dump phase number 3 completed.
Backup Server: 4.188.1.1: Database D04: 58280526 kilobytes (100%) DUMPED.
Backup Server: 3.42.1.1: DUMP is complete (database D04).
1>

Incremental #

dump database D04 cumulative 
    using config = emory_S2_C1
    with verify=header
go

Logs #
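The transaction log can be dumped with the same dump configuration used for the full and cumulative dumps (a minimal sketch, assuming the emory_S2_C1 configuration from the examples above):

```sql
-- Sketch: dump the transaction log using the emory_S2_C1 configuration
-- shown in the full-dump example above.
dump tran D04 using config = emory_S2_C1
go
```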

Back Up using SAP DBACockpit #

Follow the steps:

  1. Access transaction DBACOCKPIT from your SAP GUI.

  2. Continue to start the configuration.

  3. Select Database Dump from the action drop-down list.

  4. Select the Database and the Dump Configuration. (For backint interfaces a dump configuration is mandatory. If you don’t have any, please follow the steps in create dump configuration to create one.)


  5. Execute. Review the summary and execute the process.

  6. Finish. The job has been launched. You can check the results in the SAP DBACockpit Logs.

Back Up using SAP Control Center #

Back Up database #

Access the Administration Console in SAP Control Center, and follow the steps:

  1. Go to Schema Objects > Databases > User Databases, select your database, and select Back Up.

  2. Select Backup using configuration and the Dump Configuration for your database. (For backint interfaces a dump configuration is mandatory. If you don’t have any, please follow the steps in create dump configuration to create one.)

  3. Type of Backup. Select the type of backup that you want to perform.

  4. Options. Leave the use modified options option unchecked to use the configuration profile options.

  5. Summary. Review the summary and finish the process.

  6. Activity Log. You will see the backup output if you have selected not to schedule the job.

Dump History #

View Dump records using different tools:

iSQL #

sp_dump_history
    [ @operation = {'list' | 'purge' | 'listfiles' | 'listpurgefiles' |
                    'purgefiles' | 'help' | 'upgrade' | 'downgrade'} ]
    [, @until_time = '<date>']
    [, @name = '<database or file name>']
    [, @dump_type = {'DATABASE' | 'TRAN[SACTION]' | 'CONFIG[URATION]' |
                     'CUM[ULATIVE]' | 'ALTERDB'}]
    [, @status = {'success' | 'fail' | 'deleted'}]
    [, @file = '<filename>']
    [, @version = '1.0']
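For example, to list the successful full database dumps recorded for D04 (a sketch; the parameter values follow the syntax above):

```sql
-- Sketch: list successful full database dumps recorded for D04.
sp_dump_history @operation = 'list', @name = 'D04',
    @dump_type = 'DATABASE', @status = 'success'
go
```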

SAP Control Center #

Dump history can also be reviewed from the SAP Control Center Administration Console.

Restore #

Restores a database from a specific backup or to a specified point in time.

Alternative command-line tool: sybrestore. This tool provides a wizard to recover a database using the dump history catalog file.

LOAD Syntax #

Loads a backup copy of a user database, including its transaction log, that was created with dump database, as well as materializes archive databases that have been loaded with a database dump.

The target platform of a load database operation need not be the same platform as the source platform where the dump database operation occurred. dump database and load database can be performed from a big-endian platform to a little-endian platform, or from a little-endian platform to a big-endian platform.

LOAD #

load database <database_name> [cumulative]
    from emory::[compression=<compression_level>]::<stripe_device>
    [
        [stripe on emory::[compression=<compression_level>]::<stripe_device> ]
        ...
    ]
    [with {
        listonly=load_sql | create_sql,
        blocksize = <number_bytes>,
        until_time = <datetime>,
        headeronly,
        notify = {client | operator_console},
        verify only [= header | full | crc],
        copyonly = <path>
        }
    ]

LOAD LOGS #

load tran[saction] <database_name>
    from emory::[compression=<compression_level>]::<stripe_device>
    [
        [stripe on emory::[compression=<compression_level>]::<stripe_device> ]
        ...
    ]
    [with {
        blocksize = <number_bytes>,
        notify = {client | operator_console},
        verify only [= header | full | crc],
        listonly = load_sql | create_sql,
        until_time = <datetime>,
        headeronly,
        copyonly = <path>
        }
    ]

LOAD Parameters #

  • cumulative - specifies that the dump being loaded is a cumulative incremental dump.
  • database_name - is the name of the database into which you are loading data. The database name can be specified as a literal, a local variable, or a stored procedure parameter.
  • from emory::[compression=<compression_level>]::<stripe_device> - is the device from which the data is loaded.
  • compression_level - is the compression level specified for the current stripe.
  • blocksize = <number_bytes> - overrides the default block size for a dump device. The block size must be at least one database page (2048 bytes for most systems) and must be an exact multiple of the database page size. For optimal performance, specify the blocksize as a power of 2, for example, 65536, 131072, or 262144.
  • with verify only[= header | full | crc] - performs checks on the archive without loading the database: a minimal header or structural row check on the data pages as they are read from the archive, or a CRC check of the archive blocks. No structural checks are done on GAM, OAM, allocation pages, indexes, text, or log pages. Any faults found are reported in the Backup Server error log.
  • stripe on <stripe_device> - is an additional dump device. The Backup Server splits the database into approximately equal portions and sends each portion to a different device. Dumps are made concurrently on all devices, reducing the time required to make a dump and requiring fewer volume changes during the dump. Each stripe starts two sybmultbuf processes when doing local dumps or loads.
  • notify = {client | operator_console} - overrides the default message destination.
    • On operating systems that offer an operator terminal feature, volume change messages are always sent to the operator terminal on the machine on which Backup Server is running. Use client to route other Backup Server messages to the terminal session that initiated the dump database.
    • On operating systems that do not offer an operator terminal feature, such as UNIX, messages are sent to the client that initiated the dump database. Use operator_console to route messages to the terminal on which Backup Server is running.
  • listonly = { load_sql | create_sql } - does not load the dump. load_sql generates the sequence of load commands needed to restore the database; create_sql additionally generates the DISK INIT, CREATE DATABASE, and ALTER DATABASE statements needed to re-create the database first (see the Recovery Examples).
  • headeronly - displays header information for a single dump file, but does not load the database. headeronly displays information about the first file on the tape unless you use the file = <file_name> option to specify another file name. The dump header indicates:
    • Type of dump (database or transaction log)
    • Database ID
    • File name
    • Date the dump was made
    • Character set
    • Sort order
    • Page count
    • Next object ID
    • Checkpoint location in the log
    • Location of the oldest begin transaction record
    • Old and new sequence dates
  • copyonly = path - extracts a transaction log file from a dump that is stored on a storage device, such as Tivoli Storage Manager or Legato (one file per device or stripe). Using load tran <database_name> from with copyonly=<path_prefix> does not load pages from the dump in <database_name>, but rather, copies pages to device stripes using the path specified in the copyonly option.
  • until_time - loads the transaction log up to a specified time in the transaction log. Only transactions committed before the specified time are saved to the database.
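As an illustration, an archive can be checked without loading any pages by combining verify only with the stripe list (a sketch, using the stripe names from Example 4 below):

```sql
-- Sketch: verify the dump headers without loading any pages.
load database D04 from 'emory::compression=9::D04.DB.20200522.205006.000'
    stripe on 'emory::compression=9::D04.DB.20200522.205006.001'
    stripe on 'emory::compression=9::D04.DB.20200522.205006.002'
    stripe on 'emory::compression=9::D04.DB.20200522.205006.003'
    with verify only = header
go
```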

Recovery Examples #

Example 1 - Get LOAD sequence #

1> LOAD DATABASE D04 WITH LISTONLY=LOAD_SQL
2> go
LOAD DATABASE D04 FROM 'emory::compression=9::D04.DB.20200522.205006.000'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.001'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.002'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.003'
go

Example 2 - Get LOAD sequence and Database creation #

1> LOAD DATABASE D04 with listonly=create_sql
2> go
DISK INIT
    name = 'D04_data_001'
    , physname = '/sybase/D04/sapdata_1/D04_data_001.dat'
    , size = '129043M'
    , directio = true
go
DISK INIT
    name = 'D04_log_001'
    , physname = '/sybase/D04/saplog_1/D04_log_001.dat'
    , size = '10G'
    , directio = true
go
CREATE  DATABASE D04
    ON D04_data_001 = '2G'
    LOG ON D04_log_001 = '10G'
go
ALTER  DATABASE D04
    ON D04_data_001 = '86032M'
    , D04_data_001 = '20G'
    , D04_data_001 = '20G'
go
master..sp_dboption D04, 'abort tran on log full', true
go
master..sp_dboption D04, 'allow nulls by default', true
go
master..sp_dboption D04, 'allow wide dol rows', true
go
master..sp_dboption D04, 'ddl in tran', true
go
master..sp_dboption D04, 'deferred table allocation', true
go
master..sp_dboption D04, 'full logging for all', true
go
master..sp_dboption D04, 'allow incremental dumps', true
go
master..sp_dboption D04, 'deallocate first text page', true
go

Example 3 - Get LOAD sequence (Point In Time) #

1> LOAD DATABASE D04 
2> WITH LISTONLY=LOAD_SQL, 
3> UNTIL_TIME="may 25, 2020 18:33:43:866pm"
4> go
LOAD DATABASE D04 FROM 'emory::compression=9::D04.DB.20200522.205006.000'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.001'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.002'
   STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.003'
go
LOAD DATABASE D04 CUMULATIVE FROM
'emory::compression=9::D04.CDB.20200525.182412.000'
   STRIPE ON 'emory::compression=9::D04.CDB.20200525.182412.001'
   STRIPE ON 'emory::compression=9::D04.CDB.20200525.182412.002'
   STRIPE ON 'emory::compression=9::D04.CDB.20200525.182412.003'
go
LOAD TRAN D04 FROM 'emory::::D04.TRAN.20200525.182604.000'
go
LOAD TRAN D04 FROM 'emory::::D04.TRAN.20200525.183801.000'
   WITH UNTIL_TIME = 'May 25 2020  6:33:43:866PM'
go
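A database loaded this way remains offline. Once the last load in the generated sequence has completed, bring the database online before use:

```sql
-- Bring the database online after the final load in the sequence.
online database D04
go
```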

Example 4 - Header verification #

1> LOAD DATABASE D04 FROM 'emory::compression=9::D04.DB.20200522.205006.000'
2> STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.001'
3> STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.002'
4> STRIPE ON 'emory::compression=9::D04.DB.20200522.205006.003'
5> WITH HEADERONLY
6>
7> go
Backup Server session id is: 113. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 6.28.1.1: Dumpfile name 'D0420143124FE    ' section number 1
mounted on byte stream 'emory::compression=9::D04.DB.20200522.205006.002::002'
Backup Server: 6.28.1.1: Dumpfile name 'D0420143124FE    ' section number 1
mounted on byte stream 'emory::compression=9::D04.DB.20200522.205006.000::000'
Backup Server: 6.28.1.1: Dumpfile name 'D0420143124FE    ' section number 1
mounted on byte stream 'emory::compression=9::D04.DB.20200522.205006.001::001'
Backup Server: 6.28.1.1: Dumpfile name 'D0420143124FE    ' section number 1
mounted on byte stream 'emory::compression=9::D04.DB.20200522.205006.003::003'

General:
	This is a full database dump
	Name: D04
	Id: 4
	Dump date: May 22 2020  8:50:06:923PM
	Database page size:  16384.
	Number of pages: 8913920
	Checkpoint RID: (183648, 5)
	Next object id: 689836051
	Sort order id: 99
	Status: 128
	Charset id: 190
	Database durability: UNDEFINED
	Compression level: N/A
	Database encrypted: NO
	Database Encryption Key: N/A

Versions:
	Adaptive Server version: Adaptive Server Enterprise/16.0 SP02 PL05 HF1/EBF 26759 SMP/P/x86_64/Enterprise Linux/ase160sp02pl05
	Backup Server version: Backup Server/16.0 SP02 PL05 HF1/EBF 26759/P/Linux AMD Opteron/Enterprise Linux/ase160sp02pl05x/535

	Database log version: 7
	Database upgrade version: 41

Dump timestamps:
	Full: 0x0000 12ab6f8c
	Cumulative: 0x0000 12ab08a0
	Delta: 0x0000 00000000

Sysusages:
	segmap: 0x00000003 lstart=0 vstart=[11, 0]  lsize=131072 unrsvd=0
	segmap: 0x00000004 lstart=131072 vstart=[5, 0]  lsize=655360 unrsvd=652729
	segmap: 0x00000003 lstart=786432 vstart=[11, 1048576]  lsize=5506048 unrsvd=1683248
	segmap: 0x00000003 lstart=6292480 vstart=[11, 45096960]  lsize=1310720 unrsvd=1305600
	segmap: 0x00000003 lstart=7603200 vstart=[11, 55582720]  lsize=1310720 unrsvd=1305600

	Number of logical pages: 8913920 (139280 MB)
	Number of physical pages: 8913920 (139280 MB)

Sysdevices:
	devtype=0, vdevno=11, devname=D04_data_001,
		physname=/sybase/D04/sapdata_1/D04_data_001.dat,
		devsize=66070288, dbdevsize=8258560, vstart=0,
		devstat=2, devstat2=1
	devtype=0, vdevno=5, devname=D04_log_001,
		physname=/sybase/D04/saplog_1/D04_log_001.dat,
		devsize=5242880, dbdevsize=655360, vstart=0,
		devstat=2, devstat2=1

Sysdatabases:
	status=0x2200, crdate=May 12 2020  9:11:06:706AM,
		dumptrdate=May 22 2020  1:16:08:080PM, status2=0x0001,
		audflags=0x0000, deftabaud=0, defvwaud=0, defpraud=0,
		def_remote_type=0, status3=0x20000, status4=0xa0484000,
		durablility=1, lobcomp_lvl=0, inrowlen=0,
		dcompdefault=1 status5=0x0000

Sysattributes:
		flmode: 0x000d

Features found in the database or transaction dump header:

  ID =  0 : 15.7.0.000  : Database accepts DOL rows with large variable-length offsets
  ID =  2 : 15.7.0.000  : Database has tables marked as having non-materialized columns
  ID =  3 : 15.7.0.007  : Database has compressed tables at version 1
  ID =  4 : 15.7.0.000  : Database has system catalog changes made in 15.7 GA
  ID =  6 : 15.7.0.007  : Database has In-row LOB columns
  ID =  7 : 15.7.0.020  : Database has system catalog changes made in 15.7 ESD#02
  ID = 11 : 15.7.0.100  : Database has the Sysdams catalog
  ID = 12 : 15.7.0.042  : Dealloc FTP after NULL update is enabled database-wide
  ID = 13 : 15.7.0.100  : Database has indexes sorted using RID value comparison
  ID = 16 : 16.0.00.00  : Index compression.

Scheduling #

For scheduling backups, you can use any common tool used with SAP ASE:

  • Sybase scheduler
  • SAP Control Center
  • DB13
  • Custom Scripts