
User's Guide
   PART 4. Database Administration and Advanced Use
     CHAPTER 20. Backup and Data Recovery       

Database logs


The following logs protect your data from system and media failure:

   The checkpoint log
   The rollback log
   The transaction log
   The transaction log mirror (optional)

All of these logs play a role in data recovery. Each database running on a server has its own checkpoint log, rollback log, and transaction log. Optionally, you can also maintain a mirror of the transaction log for greater protection of vital data.

The checkpoint log

Checkpoint log purpose 

The checkpoint log is used during database recovery after a system failure or improper shutdown of the server. The checkpoint log is stored in the database file.

A database file is composed of pages. Before a page is updated (made dirty), a copy of the original is always made. The copied pages are the checkpoint log.

Dirty pages are not written immediately to the database file on disk. For improved performance, they are cached in memory and only written to the database file when the cache is full or the server has no pending requests. A checkpoint is a point at which all dirty pages are written to disk. Following a checkpoint, the checkpoint log is deleted.

Reasons for a checkpoint 

A checkpoint can occur for several reasons:

   The database server shuts down.
   The server is running without a transaction log and a transaction is committed.
   The number of dirty pages reaches zero while the checkpoint urgency is 50% or more (described below).
   A connection issues an explicit CHECKPOINT statement.
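
For example, you can force a checkpoint yourself from SQL. This is a minimal sketch that assumes you are connected with sufficient (DBA) authority:

CHECKPOINT

This writes all dirty pages to the database file and deletes the checkpoint log, just as an automatic checkpoint does.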

Checkpoint priority

The priority of writing dirty pages to disk increases as the time and the amount of work since the last checkpoint grow. The priority is determined by the following factors:

   Checkpoint urgency    The time that has elapsed since the last checkpoint, relative to the checkpoint time setting of the database.
   Recovery urgency    An estimate of how long the server would take to recover the database if it failed right now, relative to the recovery time setting of the database.

The checkpoint and recovery urgencies are important only if the server does not have enough idle time to write dirty pages.

For Info     For a description of the command-line options, see The database server.
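
As a concrete illustration, the checkpoint time and maximum recovery time can be set when the server is started. The sketch below assumes the -gc (checkpoint time, in minutes) and -gr (maximum recovery time, in minutes) switches and the dbeng7 executable name; substitute the switches and engine name that apply to your version:

dbeng7 -gc 60 -gr 10 sample.db

With these settings, the server aims to checkpoint at least every 60 minutes and to keep the estimated recovery time under 10 minutes.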

How the database decides when to checkpoint

Optional information    
You do not need to know the information in this section for most purposes. It is provided as background information for those who wish to understand more about how the server works.

The idle I/O task 

The writing of dirty pages to disk is carried out by a task within the server called the idle I/O task. This task shares processing time with other database tasks, according to a priority. The lower the priority of the idle I/O task, the less time it gets.

There is a threshold for the number of dirty pages, below which writing of database pages does not take place.

When the database is busy, the urgency is low, and the cache only has a few dirty pages, the idle I/O task runs at a very low priority and no writing of dirty pages takes place.

Once the urgency exceeds 30%, the priority of the idle I/O task is increased. At intervals, the priority is increased again. As the urgency becomes high, the engine shifts its primary focus to writing dirty pages until the number gets below the threshold again. However, the engine only writes out pages during the idle I/O task if the number of dirty pages is greater than the threshold.

If, because of other activity in the database, the number of dirty pages falls to zero, and if the urgency is 50% or more, then a checkpoint takes place automatically, since it is a convenient time.

Both the checkpoint urgency and the recovery urgency increase until a checkpoint occurs, at which point they drop to zero. They do not decrease otherwise.
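
If you want to watch these values, the server exposes them as database properties in many versions. The following query is a sketch that assumes the CheckpointUrgency and RecoveryUrgency property names are available in your version:

SELECT DB_PROPERTY( 'CheckpointUrgency' ), DB_PROPERTY( 'RecoveryUrgency' )

Both values drop to zero immediately after a checkpoint, as described above.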

The rollback log

As changes are made to the contents of tables, a rollback log is kept so that changes can be canceled. It is used to process the ROLLBACK statement and to recover from system failure. There is a separate rollback log for each connection. When a transaction is complete, the rollback log contents for that connection are deleted. The rollback logs are stored in the database file.
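
For example, the rollback log is what makes the following sequence possible (a sketch using a hypothetical employee table):

UPDATE employee SET salary = salary * 1.05
ROLLBACK

The ROLLBACK statement uses the rollback log for the current connection to undo the uncommitted update.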

The transaction log

All changes to the database are stored in the transaction log in the order that they occur. Inserts, updates, deletes, commits, rollbacks, and database schema changes are all logged. The transaction log is also called a forward log file.

The transaction log is stored in a separate file.

What you should do 

The transaction log is optional. If you run without a transaction log, a checkpoint is carried out whenever a transaction is committed; the checkpoint ensures that all committed transactions are written to disk. Because checkpoints can be time-consuming, you should run with a transaction log, both for better performance and for protection against media failure and database corruption.

For greater protection, you can maintain two identical transaction logs in tandem. This is called transaction log mirroring.

For Info     For information on creating a database with a mirrored transaction log, see The Initialization utility. For information on changing an existing database to use a mirrored transaction log, see The Transaction Log utility.
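
As a sketch, a mirrored transaction log might be set up as follows. The switch letters shown (-t for the transaction log name, -m for the mirror name) and the file locations are assumptions for illustration; check the Initialization and Transaction Log utility documentation for the exact syntax in your version:

dbinit -t sample.log -m e:\mirror\sample.mlg sample.db
dblog -m e:\mirror\sample.mlg sample.db

The first command creates a new database with a mirrored transaction log; the second adds a mirror to an existing database and must be run while the server is not running.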

Keep the transaction log on a separate device 

The transaction log is not kept in the main database file. The filename of the transaction log can be set when the database is initialized, or at any other time when the server is not running, using the Log utility.

To protect against media failure, the transaction log should be written to a different device than the database file. Note that some machines that appear to have two or more hard drives in fact have only one physical disk drive divided into several logical drives or partitions. If you want protection against media failure, make sure that you have a machine with two physical storage devices, or write the transaction log to a storage device on a network file server.

By default, the transaction log is put on the same device and in the same directory as the database—this does not protect against media failure.
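
For example, to move the transaction log of an existing database onto a second drive, you could stop the server and rename the log with the Transaction Log utility. The -t switch and the file locations below are assumptions for illustration:

dblog -t e:\logs\sample.log c:\db\sample.db

Remember to move the existing log file itself to the new location as well.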

Primary key definitions affect transaction log size 

Updates and deletes on tables that do not have a primary key or unique index cause the entire contents of the affected rows to be written to the transaction log. If a primary key is defined, the engine needs to record only the primary key column values to uniquely identify a row. If the table contains many columns or wide columns and no primary key is defined, the transaction log pages fill up much faster, which reduces performance, and running dbtran on the log produces a very large command file.

This affects updates and deletes, but not inserts, which must always log all column values.

If a primary key does not exist, the engine looks for a UNIQUE NOT NULL index on the table (or a UNIQUE constraint). A UNIQUE index that allows null values is not sufficient.
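
For example, with the hypothetical table below, an update needs to log only the id value plus the changed column; if the PRIMARY KEY clause were omitted, the entire row, including the wide notes column, would be written to the transaction log:

CREATE TABLE customer (
   id     INTEGER NOT NULL PRIMARY KEY,
   name   CHAR(40),
   notes  LONG VARCHAR
)
UPDATE customer SET name = 'Acme Ltd' WHERE id = 101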

Performance tip    
Placing the transaction log on a separate device can also result in improved performance by eliminating the need for disk head movement between the transaction log and the main database file.

Converting transaction logs to SQL

The transaction log is not human-readable. The Log Translation utility (dbtran) can be used to convert a transaction log into a SQL command file, which can serve as an audit trail of changes made to the database. The following command uses dbtran to convert a transaction log:

dbtran sample.log changes.sql

You can also convert a transaction log to a SQL command file from Sybase Central.

For Info     For more information on the log translation utility, see The Log Translation utility.

Recovering uncommitted database changes 

The transaction log contains a record of everything, including transactions that were never committed. By converting the transaction log to a SQL command file and choosing the option to include uncommitted transactions (for example, by running dbtran with the -a switch), you can recover transactions that were accidentally canceled by a user. If the -a option is not chosen, the Log Translation utility omits transactions that were rolled back. While this is not a common procedure, it can prove useful in exceptional cases.
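
For example, the following command includes rolled-back transactions in the output file so that an accidentally canceled change can be found and re-applied by hand (sample.log and changes.sql are illustrative names):

dbtran -a sample.log changes.sql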

