User's Guide
PART 4. Database Administration and Advanced Use
CHAPTER 24. Monitoring and Improving Performance
Adaptive Server Anywhere is designed to provide excellent performance and to do so automatically. However, a few tips can help you get the most from the product. The following suggestions are a good starting point.
You might think that Adaptive Server Anywhere would run faster without a transaction log because it has to maintain less information on disk, yet the opposite is the case. Not only does a transaction log provide a large amount of protection, it can dramatically improve performance.
Without a transaction log, Adaptive Server Anywhere must always be sure that any changes to your database are written to the disk at the end of every transaction. Writing these changes can consume considerable resources.
With a transaction log, Adaptive Server Anywhere need only write notes detailing the changes. It can choose to write the new database pages more efficiently, later.
The process of writing information to the database file to make it consistent and up to date is called a checkpoint. Without a transaction log, Adaptive Server Anywhere must perform a checkpoint at the end of every transaction.
Tip
If you can store the transaction log on a different physical device, rather than the one that contains the main database file, you can further improve performance. With this arrangement, you will reduce contention for the hard drives.
Adaptive Server Anywhere stores pages that it has used recently in a cache. The size of the cache is set on the command line when you launch the database server. Should another connection require the same page, it may find the page already in memory and hence avoid reading it from disk.
If your cache is too small, Adaptive Server Anywhere is unable to keep pages in memory long enough to reap these benefits. When you launch the server, allocate as much memory to the database cache as is feasible, given the requirements of the other applications and processes that will run concurrently.
Tip
In particular, databases that make use of Java objects benefit greatly from larger cache sizes. If you are using Java in your database, consider using a cache of at least 8 Mb.
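For example, the following command uses the -c flag, introduced above, to start a server with a 16 Mb cache; the server name and database file name here are illustrative:

```shell
dbsrv6 -c 16M -n myserver mydb.db
```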
In general, the information in each column of a table should depend solely on the value of the primary key. If this is not the case, then one table may contain many copies of the same information.
For example, suppose the people in your company work at a number of offices. You might better place information about the office, such as its address and main telephone numbers, in a separate table, rather than duplicating all this information for every employee.
You can, however, take the generally good notion of normalization too far. If the amount of duplicate information is small, you may find it better to duplicate the information and maintain its integrity using triggers, or other constraints.
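As a sketch of the normalized design described above (all table and column names are illustrative, not taken from any sample database), office information lives in its own table, and each employee row simply references it:

```sql
-- Office details are stored once, not repeated for every employee.
CREATE TABLE office (
    office_id  INTEGER PRIMARY KEY,
    address    VARCHAR(100),
    phone      VARCHAR(20)
);

CREATE TABLE employee (
    emp_id     INTEGER PRIMARY KEY,
    emp_name   VARCHAR(40),
    office_id  INTEGER REFERENCES office (office_id)
);
```

Changing an office address then requires updating a single row in the office table, rather than one row per employee.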
When executing any one statement, Adaptive Server Anywhere must choose one index to access each table. When it cannot find a suitable index, it must instead resort to scanning the table sequentially—a process that can take a long time.
For example, suppose you need to search for people, but may know only their first name or only their last name. Create two indexes: one that contains the last names first, and a second that contains the first names first.
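Assuming a hypothetical person table with last_name and first_name columns, the two indexes could be created as follows:

```sql
-- Ordered by last name first: supports searches on last name.
CREATE INDEX ix_last_first ON person (last_name, first_name);

-- Ordered by first name first: supports searches on first name.
CREATE INDEX ix_first_last ON person (first_name, last_name);
```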
Examine the plans generated in response to your common statements. Consider adding an index when it will allow Adaptive Server Anywhere to access data more efficiently. In particular, add an index when it will eliminate unnecessary sequential access of a large table.
Although indexes let Adaptive Server Anywhere locate information very efficiently, you should exercise some caution when adding them. Each index creates extra work whenever you insert a row because Adaptive Server Anywhere must, in addition to adding the row, update all affected indexes. The same applies when you delete rows or update information in an indexed column.
If you need better performance when you add rows to a table and do not need to find information quickly, use as few indexes as possible.
Large page sizes can help Adaptive Server Anywhere read databases more efficiently when the database is large or when you access information sequentially. If either of these criteria applies, try using 2 kb or 4 kb pages instead of 1 kb pages.
Larger pages also bring other benefits. They improve the fan-out of your indexes and reduce the number of index levels. Large pages also let you create tables with more columns.
You cannot change the page size of a database. You must create a new database and use the -p flag of dbinit to specify the page size. For example, the following command creates a database with 4 kb pages.
dbinit -p 4096 new.db
If you use larger pages, you must specify a larger maximum page size when you start a database server. A database server cannot open a database that uses pages larger than the size you chose when you started it. You specify the maximum page size using the -gp flag. You should also increase your cache size, because a cache of the same size holds only a fraction as many of the larger pages, leaving less flexibility in arranging the space.
The following command starts a server that reserves an 8 Mb cache and can accommodate databases of page sizes up to 4096 bytes.
dbsrv6 -gp 4096 -c 8M -x tcpip -n myserver
In contrast, a small page size sometimes allows Adaptive Server Anywhere to run with fewer resources because it can store more pages in a cache of the same size. Small pages are thus useful if your database must run on small machines with limited memory. They can also help when you use your database primarily to retrieve small pieces of information from random locations.
The benefits of smaller pages are not always realized. Smaller pages hold less information and may force less efficient use of space, particularly if you insert rows that are slightly more than half a page in size.
Disk drives operate much more slowly than modern processors or RAM. Much of what slows a database server is waiting for the disk to read or write pages.
You almost always improve database performance when you put the different physical database files on different physical devices. For example, while one disk drive is busy swapping database pages to and from the cache, another device can be writing to the log file.
Notice that to gain these benefits, the two or more devices involved must be independent. A single disk, partitioned into smaller logical drives, is unlikely to yield benefits.
Adaptive Server Anywhere uses four types of files:
database (.db)
transaction log (.log)
transaction log mirror (.mlg)
temporary (.tmp)
The first is your database file. It holds the entire contents of your database. A single database is contained in a single file. You choose a location for it appropriate to your needs.
The second is the transaction log file. Effective recovery of the information in your database in the event of a failure depends most on the transaction log file. For extra protection, you can maintain a duplicate in a third type of file called a transaction log mirror file. Adaptive Server Anywhere writes the same information at the same time to each of these files.
Tip
You can use the dblog transaction log utility to specify the location of the transaction log and transaction log mirror files.
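For example, the following command (the file names and drive letters are illustrative) uses dblog's -t and -m options to place the transaction log and its mirror on separate drives:

```shell
dblog -t d:\logs\mydb.log -m e:\mirror\mydb.mlg mydb.db
```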
Adaptive Server Anywhere may need more space than is available to it in the cache for such operations as sorting and forming unions. When it needs this space, it generally uses it intensively. The overall performance of your database becomes heavily dependent on the speed of the device containing the fourth type of file, the temporary file.
Tip
Adaptive Server Anywhere examines the following environment variables, in the order shown, to determine a directory in which to place the temporary file.
TMP
TMPDIR
TEMP
If none of these is defined, Adaptive Server Anywhere places its temporary file in the current directory, which is not a good location if you want the best performance.
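For example, on Windows you might point the TMP environment variable at a directory on a fast drive before starting the server; the path and server name here are illustrative:

```shell
set TMP=d:\fasttmp
dbsrv6 -n myserver mydb.db
```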
If your machine has a sufficient number of fast devices, you can gain even more performance by placing each of these files on a separate device. You can even divide your database into multiple data spaces located on separate devices. In such a case, group tables in the separate data spaces so that common join operations read information from different files.
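As a sketch (the dbspace name, file name, and table are illustrative), you create an additional data space with CREATE DBSPACE and then place tables in it using the IN clause:

```sql
-- Create a second data space on a separate physical drive.
CREATE DBSPACE library AS 'e:\data\library.db';

-- Tables placed IN this dbspace are stored in that file.
CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        VARCHAR(40)
) IN library;
```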
Another similar strategy is to place the temporary and database files on a RAID device. Although such devices act as a logical drive, they dramatically improve performance by distributing files over many physical drives and accessing the information using multiple heads.
For information about data recovery, see Backup and Data Recovery.
For information about transaction logs and the dblog utility, see Administration utilities overview.
If your application is running in autocommit mode, then Adaptive Server Anywhere treats each of your statements as a separate transaction. In effect, it is equivalent to appending a COMMIT statement to the end of each of your commands.
Each application interface has its own way of setting autocommit behavior. For the Open Client, ODBC, and JDBC interfaces, autocommit is the default behavior.
Instead of running in autocommit mode, consider grouping your commands so that each group performs one logical task. If you do disable autocommit, you must execute an explicit COMMIT after each logical group of commands. Also, be aware that if logical transactions are large and an isolation level of one or greater is used, blocking and deadlock can result.
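For example, with autocommit disabled, a transfer between two rows of a hypothetical account table forms one logical transaction ended by a single explicit commit:

```sql
UPDATE account SET balance = balance - 100 WHERE account_id = 1;
UPDATE account SET balance = balance + 100 WHERE account_id = 2;
COMMIT;  -- both updates are committed together
```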
The cost of using autocommit mode is particularly high if you are also not using a transaction log file. Every statement forces a checkpoint—an operation that can involve writing numerous pages of information to disk.
For more information about autocommit, see Setting autocommit or manual commit mode.
Excessive fragmentation of your hard disk can also degrade performance, and it becomes more important as your database increases in size. In particular, the Windows 3.x server cannot do direct (fast) reading and writing when the database file is very fragmented. Several utilities are available to defragment your hard disk; run one of them periodically. You can also put the database on a disk partition by itself to eliminate fragmentation problems.
If you are loading huge amounts of information into your database, you can benefit from the special tools provided for these tasks.
See Tuning bulk operations for methods to improve performance during bulk operations.