


The creation phase

You must create tests that yield reliable results. Otherwise, you cannot legitimately compare results over time.

The following characteristics make a benchmark test effective and reliable:

  • Goal   Decide whether you want to capture a performance ratio or to measure the time required to process a command against a database. For the former, if you are testing SQL performance, you may want to run one or more statements repeatedly until a set time interval has expired. This test yields the throughput ratio, which can be summarized as follows:
    statement-number ÷ time-interval = throughput ratio 

  • Environment   Establish a test environment as your baseline and record the design and scope of it. If you cannot run the same test under the same conditions, you cannot legitimately compare results of that test. Additionally, the hardware and software you use in the lab as part of your benchmark test should match that of your production environment.

  • State   Reliable benchmark tests always start each iteration with the same action. Decide whether third-party applications should operate concurrently with UltraLite. If they affect performance, add them to the benchmark test design. For third-party applications that should not be running, always exit these applications completely: even minimized or idle applications or processes can skew results because they still consume memory.

  • Results   Capture benchmark results in a consistent way after each iteration of the test. Over time, the results can reveal a trend and help you determine which changes yield an improvement in UltraLite performance, whether in the database, in the application, or both.

  • Timing mechanism   Benchmark tests simulate user actions, so you typically track the elapsed execution time of each action. Ensure your timing mechanism is systematic so that execution times are accurately reflected in your test results.
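The throughput-ratio measurement and the timing mechanism described above can be sketched as follows. This is an illustration only, not UltraLite code: it uses Python's `sqlite3` module as a stand-in for an UltraLite connection, and the `throughput` helper and the sample statement are hypothetical names chosen for the example. The pattern of running one statement until a fixed interval expires and dividing the count by the interval is the same regardless of the database API.

```python
import sqlite3  # stand-in for an UltraLite connection; the timing pattern is identical
import time


def throughput(conn, statement, interval_seconds):
    """Run one statement repeatedly until the interval expires and
    return the throughput ratio: statement-number / time-interval."""
    count = 0
    # time.monotonic() is a systematic timing mechanism: it cannot
    # jump backward the way wall-clock time can, so elapsed times
    # are accurately reflected in the results.
    deadline = time.monotonic() + interval_seconds
    while time.monotonic() < deadline:
        conn.execute(statement)
        count += 1
    return count / interval_seconds


# Usage: an in-memory database keeps the sketch self-contained and
# ensures each benchmark iteration starts from the same state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
rate = throughput(conn, "SELECT COUNT(*) FROM t", 0.5)
print(f"{rate:.0f} statements/second")
```

Recording the interval length and the statement text alongside each result keeps iterations comparable, which is what makes the ratio meaningful as a trend over time.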
