The following is a list of suggestions to help you get the best performance out of MobiLink. Many factors contribute to the throughput of your synchronization system:
the type of device running your remote databases
the schema of remote databases
the data volume and synchronization frequency of your remotes
network characteristics (including for HTTP, proxies, web servers, and Relay Servers)
the hardware where the MobiLink server runs
your synchronization scripts
the concurrent volume of synchronizations
the type of consolidated database you use
the hardware where your consolidated database runs
the activity in the consolidated database, including all non-synchronization activity
the schema of your consolidated database
Testing is extremely important. Before deploying, you should perform testing using the same hardware and network that you plan to use for production. You should also try to test with the same number of remotes, the same frequency of synchronization, and the same data volume. The MobiLink replay tool can help with such testing. See MobiLink replay utility (mlreplay).
During this testing you should experiment with the following performance tips.
Avoid contention and maximize concurrency in your synchronization scripts.
For example, suppose a begin_download script increments a column in a table to count the total number of downloads. If multiple users synchronize at the same time, this script would effectively serialize their downloads. The same counter would be better in the begin_synchronization, end_synchronization, or prepare_for_download scripts because these scripts are called just before a commit so any database locks are held for only a short time. An even better approach would be to only count per remote ID and obtain the total later via a query.
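As a minimal sketch, the per-remote counter could be installed with a connection script along the following lines (the sync_count table, its columns, and the script version name v1 are assumptions, not part of MobiLink):

```sql
-- Hypothetical tracking table, one row per remote:
-- CREATE TABLE sync_count( remote_id VARCHAR(128) PRIMARY KEY, downloads INTEGER );

-- Count per remote ID just before the commit, so concurrent remotes
-- update different rows instead of serializing on one shared counter.
CALL ml_add_connection_script( 'v1', 'end_synchronization',
    'UPDATE sync_count
        SET downloads = downloads + 1
      WHERE remote_id = {ml s.remote_id}' );

-- Obtain the total later, outside the synchronization path:
-- SELECT SUM( downloads ) FROM sync_count;
```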
For information about the transaction structure of synchronization, see Transactions in the synchronization process.
Use the MobiLink -w option to set the number of MobiLink database worker threads to the smallest number that gives you optimum throughput. You need to experiment to find the best number for your situation.
A larger number of database worker threads may improve throughput by allowing more synchronizations to access the consolidated database at the same time, but with it comes an increased potential for contention and blocking.
Keeping the number of database worker threads small reduces the chance of contention in the consolidated database, the number of connections to the consolidated database, and the memory required for optimal caching.
Do not set the -w and -wu options in a production system without first verifying the optimal settings for these options.
Large uploads create large transactions in the consolidated database, and large transactions hold more locks for longer, which increases blocking and contention. This can have a significant adverse impact on both synchronization throughput and the consolidated database's overall throughput. Smaller uploads reduce blocking and contention, and may significantly improve throughput.
In a MobiLink synchronization system with SQL Anywhere remotes, smaller uploads can be sent via dbmlsync in one of two ways:
Use the -tu dbmlsync option for transactional uploads. Each transaction is sent separately. See -tu dbmlsync option.
Use the dbmlsync Increment (inc) extended option for incremental uploads. Each increment contains coalesced transactions. The bigger the increment, generally the more transactions are coalesced into one upload. See Increment (inc) extended option.
On the server side, performance can be tuned with the -tx option, which batches a number of client transactions together into a single consolidated-side transaction. A convenience of this option is that once the client-side option is set, you can tune -tx without having to change the clients. See -tx mlsrv12 option.
Test and tune these client-side and server-side options for maximum throughput.
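As a sketch, the two sides might be combined as follows (the connection strings, increment size, and -tx value are illustrative placeholders, not recommendations):

```shell
# Client: send the upload in increments of roughly 64 KB each
dbmlsync -c "DBF=remote.db;UID=dba;PWD=sql" -e "inc=64k"

# Server: apply up to 10 client transactions per consolidated-side transaction
mlsrv12 -c "DSN=consol" -tx 10
```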
It is inefficient to include a BLOB in a row that is synchronized frequently while the BLOB remains unchanged. To avoid this, you can create a table that contains BLOBs and a BLOB ID, and reference the ID in the table that needs to be synchronized.
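A sketch of such a schema in the consolidated database (the table and column names are assumptions for illustration):

```sql
-- Large, rarely changing BLOBs live in their own table...
CREATE TABLE product_image (
    image_id   INTEGER PRIMARY KEY,
    image_data LONG BINARY
);

-- ...while the frequently synchronized table carries only the ID.
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name       VARCHAR(128),
    price      DECIMAL(10,2),
    image_id   INTEGER REFERENCES product_image( image_id )
);
```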
Set the maximum number of MobiLink database connections to be your number of synchronization script versions times the number of MobiLink database worker threads, plus one. This reduces the need for MobiLink to close and create database connections. You set the maximum number of connections with the mlsrv12 -cn option.
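For example, with two script versions and five database worker threads, the suggested maximum is 2 × 5 + 1 = 11 connections (an illustrative command line; the connection string is a placeholder):

```shell
mlsrv12 -c "DSN=consol" -w 5 -cn 11
```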
Ensure that the computer running the MobiLink server has enough physical memory to accommodate the cache in addition to its other memory requirements. Consider moving to a 64-bit platform if the server needs more than a 1.5 GB memory cache.
The number of synchronizations being actively processed is not limited by the number of database worker threads. The MobiLink server can unpack uploads and send downloads for a large number of synchronizations simultaneously. Once a server starts paging to disk, its throughput will fall significantly so it is very important that the MobiLink server has a large enough memory cache to process these synchronizations without paging to disk. Look for warning 10082 in the server log, or the "Cache is full" alert from the SQL Anywhere Monitor for MobiLink to detect when the cache is too small.
The MobiLink server automatically grows or shrinks its memory cache as appropriate. Use the -cmax, -cmin and -cinit options to control the memory cache for the MobiLink server.
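For example (the sizes shown are placeholders to be determined by testing, not recommendations):

```shell
# Start with a 500 MB cache, never shrink below 500 MB, never grow past 4 GB
mlsrv12 -c "DSN=consol" -cinit 500M -cmin 500M -cmax 4G
```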
You should dedicate enough processing power to MobiLink so that the MobiLink server processing is not a bottleneck. Typically the MobiLink server requires significantly less CPU than the consolidated database. However, using Java or .NET row handling adds to the MobiLink server processing requirement. In practice, network limitations or database contention are more likely to be bottlenecks.
The performance of your scripts in the consolidated database is an important factor. It may help to create indexes on your tables so that the upload and download cursor scripts can efficiently locate the required rows. However, too many indexes may slow uploads.
When you use the Create Synchronization Model Wizard in Sybase Central to create your MobiLink applications, an index is automatically defined for each download cursor when you deploy the model.
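For example, a timestamp-based download cursor typically filters on a last-modified column, which an index can make efficient (a sketch: the customer table, its columns, and the script version v1 are assumptions):

```sql
-- Index the column the download cursor filters on.
CREATE INDEX ix_customer_last_modified ON customer( last_modified );

-- The matching download_cursor script can then locate changed rows via the index.
CALL ml_add_table_script( 'v1', 'customer', 'download_cursor',
    'SELECT customer_id, name, phone
       FROM customer
      WHERE last_modified >= {ml s.last_table_download}' );
```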
Use the minimum logging verbosity that is compatible with your business needs. By default, verbose logging is off, and MobiLink does not write its log to disk. You can control logging verbosity with the -v option, and enable logging to a file with the -o or -ot options.
As an alternative to verbose log files, you can monitor your synchronizations with the MobiLink Monitor. The MobiLink Monitor does not need to be on the same computer as the MobiLink server, and a Monitor connection has a negligible effect on MobiLink server performance. See MobiLink Monitor.
Operating systems restrict the number of concurrent connections a server can support over TCP/IP. If this limit is reached, which may occur when over 1000 clients attempt to synchronize at the same time, the operating system may exhibit unexpected behavior, such as unexpectedly closing connections and rejecting additional clients that attempt to connect. To prevent this behavior, either configure the operating system to have a higher TCP/IP connection limit and set the -nc option, or use the -sm option to specify a maximum number of remote connections that is less than the operating system limit.
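For example, if the operating system comfortably supports about 1000 concurrent TCP/IP connections, the server might be capped somewhat below that (an illustrative command line; the number is a placeholder):

```shell
# Accept at most 900 concurrent synchronizations, below the OS limit
mlsrv12 -c "DSN=consol" -sm 900
```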
When a client attempts to synchronize with a MobiLink server that has accepted its maximum number of concurrent synchronizations as specified by the -sm option, the client receives the error code -85 (SQLE_COMMUNICATIONS_ERROR). The client application should handle this error and try to connect again in a few minutes.
No significant throughput difference has been found between Java or .NET synchronization logic and SQL synchronization logic. However, Java and .NET synchronization logic have some extra overhead per synchronization and require more memory.
In addition, SQL synchronization logic is executed on the computer that runs the consolidated database, while Java or .NET synchronization logic is executed on the computer that runs the MobiLink server. So, Java or .NET synchronization logic may be desirable if your consolidated database is heavily loaded.
Synchronization using direct row handling imposes a heavier processing burden on the MobiLink server, so you may need more RAM, perhaps more disk space, and perhaps more CPU power, depending on how you implement direct row handling.
If you have some tables that you need to synchronize more frequently than others, create a separate publication and subscription for them. When using synchronization models in Sybase Central, you can do this by creating more than one model. You can synchronize this priority publication more frequently than other publications, and synchronize other publications at off-peak times.
Take care to download only the rows that are required, for example by using timestamp synchronization instead of snapshot. Downloading unnecessary rows is wasteful and adversely affects synchronization performance.
Overly frequent synchronization can place an unnecessary burden on the MobiLink synchronization system. Carefully decide how often you need to synchronize, and test thoroughly to ensure that performance expectations can be met in the production environment.
For SQL Anywhere clients, you can significantly improve the speed of uploading a large number of rows by providing dbmlsync with an estimate of the number of rows that will be uploaded. You do this with the dbmlsync -urc option.
See -urc dbmlsync option.
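For example, if a remote typically uploads on the order of 50 000 rows (an illustrative command line; the connection string and the estimate are placeholders):

```shell
dbmlsync -c "DBF=remote.db;UID=dba;PWD=sql" -urc 50000
```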
From the remote user's point of view, the more synchronization happens in the background, the less urgent it is for synchronizations to be as fast as possible. Consider designing your remote application to use background synchronization so that remote users can continue to work even when synchronizing.
Copyright © 2010, iAnywhere Solutions, Inc. - SQL Anywhere 12.0.0