As of build 2000, SQL Anywhere 17.0 contains several minor enhancements and behavior changes.
To use some of these enhancements and changes, you must upgrade or rebuild your database. When you upgrade or rebuild a SQL Anywhere database to version 17.0 build 2000 or later, you cannot run the database on a database server earlier than version 17.0 build 2000.
SQL Anywhere now supports Mac OS X 10.11. When using Mac OS X 10.11, you may need to make the following adjustments to your application:
If you are using APIs that require libdbcapi, then set DYLD_LIBRARY_PATH or, if DYLD_LIBRARY_PATH does not work with System Integrity Protection enabled, use SQLANY_API_DLL. See Installing DBD::SQLAnywhere on Unix and Mac OS X, Installing sqlanydb on Unix and Mac OS X, The PHP extension on Unix and Mac OS X, Ruby programming, and XS JavaScript application programming.
If you are using Java with native libraries, such as JDBC, then set DYLD_LIBRARY_PATH or, if DYLD_LIBRARY_PATH does not work with System Integrity Protection enabled, use the -Djava.library.path=/path/to/sqlanywhere/lib64 command-line option. See How to load the SQL Anywhere JDBC driver.
If you are using external functions, then either set LD_LIBRARY_PATH or provide the full path to the shared library. See CREATE FUNCTION statement [External call] and CREATE PROCEDURE statement [External call].
If you are using a custom install layout, then you may still need to set DYLD_LIBRARY_PATH or, if DYLD_LIBRARY_PATH does not work with System Integrity Protection enabled, use SQLANY_API_DLL. Alternatively, use install_name_tool to provide additional search paths rather than relying on DYLD_LIBRARY_PATH.
Use the statement performance summary to identify and troubleshoot slow statements. The SYS_RUN_PROFILER_ROLE role is required to view the gathered data. See Tip: Identify the cause of slow statements.
The following changes have been made to support this update:
The GTSYSPERFCACHEPLAN system view contains a graphical execution plan for expensive statements. See GTSYSPERFCACHEPLAN system view.
The sp_top_k_statements system procedure identifies the specified number of statement/plan combinations with the longest runtime (a usage sketch follows this list). See sp_top_k_statements system procedure.
The sp_find_top_statements system procedure returns detailed performance statistics collected for each logged statement/plan combination. See sp_find_top_statements system procedure.
Specifying -k when starting the database server disables the statement performance summary for the Windows Performance Monitor. See -k database server option.
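The following is a minimal sketch of calling the new procedure; it assumes sp_top_k_statements takes the number of statement/plan combinations to return as its argument, so check the sp_top_k_statements documentation for the exact parameter list.

-- Assumes the procedure accepts the number of statement/plan combinations
-- to return. Viewing the gathered data requires the SYS_RUN_PROFILER_ROLE role.
CALL sp_top_k_statements( 10 );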
The new WHERE clause allows you to trace an event conditionally based on its properties. See CREATE TEMPORARY TRACE EVENT SESSION statement.
The new lagtime mirroring option specifies an approximate amount of time that a mirror server can lag behind the primary server in applying the transaction log. The primary server reduces its rate of transactions when the lag time approaches the value of this setting. See SET MIRROR OPTION statement and Performance considerations with database mirroring systems.
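For example, the following is a minimal sketch of setting the option; the value shown is an arbitrary placeholder, and the expected units and format are documented under the SET MIRROR OPTION statement.

-- '10' is a placeholder value; see the SET MIRROR OPTION statement for the
-- expected value format.
SET MIRROR OPTION lagtime = '10';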
Previously, you used the DATEPART function to obtain the millisecond and microsecond portions of a TIMESTAMP expression. Now, you can use the new MILLISECOND and MICROSECOND functions. Also, a new EXTRACT function returns the various date parts of a TIMESTAMP and TIMESTAMP WITH TIMEZONE expression. The EXTRACT function is in the ANSI/ISO SQL Standard. See EXTRACT function [Date and time], MILLISECOND function [Date and time], MICROSECOND function [Date and time], and EXTRACT function.
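The following is a minimal sketch of the new functions; the EXTRACT field keywords that are supported are listed in the EXTRACT function documentation.

SELECT MILLISECOND( CURRENT TIMESTAMP )          AS ms, -- millisecond portion
       MICROSECOND( CURRENT TIMESTAMP )          AS us, -- microsecond portion
       EXTRACT( HOUR FROM CURRENT TIMESTAMP )    AS hh; -- ANSI/ISO EXTRACT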
Performance counters have been changed from 32-bit to 64-bit. See DB_PROPERTY function [System] and sa_db_properties system procedure.
Previously, if an error was encountered when rebuilding a database (the dbunload -ac, -an, or -ar options), the rebuild operation stopped and the error had to be addressed before attempting the rebuild operation again. If there were many errors, this was a time-consuming process. The new -ae option allows you to request that the unload operation continue past any errors through to completion, and then decide how to address the errors. If a statement in the reload.sql script fails, the reload operation continues with the next statement instead of stopping. Statements that fail are recorded in the unprocessed.sql script and displayed in the command-line output unless the -q option is specified. This enhancement allows you to see all of the errors at once and decide whether to fix the reported errors and rebuild the database from scratch, or apply the unprocessed.sql script to the unfinished database. See Unload utility (dbunload).
Previously, if a global temporary table was referenced by a connection, the equivalent of a lock was placed on the table until the connection closed, regardless of whether or not the lock was needed. ALTER statements on the global temporary table by other connections were prohibited and failed until the connection holding the lock ended. Now, an explicit schema lock is placed on the table when a connection initially references it, but the lock is retained only if the connection references the table in a cursor or if the table has rows. If the table is not referenced by a cursor and does not have any rows, the schema lock is dropped during a commit or rollback, and other connections can execute ALTER statements on the table. Use the sa_locks system procedure to determine which global temporary tables have schema locks on them. See sa_locks system procedure.
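For example, a minimal sketch of checking for schema locks:

-- Returns the locks currently held; rows for global temporary tables
-- show whether a schema lock is still being retained.
CALL sa_locks();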
Previously, when starting an email session, the database server only supported PLAIN authentication of user IDs and passwords with SMTP servers. Now, the database server supports CRAM-MD5 authentication as well as PLAIN authentication.
When you use the xp_startsmtp system procedure with the smtp_auth_username and smtp_auth_password parameters, the database server uses CRAM-MD5 authentication. If the SMTP server does not support CRAM-MD5 authentication, then the database server uses PLAIN authentication. If you specify the -fips database server option, then only PLAIN authentication is used; the -fips database server option does not support CRAM-MD5 authentication.
See xp_startsmtp system procedure and -fips database server option.
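The following is a minimal sketch; the server name and credentials are placeholders, and named parameters are used so that the remaining xp_startsmtp parameters keep their defaults.

-- Placeholders only; supplying smtp_auth_username and smtp_auth_password
-- makes the database server attempt CRAM-MD5 first, falling back to PLAIN
-- if the SMTP server does not support it.
CALL xp_startsmtp(
    smtp_sender        = 'alerts@example.com',
    smtp_server        = 'smtp.example.com',
    smtp_auth_username = 'alerts@example.com',
    smtp_auth_password = 'secret' );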
The BEGIN PARALLEL WORK statement can improve performance by executing a list of CREATE INDEX statements or a list of LOAD TABLE statements in parallel. The number of statements that can execute at the same time is limited by the number of available logical processors on the computer running the database server, as well as by the settings of the -gtc option and the max_parallel_statements option.
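The following is a minimal sketch; the table and index names are hypothetical, and the exact block syntax is documented under the BEGIN PARALLEL WORK statement.

-- Hypothetical names; the two CREATE INDEX statements may run in parallel,
-- subject to the -gtc option and the max_parallel_statements option.
BEGIN PARALLEL WORK
    CREATE INDEX idx_orders_customer ON orders( customer_id );
    CREATE INDEX idx_orders_product  ON orders( product_id );
END PARALLEL WORK;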
The following statements, options, and utilities support this update:
BEGIN PARALLEL WORK statement. See BEGIN PARALLEL WORK statement.
CREATE INDEX statement. See CREATE INDEX statement.
LOAD TABLE statement. See LOAD TABLE statement.
The new max_parallel_statements option limits the number of statements inside a BEGIN PARALLEL WORK statement that can execute at the same time. See max_parallel_statements option.
The Unload utility (dbunload) supports a new -bp option to create reload.sql files that are optimized for the parallel execution of CREATE INDEX and LOAD TABLE statements. When the -bp option is specified, CREATE INDEX and LOAD TABLE statements are listed inside BEGIN PARALLEL WORK statements in the reload.sql file. See Unload utility (dbunload).
You can now use the LOAD TABLE statement to load content into a regular view. The view must be based on a base table and its definition must be the following:
CREATE VIEW [owner.]view-name AS SELECT * FROM base-table-name
To support this update, the GRANT statement now allows you to grant the LOAD privilege to a view.
See LOAD TABLE statement and GRANT statement.
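The following is a minimal sketch; the object names, user, and file name are placeholders.

-- Placeholders only: a view over all columns of a single base table,
-- the LOAD privilege granted on the view, then a LOAD TABLE into it.
CREATE VIEW app.v_employees AS SELECT * FROM app.employees;
GRANT LOAD ON app.v_employees TO etl_user;
LOAD TABLE app.v_employees FROM 'employees.csv';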
In the rare case where one of the delimiters within the location clause must be interpreted literally, you can specify an escape character that is used to escape the location delimiter. In these cases, this column contains the escape character that was specified during the proxy table creation.
See CREATE TABLE statement and CREATE EXISTING TABLE statement.
A new system privilege, ACCESS DISK INFORMATION, allows a user to access information about the total disk size and the remaining disk space by using the new sp_disk_info system procedure. This system privilege is included in the user-defined COCKPIT_ROLE role and is required to run the sp_disk_info system procedure. See System privileges.
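For example, a minimal sketch (the user name is a placeholder; check the sp_disk_info documentation for its optional arguments):

-- Placeholder user name; ACCESS DISK INFORMATION is needed to run
-- sp_disk_info unless the user already has the COCKPIT_ROLE role.
GRANT ACCESS DISK INFORMATION TO monitor_user;
CALL sp_disk_info();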
When connecting to the database server, clients now use RSA encryption to more securely encrypt the user password before they send it to the database server. This update does not apply to TDS connections. When using the GRANT CONNECT, CREATE USER, or ALTER USER statements to create or alter a user, the password in the statement is not encrypted. The software version of both the client and the database server must be at least version 17.0, build 2000.
When encrypting a database or enabling table encryption in a database, the software now uses Password-Based Key Derivation Function #2 (PBKDF2), which is part of the PKCS#5 standard. To protect encryption keys from brute-force attacks, the software repeatedly applies a cryptographic hash to the encryption key. By default, 2000 iterations are applied. You can change the number of iterations that are applied when you create the encrypted database.
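The following is a minimal sketch of setting the iteration count when creating an encrypted copy of a database; the file names and key are placeholders, 20000 is an arbitrary value, and the exact clause placement is documented under the CREATE ENCRYPTED DATABASE statement.

-- Placeholders only; raises the PBKDF2 iteration count from the default
-- of 2000 to 20000 for the new encrypted database file.
CREATE ENCRYPTED DATABASE 'secure.db'
    FROM 'plain.db'
    KEY 'S3cretKey'
    ALGORITHM 'AES256'
    KEY DERIVATION ITERATIONS 20000;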
The following features were changed to support this update:
New -kdi option for the Initialization utility (dbinit) and Unload utility (dbunload). See Initialization utility (dbinit).
New -li option for the Initialization utility (dbinit). See Unload utility (dbunload).
KEY DERIVATION ITERATIONS clause of the CREATE ENCRYPTED DATABASE and CREATE ENCRYPTED FILE statements. See CREATE ENCRYPTED DATABASE statement and CREATE ENCRYPTED FILE statement.
KeyDerivationIterations database property. See List of database properties.
Previously, only server-side certificates, which assure clients that they are connecting to a trusted database server, were supported. Now client-side certificates are also supported, ensuring that clients only connect to a database server if they are using a certificate that is signed by a source that the database server trusts.
See Using client-side certificates for TLS, Encryption (ENC) connection parameter, -ec database server option, identity protocol option, identity_password protocol option, and trusted_certificates protocol option.

Setting this option on the client allows the client libraries to trust a root certificate and database server certificate that have either expired or are not yet valid. Setting this option on the database server allows the database server to accept an identity file that has either expired or is not yet valid. See Encryption (ENC) connection parameter, -ec database server option, -xs database server option, Network protocol options, CREATE PROCEDURE statement [Web service], and xp_startsmtp system procedure.
A new connection parameter, ClientFileValidator, allows you to specify when the database server reads local files. If the database server determines that the request is directly from the client, or if a callback returns TRUE, then the file is transferred.
In previous releases, there was a validator in place for some clients that was called when a file request came from an unverified source. This update changes this behavior. To use the previous behavior, set the ClientFileValidator connection parameter to TrustedServer on the client.
See ClientFileValidator (CFV) connection parameter and LOAD TABLE statement.
Additionally, the SYSVIEW compatibility view has a new check_option column.
Previously, web service procedures only returned the first instance of a header value if the HTTP server returned multiple headers with the same name. Now, the HTTP web procedure result set allows multiple headers with the same name by including a third column in the result set that indicates the instance of the header. Web service procedures that specify multiple headers with the same name in the HEADERS clause now send all of those headers to the web server. Upgrade your database to version 17 SP1 to access this update.
A new database property, HttpClientMultipleHeaders, indicates whether the database supports multiple headers with the same name.
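For example, a minimal sketch of checking the property:

-- Returns the value of the new database property.
SELECT DB_PROPERTY( 'HttpClientMultipleHeaders' );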
See Result set retrieval from a web service, Quick start to using the database server as a web client, List of database properties, and Lesson 2: Sending requests from a web client and receiving responses.

The EXCEPTIONS option of the SET clause allows you to control status code handling. See CREATE PROCEDURE statement [Web service] and CREATE FUNCTION statement [Web service].
Previously, when the database reached its HTTP/HTTPS connection limit, additional attempts to connect to the database were rejected.
Now, when a database server reaches its HTTP/HTTPS connection limit, additional connection attempts are queued. When a connection becomes available, the first connection attempt in the queue is processed.
Standard connections are not queued. When a database server reaches its connection limit, any new standard connection attempt fails.
The personal database server reserves a minimum of 3 connections for standard connections.
The network database server does not reserve connections for standard connections. Reserve standard connections by using the reserved_connections database option.
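The following is a minimal sketch; 5 is an arbitrary value, and the allowed range is documented under the reserved_connections option.

-- Arbitrary value; reserves connections for standard (non-HTTP) clients
-- on the network database server.
SET OPTION PUBLIC.reserved_connections = 5;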
The following properties and options were added to support this update:
HttpConnectionsQueued, HttpQueueCount, HttpQueueMaxCount, and HttpQueueTimedOut database server properties. See List of database server properties.
reserved_connections option. See reserved_connections option.
Embedded SQL no longer restricts fetches of DT_VARCHAR, DT_NVARCHAR, and DT_BINARY to a maximum of 32765 bytes. The maximum length is now 32767. Embedded SQL continues to support puts (inserts) of 32767 bytes. See SQLDA sqllen field values when retrieving data, and Embedded SQL data types.
A correction to the Embedded SQL preprocessor (SQLPP) requires that applications using DECL_VARCHAR, DECL_NVARCHAR, or DECL_BINARY be recompiled with the preprocessor.
Embedded SQL now supports wide merges by using the ARRAY clause of the EXECUTE statement [ESQL]. See EXECUTE statement [ESQL], and Wide merges using Embedded SQL.
ODBC now supports wide merges using SQLSetStmtAttr (SQL_ATTR_PARAMSET_SIZE) and SQLBindParameter. See Executing statements with bound parameters.
JDBC now supports wide or batched merges using PreparedStatement.addBatch() and PreparedStatement.executeBatch(). See JDBC batch methods.
Wide merge is now supported in version 4 of the SQL Anywhere C API.
The .NET 4.0 version of the SQL Anywhere .NET Data Provider has been removed because Microsoft no longer provides security updates, technical support, or hotfixes for .NET 4.0. Later versions of .NET 4 continue to be supported by the SQL Anywhere .NET Data Provider.
Drivers are provided for versions of Node.js such as 0.10, 0.12, 4.x.y and 5.x.y. See Node.js application programming and XS JavaScript application programming.
Adapters have been implemented for DataNucleus and OpenJPA.
An adapter for SQL Anywhere has been added to DataNucleus. See http://www.datanucleus.org/products/datanucleus/datastores/rdbms.html.
An adapter for SQL Anywhere has been written for OpenJPA (adoption by the OpenJPA project is pending). See https://github.com/sqlanywhere/OpenJPA.
Drivers are provided for versions of Node.js such as 0.10, 0.12, 4.x.y and 5.x.y. See How to set up an OData server.
The minimum size of communication packets allowed by client connections and database servers is now 1000 bytes. Previously, the minimum was 500 bytes.
You do not need to upgrade clients to use this update. Older clients with minimum packet sizes less than 1000 that connect to a version 17 SP1 database server use the new minimum value. However, clients running version 17 SP1 software cannot connect to database servers that are running earlier versions of the software and are using a default packet size of less than 1000 bytes.
See CommBufferSize (CBSIZE) connection parameter and -p database server option.
Previously, geometries could exceed SRS boundaries by only fifty percent in each direction, after which an error was raised. The same error was also raised on a round-earth SRS if the geometry points exceeded the allowable range. Geometries can now exceed the SRS boundaries as long as all points can be represented in the SRS coordinate system and the geometry remains trivially valid. However, geometries that exceed SRS boundaries by more than fifty percent are not indexable.
A new ST_Geometry method, ST_IsIndexable, detects geometries that are not indexable. See ST_IsIndexable method, List of spatial predicates, and Indexes on spatial columns.
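For example, a minimal sketch (the table and column names are hypothetical, and the method is assumed to return 0 for geometries that are not indexable):

-- Hypothetical names; lists rows whose geometry cannot be indexed.
SELECT shape_id
FROM shapes
WHERE shape.ST_IsIndexable() = 0;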
The following changes were made to the Cockpit:
Previously, all connections to the database server, including those made by the Cockpit itself, were included in the connections counts for the database server. Now, connections made by the Cockpit are excluded from the connection counts.
Set up the Cockpit to send email notifications of alerts. Specify which type of alert requires an email notification and set the frequency at which the emails are sent. See Enabling the Cockpit to send alert emails.
An alert occurs when disk space is low on a drive that contains a database-related file, such as the database file, a dbspace, or the transaction log.
An alert occurs when a high percentage of database page lookups are not satisfied by finding the page in the cache.
Now, the COCKPIT_ROLE user-defined role includes the ACCESS DISK INFORMATION system privilege, which is required to access the full functionality of the Cockpit. See Upgrading version 17 databases to use the SQL Anywhere Cockpit.
When the Cockpit is running with a temporary database, you can convert it to a permanent database without losing your settings and alert history.
The cockpitdb option for the sa_server_option system procedure, the -cdb database server option, and the CockpitDB database server property were updated to support this update.
See sa_server_option system procedure, -cdb database server option, and List of database server properties.
The Profiler automatically analyzes your workload and highlights areas that you can focus on to improve performance. This report addresses questions about whether performance is limited by:
Previously, you could only run comprehensive profiling, which collected information about all activity occurring in your database. Now you can run targeted profiling, which reduces the amount of information collected to database server performance statistics and SQL statements which fulfill specified criteria. Targeted profiling allows you to profile databases, such as those in production environments, for extended periods of time. See Running a targeted profiling session (Profiler).
The Server Load panel includes two new values, Unscheduled requests and Cache Size. The Server Load panel also includes presets for choosing the values to graph, and options for configuring the graph.
When requested by Support, use the Profiler to take a snapshot of your database server's diagnostic information, save it to a file, and then email it to support. You can also use the Profiler to view your database server's statement performance summary.
See Sending SAP Support a snapshot of your database server's diagnostics (Profiler).
When you edit a filter expression in the Edit Filter Expression window, you now have access to and can use all clauses in the filter syntax grammar. As well, the user interface now includes embedded help with examples.
Start the Profiler by clicking Start » Programs » SQL Anywhere 17 » Administration Tools » SQL Anywhere Profiler. You can also start the Profiler by using the Profiler utility (dbprof). See Profiler utility (dbprof).
When you execute multi-line SQL statements from a file and one of those statements fails, Interactive SQL reports the file name and line number where the error occurs.
Connect and disconnect from databases and servers by using the toolbar menu items in Interactive SQL.
Specify the -we option to have Interactive SQL treat warnings in executed SQL statements as errors and display them as such. When the -q option is specified, Interactive SQL suppresses warnings unless the -we option is also specified. See Interactive SQL utility (dbisql).
Use the allow_expired_certs protocol option to accept a server certificate that has either expired or is not yet valid and continue with the synchronization. See allow_expired_certs MobiLink client network protocol option.
The MobiLink template utility (mltemplate) is a productivity tool for advanced MobiLink developers, which provides assistance in creating MobiLink synchronization scripts and other objects that accompany them. See Advanced feature: MobiLink Template utility.
Use the new MLTEMPLATE_JARS environment variable with this utility. See MLTEMPLATE_JARS environment variable.
The default values for the buffer_size client network protocol option have changed. The default value for Windows and Unix has been increased from 64 KB to 256 KB, and the default value for Android, iOS, Linux ARM, and Windows Phone/Store has been increased from 16 KB to 64 KB. The Windows Mobile default value remains unchanged at 16 KB. See buffer_size MobiLink client network protocol option.
Use the Fetch array size option to specify the number of rows fetched from an Oracle consolidated database. Increasing the number of rows can improve performance by reducing the number of round trips to fetch data.
Microsoft Windows: mlsrv17.exe, mlserv17.dll
Unix: mlserv17.dll, libmlserv17_r.so
When an UltraLite application opens multiple databases, requests on separate databases now run concurrently. See UltraLite concurrency.
This property returns a value that indicates whether the most recent synchronization failed after the upload was sent but before the upload acknowledgement was received from the synchronization server. See UltraLite database properties.
Three new methods have been added to the UltraLite C++ API that allow you to get or set column or parameter values through a byte array: ULResultSet::GetBytes, ULResultSet::SetBytes, and ULPreparedStatement::SetParameterBytes.
The UltraLite for WinRT API is now supported on Microsoft Windows 10.
UltraLiteJ is now supported on Android x86.
As well, UltraLiteJ for Android now supports data synchronization over TCP/IP, including with TLS. Two new classes, StreamTCPIPParms and StreamTLSParms, set the stream parameters for TCP/IP and TLS synchronization respectively. Two new constants in the SyncParms class, TCPIP_STREAM and TLS_STREAM, are used when creating a SyncParms object with the Connection.createSyncParms method.
Looking for API reference documentation? If you installed the documentation locally, use the Windows Start menu to access it (Microsoft Windows), or navigate to it under C:\Program Files\SQL Anywhere 17\Documentation.
You can also access SAP SQL Anywhere API reference documentation on the web at DocCommentXchange: http://dcx.sap.com.
The following documentation is available in the above locations: