Imports bulk data into a database table from an external file.
LOAD [ INTO ] TABLE [ owner.]table-name |view-name [ ( column-name, ... ) ] load-source [ load-option ... ] [ statistics-limitation-option ]
load-source : { FROM filename-expression | USING FILE filename-expression | USING CLIENT FILE client-filename-expression | USING VALUE value-expression | USING COLUMN column-expression }
filename-expression : string | variable
client-filename-expression : string | variable
value-expression : expression
column-expression : column-name FROM table-name ORDER BY column-list
load-option : ALLOW { integer | ALL | NO } ERRORS | BYTE ORDER MARK { ON | OFF } | CHECK CONSTRAINTS { ON | OFF } | { COMPRESSED | AUTO COMPRESSED | NOT COMPRESSED } | COMMENTS INTRODUCED BY comment-prefix | COMPUTES { ON | OFF } | DEFAULTS { ON | OFF } | DELIMITED BY string | ENCODING encoding | { ENCRYPTED KEY 'key' | NOT ENCRYPTED } | ESCAPE CHARACTER character | ESCAPES { ON | OFF } | FORMAT { TEXT | BCP | SHAPEFILE | XML row-xpath ( column-xpath, ... ) [ NAMESPACES namespaces ] } | HEXADECIMAL { ON | OFF } | MESSAGE LOG log-target | ORDER { ON | OFF } | PCTFREE percent-free-space | QUOTE string | QUOTES { ON | OFF } | ROW DELIMITED BY string | ROW LOG log-target | SKIP integer | STRIP { OFF | LTRIM | RTRIM | BOTH } | WITH CHECKPOINT { ON | OFF } | WITH { FILE NAME | ROW | CONTENT } LOGGING
statistics-limitation-option : STATISTICS { ON [ ALL COLUMNS ] | ON KEY COLUMNS | ON ( column-list ) | OFF }
comment-prefix : string
encoding : string
log-target : { FILE server-filename | CLIENT FILE client-filename | VARIABLE variable-name }
Use this clause to specify a regular view to load data into. The view must be based on a single base table and defined as follows:
CREATE VIEW [owner.]view-name AS SELECT * FROM base-table-name
Use this clause to specify one or more columns to load data into. Any columns not present in the column list become NULL if DEFAULTS is OFF. If DEFAULTS is ON and the column has a default value, then that value is used. If DEFAULTS is OFF and a non-nullable column is omitted from the column list, then the database server attempts to convert the empty string to the column's type.
When a column list is specified, it lists the columns that are expected to exist in the file and the order in which they are expected to appear. Column names cannot be repeated. Columns whose names do not appear in the list are set to NULL/zero/empty or DEFAULT (depending on column nullability, data type, and the DEFAULTS setting). Columns that exist in the input file that are to be ignored by LOAD TABLE can be specified using filler() as a column name.
Use this clause to specify the source from which data is loaded. Data can be loaded from several kinds of sources.
When the LOAD TABLE statement appears inside a BEGIN PARALLEL WORK statement, the load-source parameter supports only the FROM clause and the USING FILE clause.
Use this to specify a file. The filename-expression is passed to the database server as a string. The string is therefore subject to the same database formatting requirements as other SQL strings. In particular:
To indicate directory paths, the backslash character (\) must be represented by two backslashes.
The path name is relative to the database server, not to the client application.
You can use UNC path names to load data from files on computers other than the database server.
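For example, a server-side load from a Windows path might be sketched as follows (the table, columns, and file path are hypothetical; note the doubled backslashes required inside the SQL string):

```sql
-- Load from a file on the database server computer.
-- Backslashes in the path must be doubled inside the SQL string.
LOAD TABLE Sales ( id, amount, region )
FROM 'c:\\loads\\sales.txt'
DELIMITED BY ',';
```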
Use this clause to load data from a file. This is synonymous with specifying the FROM filename clause.
When the LOAD TABLE statement is used with the USING FILE clause, you can request progress messages.
You can also use the Progress connection property to determine how much of the statement has been executed.
Use this clause to load data from a file on a client computer. When the database server retrieves data from client-filename-expression, the data is not materialized in the server's memory, so the database server limit on the size of BLOB expressions does not apply to the file. Therefore, the client file can be of an arbitrary size.
Client-side data transfers are supported for clients that use the SQL Anywhere database drivers. They are not supported by the Tabular Data Stream (TDS) protocol used by SAP Adaptive Server Enterprise, the jConnect JDBC driver, or SAP Open Client applications.
File name logging is not allowed if the table is being loaded from a client file. If the logging type is not specified, then WITH CONTENT LOGGING is used.
When the LOAD TABLE statement is used with the USING CLIENT FILE clause, you can request progress messages.
You can also use the Progress connection property to determine how much of the statement has been executed.
Use the ClientFileValidator connection parameter to control when the database server is permitted to read and write local files.
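A minimal sketch of a client-side load, assuming a hypothetical table Sales and a file that exists on the client computer (the privilege and option requirements described later must also be satisfied):

```sql
-- The path is resolved on the client computer, not on the server.
LOAD TABLE Sales
USING CLIENT FILE 'c:\\clientdata\\sales.txt';
```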
Use this clause to load data from any expression of CHAR, NCHAR, BINARY, or LONG BINARY type, or BLOB string. The following are examples of how this clause can be used:
The following syntax uses the xp_read_file system procedure to get the values to load from the target file:
... USING VALUE xp_read_file( 'filename' )...
The following syntax specifies the value directly, inserting two rows with values of 4 and 5, respectively:
... USING VALUE '4\n5'...
The following syntax uses the results of the READ_CLIENT_FILE function as the value:
... USING VALUE READ_CLIENT_FILE( client-filename-expression )
In this case, you can also specify USING CLIENT FILE client-filename-expression since they are semantically equivalent.
If the ENCODING clause is not specified in the LOAD TABLE statement, then the encoding of value-expression is assumed to be the database CHAR character set (db_charset) if value-expression is of type CHAR or BINARY, and the database NCHAR character set (nchar_charset) if value-expression is of type NCHAR.
Use this clause to load data from a single column in another table. This clause is used by the database server when it replays the transaction log during recovery by replaying the LOAD TABLE...WITH CONTENT LOGGING statements. Transaction log records for LOAD TABLE...WITH CONTENT LOGGING statements comprise chunks of the original input file. When the database server encounters these chunks in the transaction log during recovery, it loads the chunks into a temporary table and then loads all the data from the original load operation.
The following clauses are required in the USING COLUMN clause:
The name of the base or temporary table that contains the column to load data from. When used by the database server during recovery from the transaction log, this is the table that holds the chunks of rows to be parsed and loaded.
The name of the column in table-name that holds the chunks of rows to be loaded.
One or more columns in the destination table used to sort the rows before loading the data. column-list must be a verifiable unique set of values, such as a primary key or a unique index on non-nullable columns included within the column list.
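Putting the required parts together, a USING COLUMN load might be sketched as follows (all names hypothetical; this form is normally generated by the database server itself during recovery rather than written by hand):

```sql
-- Reload row data held as chunks in load_staging.row_chunk,
-- sorted by the unique column id before the rows are inserted.
LOAD TABLE Sales
USING COLUMN row_chunk FROM load_staging ORDER BY id;
```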
There are several load options you can specify to control how data is loaded. The following list gives the supported load options:
This clause can only be specified once for the statement. The default value for this clause is 0, which means that a violation generates an error and the statement is rolled back. If you specify an integer n, then the statement is rolled back when error n+1 is encountered. The values ALLOW NO ERRORS and ALLOW 0 ERRORS are equivalent. This clause allows the database server to set problematic data aside and proceed with the load operation.
The database server reports the last error that was encountered to the user, and this error is also logged to the MESSAGE log. Rows that are written to the ROW log can be changed and used as input to a subsequent LOAD TABLE statement.
If a ROW LOG is written to a database server or client file, then its contents are written in the same character set as the original input file. If a MESSAGE LOG is written to a server or client file, then its contents are written in the client's language and in the client connection's CHAR character set. If a ROW or MESSAGE LOG is written to a CHAR or NCHAR variable, then it is written in the CHAR or NCHAR (respectively) character set.
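For instance, to tolerate up to 100 bad rows and capture diagnostics, the error and row logs can be combined as in the following sketch (table and file names hypothetical):

```sql
-- Continue past bad rows, recording errors and rejected row images
-- to server-side files for later inspection and reloading.
LOAD TABLE Sales
FROM 'c:\\loads\\sales.txt'
ALLOW 100 ERRORS
MESSAGE LOG FILE 'c:\\loads\\sales_errors.txt'
ROW LOG FILE 'c:\\loads\\sales_badrows.txt';
```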
Use this clause to specify whether the server should search for and interpret a byte order mark (BOM) at the beginning of the data. By default, this option is ON. If BYTE ORDER MARK is OFF, then the server does not search for a BOM.
If the ENCODING clause is specified:
If the ENCODING clause is not specified:
Use this clause to control whether constraints are checked during loading. CHECK CONSTRAINTS is ON by default, but the Unload utility (dbunload) writes out LOAD TABLE statements with CHECK CONSTRAINTS set to OFF. Setting CHECK CONSTRAINTS to OFF disables check constraints, which can be useful, for example, during database rebuilding. If a table has check constraints that call user-defined functions that are not yet created, then the rebuild fails unless CHECK CONSTRAINTS is set to OFF.
Use this clause to specify the string used in the data file to introduce a comment. When used, LOAD TABLE ignores any line that begins with the string comment-prefix.
Comments are only allowed at the beginning of a new line.
If COMMENTS INTRODUCED BY is omitted, then the data file must not contain comments because they are interpreted as data.
Specify COMPRESSED if the data being loaded is compressed in the input file. The database server decompresses the data before loading it. If you specify COMPRESSED and the data is not compressed, then the LOAD fails and returns an error.
Specify AUTO COMPRESSED to allow the database server to determine whether the data in the input file is compressed. If so, the database server decompresses the data before loading it.
Specify NOT COMPRESSED to indicate that the data in the input file is not compressed. You can also specify NOT COMPRESSED if the data is compressed, but you don't want the database server to decompress it. In this case, the data remains compressed in the database. However, if a file is both encrypted and compressed, then you cannot use NOT COMPRESSED without also using NOT ENCRYPTED.
By default, this option is ON, which enables recalculation of computed columns. Setting COMPUTES to OFF disables computed column recalculations. COMPUTES OFF is useful, for example, if you are rebuilding a database, and a table has a computed column that calls a user-defined function that is not yet created. The rebuild would fail unless this option was set to OFF.
The Unload utility (dbunload) writes out LOAD TABLE statements with COMPUTES set to OFF.
By default, DEFAULTS is set to OFF. If DEFAULTS is OFF, then any column not present in the list of columns is assigned NULL. If DEFAULTS is set to OFF and a non-nullable column is omitted from the list, then the database server attempts to convert the empty string to the column's type. If DEFAULTS is set to ON and the column has a default value, then that value is used.
Use this clause to specify the column delimiter string. The default column delimiter string is a comma; however, it can be any string up to 255 bytes in length (for example, ... DELIMITED BY '###' ...). The delimiter you specify is a string and should be quoted. To specify tab-delimited values, you could specify the hexadecimal escape sequence for the tab character (9), ... DELIMITED BY '\x09' ....
Use this clause to specify the character encoding used for the data being loaded into the database. The ENCODING clause cannot be used with the BCP format.
If a translation error occurs during the load operation, then it is reported based on the setting of the on_charset_conversion_failure option.
Specify the BYTE ORDER MARK clause to interpret a byte order mark in the data.
If the ENCODING clause is specified:
If the ENCODING clause is not specified:
Use this clause to specify encryption settings. When loading encrypted data, specify ENCRYPTED KEY followed by the key used to encrypt the data in the input file. The key can be either a string or a variable name.
Specify NOT ENCRYPTED to indicate that the data in the input file is not encrypted. You can also specify NOT ENCRYPTED if the data is encrypted, but you don't want the database server to decrypt it. In this case, the data remains encrypted in the database. However, if a file is both encrypted and compressed, then you cannot use NOT ENCRYPTED without also using NOT COMPRESSED.
Use this clause to specify the escape character used in the data. The default escape character for characters stored as hexadecimal codes and symbols is a backslash (\), so \x0A is the line feed character, for example. This can be changed using the ESCAPE CHARACTER clause. For example, to use the exclamation mark as the escape character, you would enter:
ESCAPE CHARACTER '!'
It is recommended that the string you specify for the escape character is no longer than one multibyte character.
Use this clause to control whether escape characters are recognized. With ESCAPES turned ON (the default), characters following the escape character (which defaults to \) are recognized and interpreted as special characters by the database server. Newline characters can be included as the combination \n, and other characters can be included in data as hexadecimal ASCII codes, such as \x09 for the tab character. A sequence of two backslash characters ( \\ ) is interpreted as a single backslash. A backslash followed by any character other than n, x, X, or \ is interpreted as two separate characters. For example, \q inserts a backslash and the letter q.
If you choose FORMAT TEXT (the default), input lines are assumed to be characters (as defined by the ENCODING option), one row per line, with values separated by the column delimiter string.
Specify FORMAT BCP to load Adaptive Server Enterprise-generated BCP out files.
Specify FORMAT SHAPEFILE to load ESRI shapefiles. The shapefile must be on the database server computer and must be loaded using FROM filename-expression or USING FILE filename-expression, where filename-expression refers to an ESRI shapefile with the .shp file extension. The associated .shx and .dbf files must be located in the same directory as the .shp file, and have the same base file name.
For FORMAT SHAPEFILE, the encoding defaults to ISO-8859-1 if the ENCODING clause is not specified.
If you specify FORMAT SHAPEFILE, then the following load options are allowed:
The LOAD TABLE statement gets the SRID from the second column type that you are loading into. For example, if you created the second column with type ST_Geometry(SRID=4326), then the geometries are loaded using SRID 4326. If your second column has type ST_Geometry (no explicit SRID), then the geometries are loaded using SRID 0.
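A shapefile load might be sketched as follows (table and file names hypothetical; here the second column, location, is assumed to be the spatial column whose type supplies the SRID):

```sql
-- The associated .shx and .dbf files must sit beside the .shp file.
LOAD TABLE WorldCities ( city_name, location )
USING FILE 'c:\\shapefiles\\cities.shp'
FORMAT SHAPEFILE;
```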
If you specify FORMAT XML, then the following load options are allowed:
If you use FORMAT XML, then the input file is parsed in the same way as a query that uses the OPENXML operator. The clauses of the LOAD TABLE statement correspond to the arguments of the OPENXML operator as follows:
LOAD TABLE statement clause | OPENXML operator argument | Details |
---|---|---|
row-xpath | xpath | |
- | flags | There is no way to specify a value with FORMAT XML that corresponds to the flags argument of OPENXML. |
NAMESPACES | namespaces | |
The FORMAT XML clause uses the following parameters:
A string or variable containing an XPath query. XPath allows you to specify patterns that describe the structure of the XML document you are querying. The XPath pattern included in this argument selects the nodes from the XML document. Each node that matches the XPath query in the row-xpath argument generates one row in the table.
Metaproperties can only be specified in FORMAT XML clause row-xpath arguments. A metaproperty is accessed within an XPath query as if it were an attribute. If namespaces is not specified, then by default the prefix mp is bound to the Uniform Resource Identifier (URI) urn:sap-com:sa-xpath-metaprop. If namespaces is specified, then this URI must be bound to mp or some other prefix to access metaproperties in the query. Metaproperty names are case sensitive. The following metaproperties are supported:
mp:id : Returns an ID for a node that is unique within the XML document. The ID for a given node in a given document may change if the database server is restarted. The value of this metaproperty increases with document order.
mp:localname : Returns the local part of the node name, or NULL if the node does not have a name.
mp:prefix : Returns the prefix part of the node name, or NULL if the node does not have a name or if the name is not prefixed.
mp:namespaceuri : Returns the URI of the namespace that the node belongs to, or NULL if the node is not in a namespace.
mp:xmltext : Returns a subtree of the XML document in XML form. For example, when you match an internal node, you can use this metaproperty to return an XML string, rather than the concatenated values of the descendant text nodes.
A string or variable that specifies the schema of the result set and how the value is found for each column in the result set. If a FORMAT XML clause expression matches more than one node, then only the first node in the document order is used. If the node is not a text node, then the result is found by appending all the text node descendants. If a FORMAT XML clause expression does not match any nodes, then the column for that row is NULL.
A string or variable containing an XML document. The in-scope namespaces for the query are taken from the root element of the document.
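Putting the pieces together, an XML load might be sketched as follows (the table, element names, and file path are hypothetical; each column XPath is evaluated relative to the node matched by the row XPath):

```sql
-- Each <row> element under /data produces one table row;
-- the id attribute and the amount child element supply the columns.
LOAD TABLE Sales ( id, amount )
FROM 'c:\\loads\\sales.xml'
FORMAT XML '/data/row' ( '@id', 'amount' );
```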
Use this clause to specify whether to read binary values as hexadecimal values. By default, HEXADECIMAL is ON. With HEXADECIMAL ON, binary column values are read as 0x nnnnnn..., where 0x is a zero followed by an x, and each n is a hexadecimal digit. It is important to use HEXADECIMAL ON when dealing with multibyte character sets.
The HEXADECIMAL clause can be used only with the FORMAT TEXT clause.
This clause can only be specified once for the statement. When an error is encountered while inserting or parsing a row, the database server writes the error to the specified location.
Use this clause to specify whether to sort the data when loading. The default for ORDER is ON. If ORDER is ON, and a clustered index has been declared, then LOAD TABLE sorts the input data according to the clustered index and inserts rows in the same order. If the data you are loading is already sorted, then set ORDER to OFF.
Use this clause to specify the percentage of free space you want to reserve for each table page. This setting overrides any permanent setting for the table, but only for the duration of the load, and only for the data being loaded. The value percent-free-space is an integer between 0 and 100. A value of 0 specifies that no free space is to be left on each page. Each page is to be fully packed. A high value causes each row to be inserted into a page by itself.
The QUOTE clause is for TEXT data only; it specifies the string that encloses string values. The default is a single quote (apostrophe).
Use this clause to specify whether strings are enclosed in quotes. When QUOTES is set to ON (the default), the LOAD TABLE statement expects strings to be enclosed in quote characters. The quote character is either an apostrophe (single quote) or a quotation mark (double quote); the first such character encountered in a string is treated as the quote character for the string. Strings must be terminated by a matching quote.
When QUOTES is set to ON, column delimiter strings can be included in column values. Also, quote characters are assumed not to be part of the value. Therefore, the following line is treated as two values, not three, despite the presence of the comma in the address. Also, the quotes surrounding the address are not inserted into the database.
'123 High Street, Anytown',(715)398-2354
To include a quote character in a value, when QUOTES is set to ON, you must use two quotes. The following line includes a value in the third column that is a single quote character:
'123 High Street, Anytown','(715)398-2354',''''
Use this clause to specify the string that indicates the end of an input record. The default row delimiter string is a newline (\n); however, it can be any string up to 255 bytes in length (for example, ROW DELIMITED BY '###'). To use a tab as the row delimiter, specify the hexadecimal escape sequence for the tab character, ROW DELIMITED BY '\x09'. If your delimiter string contains a \n, then it matches either \r\n or \n.
This clause can only be specified once for the statement. When an error is encountered while inserting or parsing a row, the database server writes an image of the input row to the specified location in addition to reporting the row to the user.
Use this clause to specify whether to ignore lines at the beginning of a file. The integer argument specifies the number of lines to skip. You can use this clause to skip over a line containing column headings, for example. If the row delimiter is not the default (newline), then skipping may not work correctly if the data contains the row delimiter embedded within a quoted string.
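For example, to ignore a single heading line at the top of a file (table and file names hypothetical):

```sql
-- The first line of the file contains column headings; skip it.
LOAD TABLE Sales
FROM 'c:\\loads\\sales_with_header.csv'
SKIP 1;
```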
Use this clause to specify whether unquoted values should have leading or trailing blanks stripped off before they are inserted. The STRIP clause accepts the following values:
STRIP OFF : Do not strip off leading or trailing blanks.
STRIP LTRIM : Strip leading blanks.
STRIP RTRIM : Strip trailing blanks.
STRIP BOTH : Strip both leading and trailing blanks.
The STRIP behavior is tied to the QUOTES clause. If you specify QUOTES OFF, then STRIP OFF, STRIP LTRIM, STRIP RTRIM, and STRIP BOTH work exactly as their wording suggests. If you do not specify a QUOTES clause or you specify QUOTES ON, then unquoted strings are always left-trimmed and right-trimmed (however, you can specify STRIP OFF or STRIP LTRIM if you don't want the strings to be right-trimmed as well).
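For example, to load unquoted values exactly as they appear in the file, with no trimming (names hypothetical; QUOTES OFF is specified so that the STRIP setting applies literally, as described above):

```sql
LOAD TABLE Sales
FROM 'c:\\loads\\sales.txt'
QUOTES OFF
STRIP OFF;
```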
Use this clause to specify whether to perform a checkpoint. The default setting is OFF. If this clause is set to ON, then a checkpoint is issued after the statement completes successfully and is logged. With FILE NAME LOGGING, if the database requires automatic recovery before that checkpoint is issued, then the data file used to load the table must be present for the recovery to complete successfully. If WITH CHECKPOINT ON is specified and recovery is subsequently required, then recovery begins after the checkpoint, and the data file does not need to be present.
The data files are required, regardless of what is specified for this clause, if the database becomes corrupt and you must use a backup and apply the current log file if you use FILE NAME LOGGING.
When the LOAD TABLE statement is inside the BEGIN PARALLEL WORK statement, the WITH CHECKPOINT ON clause is not supported. However, a checkpoint executes at the beginning of a BEGIN PARALLEL WORK statement.
Use this clause to control the level of detail logged in the transaction log during a load operation.
The levels of logging are as follows:
The WITH FILE NAME LOGGING clause causes only the LOAD TABLE statement to be recorded in the transaction log. To guarantee consistent results when the transaction log is used during recovery, the file used for the original load operation must be present in its original location, and must contain the original data. This level of logging has the best performance; however, do not use it if your database is involved in mirroring or on tables referenced by synchronization or replication publications. Also, this level cannot be used when loading from an expression or a client file.
When you do not specify a logging level in the LOAD TABLE statement, WITH ROW LOGGING is the default level when specifying:
The WITH ROW LOGGING clause causes each row that is loaded to be recorded in the transaction log as an INSERT statement. This level of logging is recommended for databases involved in synchronization and is the default for database mirroring when using FROM filename-expression or USING FILE filename-expression. However, when loading large amounts of data, this logging type can affect performance, and results in a much longer transaction log.
If there are no non-deterministic values, then WITH CONTENT LOGGING likely results in better performance.
This level is also ideal for databases where the table being loaded into contains non-deterministic values, such as computed columns, or CURRENT TIMESTAMP defaults.
When the LOAD TABLE statement is inside the BEGIN PARALLEL WORK statement, this clause is not supported.
The WITH CONTENT LOGGING clause causes the database server to copy the input file to the transaction log in chunks. These chunks can be reconstituted into a copy of the input file later, for example during recovery from the transaction log. When loading large amounts of data, this logging type has a very low impact on performance, and offers increased data protection, but it does result in a longer transaction log. This level of logging is recommended for databases involved in mirroring, or where it is desirable to not maintain the original data files for later recovery provided there are no non-deterministic values.
The WITH CONTENT LOGGING clause cannot be used if the database is involved in synchronization. The WITH CONTENT LOGGING clause is required if the table is being loaded from a client file.
When you do not specify a logging level in the LOAD TABLE statement, WITH CONTENT LOGGING is the default level when specifying:
USING CLIENT FILE client-filename-expression
USING VALUE value-expression
USING COLUMN column-expression
Similarly, when using a LOAD TABLE statement on a primary or root server, WITH CONTENT LOGGING is the default.
When the LOAD TABLE statement is inside the BEGIN PARALLEL WORK statement, this clause is not supported.
Allows you to limit the columns for which statistics are generated during the execution of LOAD TABLE. Otherwise, statistics are generated for all columns. Use this clause only if you are certain that statistics will not be used on some columns. You can specify ON ALL COLUMNS (the default), OFF, ON KEY COLUMNS, or a list of columns for which statistics should be generated.
LOAD TABLE allows efficient mass insertion into a database table from a file. LOAD TABLE is more efficient than the Interactive SQL statement INPUT.
You can execute multiple concurrent LOAD TABLE statements when you run in never-write mode using the -im database server option.
You can concurrently execute a list of LOAD TABLE statements by using the BEGIN PARALLEL WORK statement. When the LOAD TABLE statement is inside a BEGIN PARALLEL WORK statement:
You can only load content into base tables or global temporary tables that can be shared by all connections to the database.
Each LOAD TABLE statement must specify a different target table.
If the wait_for_commit database option is enabled for your connection, you must disable it before executing the BEGIN PARALLEL WORK statement.
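Under those constraints, a concurrent load might be sketched as follows (table and file names hypothetical; this assumes the block is delimited by BEGIN PARALLEL WORK and END PARALLEL WORK, so consult the BEGIN PARALLEL WORK documentation for the exact block syntax):

```sql
-- Each LOAD TABLE targets a different table and runs concurrently.
BEGIN PARALLEL WORK
    LOAD TABLE Sales FROM 'c:\\loads\\sales.txt';
    LOAD TABLE Customers FROM 'c:\\loads\\customers.txt';
END PARALLEL WORK;
```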
LOAD TABLE uses an exclusive schema lock. For base tables, global temporary tables, and local temporary tables, a commit is performed.
If you attempt to use LOAD TABLE on a table on which an immediate text index is built, or that is referenced by an immediate view, then the load fails. This does not occur for non-immediate text indexes or materialized views; however, it is strongly recommended that you truncate the data in dependent indexes and materialized views before executing the LOAD TABLE statement, and then refresh the indexes and views after.
Do not use the LOAD TABLE statement on a temporary table for which ON COMMIT DELETE ROWS was specified, either explicitly or by default, at creation time. However, you can use LOAD TABLE if ON COMMIT PRESERVE ROWS or NOT TRANSACTIONAL was specified.
With FORMAT TEXT, a NULL value is indicated by specifying no value. For example, if three values are expected and the file contains 1,,'Fred',, then the values inserted are 1, NULL, and Fred. If the file contains 1,2,, then the values 1, 2, and NULL are inserted. Values that consist only of spaces are also considered NULL values. For example, if the file contains 1, ,'Fred',, then values 1, NULL, and Fred are inserted. All other values are considered not NULL. For example, '' (a single quote followed by single quote) is an empty string. 'NULL' is a string containing four letters.
If a column being loaded by LOAD TABLE does not allow NULL values and the file value is NULL, then numeric columns are given the value 0 (zero), character columns are given an empty string (''). If a column being loaded by LOAD TABLE allows NULL values and the file value is NULL, then the column value is NULL (for all types).
If the table contains columns a, b, and c, and the input data contains a, b, and c, but the LOAD statement specifies only a and b as columns to load data into, then the following values are inserted into column c:
if DEFAULTS ON is specified, and column c has a default value, the default value is used.
if column c does not have a default defined for it and it allows NULLs, then a NULL is used.
if column c does not have a default defined for it and it does not allow NULLs, then either a zero (0) or an empty string ('') is used, or an error is returned, depending on the data type of the column.
LOAD TABLE captures column statistics when it loads data and uses them to create histograms on table columns. The histograms are used by the optimizer. The following are additional tips about loading and column statistics:
LOAD TABLE saves statistics on base tables for future use. It does not save statistics on global temporary tables.
If you are loading into an empty table that may have previously contained data, then it may be beneficial to drop the statistics for the column before executing the LOAD TABLE statement.
If column statistics exist when LOAD TABLE is performed on a column, then statistics for the column are not recalculated. Instead, statistics for the new data are inserted into the existing statistics. If the existing column statistics are out-of-date, then they are still out of date after loading new data into the column. If you suspect that the column statistics are out of date, then consider updating them either before, or after, executing the LOAD TABLE statement.
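For example, possibly stale statistics can be discarded before a large load into a previously used table, assuming the DROP STATISTICS statement is available on your server (table and column names hypothetical):

```sql
-- Discard old histograms so the load builds fresh ones.
DROP STATISTICS ON Sales ( amount, region );
LOAD TABLE Sales FROM 'c:\\loads\\sales.txt';
```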
LOAD TABLE adds statistics only if the table has five or more rows. If the table has at least five rows, then histograms are modified as follows:
Data already in table? | Histogram present? | Action taken |
---|---|---|
Yes | Yes | Integrate changes into the existing histograms |
Yes | No | Do not build histograms |
No | Yes | Integrate changes into the existing histograms |
No | No | Build new histograms |
LOAD TABLE does not generate statistics for columns that contain NULL values for more than 90% of the rows being loaded.
You can execute a LOAD TABLE statement with a dynamically constructed file name by assigning the file name to a variable and using the variable name in the LOAD TABLE statement.
If disk sandboxing is enabled, then database operations are limited to the directory where the main database file is located.
The required privileges depend on the -gl server option.
If the -gl option is set to ALL, then one of the following must be true:
If the -gl option is set to DBA, then you must have the LOAD ANY TABLE or ALTER ANY TABLE system privilege.
If the -gl option is set to NONE, then LOAD TABLE is not permitted.
When loading into a view, one of the following must be true:
When loading from a file on a client computer:
READ CLIENT FILE privilege is also required.
Read privileges are required on the directory being read from.
The allow_read_client_file database option must be enabled.
The READ_CLIENT_FILE feature must be enabled.
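A sketch of the server-side setup an administrator might perform before client-side loads, based on the requirements above (the user name is hypothetical):

```sql
-- Allow the connection to read files on the client computer.
GRANT READ CLIENT FILE TO load_user;
SET OPTION PUBLIC.allow_read_client_file = 'On';
```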
Automatic commit.
Inserts are not recorded in the transaction log file unless the WITH ROW LOGGING clause is specified, so depending on the logging type, the inserted rows may not be recoverable in the event of a failure. The original file is required if you must recover the rows and WITH FILE NAME LOGGING is used. In addition, the LOAD TABLE statement should not be used without the WITH ROW LOGGING clause in databases that are used as MobiLink clients, or in databases involved in SQL Remote replication, because these technologies replicate changes by analyzing the transaction log file.
The LOAD TABLE statement does not fire any triggers associated with the table.
A checkpoint is carried out at the beginning of the operation. A second checkpoint is performed at the end if WITH CHECKPOINT ON is specified.
Column statistics are updated if a significant amount of data is loaded.
Not in the standard.
Suppose you create a table, myTable, as follows:
CREATE TABLE myTable( a CHAR(100), let_me_default INT DEFAULT 1, c CHAR(100) );
Then you create an input file called c:\temp\input.txt and put the following data in it:
ignore_me, this_is_for_column_c, this_is_for_column_a
Now, you load the data from c:\temp\input.txt into myTable as follows:
LOAD TABLE myTable ( filler(), c, a ) FROM 'c:\\temp\\input.txt' FORMAT TEXT DEFAULTS ON;
The command SELECT * FROM myTable yields the result set:
a | let_me_default | c |
---|---|---|
this_is_for_column_a | 1 | this_is_for_column_c |
The following example executes the LOAD TABLE statement with a dynamically constructed file name by assigning the file name to a variable and using the variable in the statement:
CREATE PROCEDURE LoadData( IN from_file LONG VARCHAR )
BEGIN
    DECLARE path LONG VARCHAR;
    SET path = 'd:\\data\\' || from_file;
    LOAD TABLE MyTable FROM path;
END;
The following example loads UTF-8-encoded table data from a fictitious file into mytable:
LOAD TABLE mytable FROM 'c:\\temp\\mytable_data_in_utf8.dat' ENCODING 'UTF-8';
In this example, lines in a fictitious file c:\temp\input2.dat that start with // are ignored.
LOAD TABLE GROUPO.Employees FROM 'c:\\temp\\input2.dat' COMMENTS INTRODUCED BY '//'