Page compression is now supported for indices
create table a (b int)
sp_configure 'enable compression', 1
create index a1 on a(b) with index_compression = page
sp_help 'a'    # shows a1  b  nonclustered, compressed; contains compressed data
create index a2 on a(b) local index ip1 with index_compression = PAGE, ip2 with index_compression = PAGE, ip3
The session-level option is: set compression {default | on | off}
This command affects only leaf rows that are built for compressed indexes after the command is executed.
create table b (c int) with index_compression = PAGE|NONE
alter table order_line set index_compression = PAGE|NONE
alter table sales modify partition Y2009 set index_compression = PAGE
alter index a.b set index_compression = PAGE|NONE
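The syntax lines above can be combined into a worked sequence. A minimal sketch with hypothetical table, partition, and index names, assuming 'enable compression' is already configured:

```sql
-- Hypothetical names; set a table-wide default, then override per partition and per index.
alter table order_line set index_compression = PAGE
alter table sales modify partition Y2009 set index_compression = PAGE
alter index order_line.ol_idx set index_compression = PAGE
-- As noted above, only leaf rows built after the change are compressed;
-- rebuilding the index (e.g. reorg rebuild order_line ol_idx) compresses existing rows.
```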
This applies to parallel queries where:
SAP ASE version 16.0 and later moves the build part of the SORT operator below the Exchange Operator and keeps the sort-table reading part above the Exchange
The benefits of the new parallel SORT operator are:
The parallel SORT operator includes these restrictions:
SAP ASE 16.0 replaces the HASH JOIN operator with the HASH PROBE and the HASH BUILD operators, and includes the replicated EXCHANGE operator between these operators. The HASH BUILD operator builds the hash table and the HASH PROBE operator reads the inner stream and probes the hash table to find matching rows (in earlier releases of SAP ASE, the HASH JOIN operator performed both these steps).
A single worker thread executes the HASH BUILD operator, building a single hash table. The query engine passes this hash table through memory pipes to all producers executing the HASH PROBE operator. These producers share this hash table and probe it for matches to the joining columns in their inner streams. Earlier releases of SAP ASE required multiple producers to execute the HASH JOIN operator.
A query plan must include these attributes for the query processor to use the HASH JOIN improvements:
To improve performance for star join query evaluation, these changes have been made in version 16.0:
The query optimizer uses star joins when appropriate. However, SAP recommends that you enable the set join_bloom_filter option and parallel query processing when using star joins.
SAP ASE version 16.0 introduces the use fact_table abstract plan hint, which specifies the central fact table in a star join query, and triggers special query plan optimization strategies for the query.
PLAN '(use fact_table fact_table_name_or_alias_name)'
The use fact_table abstract plan hint allows the query processor to choose a parallel hash join plan for the star join query. Parallel plans enable the query processor to push bloom filter probings (which allow for faster joins between dimension and fact tables) below the EXCHANGE operator, further reducing the number of qualifying rows from fact tables.
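The syntax above can be attached to an ordinary star join query. A sketch with hypothetical table names, where sales is the central fact table and customers/products are dimension tables:

```sql
-- Hypothetical star schema; the hint names the fact table's alias.
select c.region, p.category, sum(s.amount)
from sales s, customers c, products p
where s.cust_id = c.cust_id
  and s.prod_id = p.prod_id
group by c.region, p.category
plan '(use fact_table s)'
```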
SAP ASE applies dynamic thread assignment to parallel lava query plans that are generated for select queries to use fewer resources
The following commands continue to use static thread assignment:
Dynamic thread assignment improves performance by:
Two enhancements improve load times for SAP Business Warehouse
ASE version 16.0 and later partitions each user log cache into a number of smaller blocks, each the size of the server's logical page size. SAP ASE version 16.0 adds log records directly to the current active block within the user log cache (in the same manner that earlier releases add log records directly to the user log cache). When concurrent open transactions make changes to the same data page, instead of moving log records directly from the user log cache to syslogs, SAP ASE adds (or links) the current block within the user log cache to the end of the global queue; it can later transfer the log records from the global queue to syslogs. This enables SAP ASE to efficiently batch additions to syslogs, improving the performance of run-time logging, regardless of issues associated with datarow-locked tables.
After the current block is added to the global queue, a new free block within the user log cache becomes the current block, and continues to accept log records from the transaction.
Use the "user log cache queue size" configuration parameter to enable and disable this functionality.
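A minimal sketch of toggling the parameter named above; the values are an assumption (1 to enable the ULC block queue, 0 to disable it):

```sql
-- Assumed values: 1 enables the user log cache block queue, 0 disables it.
sp_configure 'user log cache queue size', 1
```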
ASE 16 reduces CPU utilization for latch conflicts with very high transaction rates
There is decreased contention:
The metadata and latch management enhancements are enabled by default; you need not perform any configuration to enable these enhancements.
ASE 16 allows setting exp_row_size for tables with only fixed length columns to reduce contention on a single page
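A sketch of setting an expected row size on a fixed-length-only table; the table name and sizes are hypothetical. Reserving space per row spreads concurrently inserted rows across more pages, reducing single-page contention:

```sql
-- Hypothetical fixed-length-only table with a reserved per-row size.
create table counters (id int, val int, pad char(50)) with exp_row_size = 100
-- Or on an existing table, via the standard attribute (as already used for
-- variable-length tables):
sp_chgattribute counters, 'exp_row_size', 100
```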
ASE 16 increases the default amount of procedure cache used for the Engine Local Cache (ELC) from 25% to 50%
This is controlled by the new parameter "engine local cache percent"
Additional parameters are "enable large chunk elc" (replaces traceflag 758) and "large allocation auto tune" (replaces traceflag 753)
Defaults: the static parameter "enable large chunk elc" is enabled, and the static parameter "large allocation auto tune" is enabled
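The parameters above can be set with sp_configure; a sketch using the values described in the text (static parameters take effect after a restart):

```sql
sp_configure 'engine local cache percent', 50   # default now 50 (was 25)
sp_configure 'enable large chunk elc', 1        # static; replaces trace flag 758
sp_configure 'large allocation auto tune', 1    # static; replaces trace flag 753
```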
Lock management improvements are:
Locks are now identified by database, object, and partition; -1 = all partitions
Enable with sp_chgattribute object, 'ptn_locking', 1
sp_lock, sp_familylock, monLocks, and monDeadLock now include partitionid
Set partition lock promotion thresholds with sp_setpglockpromote_ptn, sp_setrowlockpromote_ptn, sp_droppglockpromote_ptn, and sp_droprowlockpromote_ptn
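A sketch tying the procedures above together; the object and partition names are hypothetical, and the threshold parameter order is an assumption (mirroring sp_setpglockpromote: low water mark, high water mark, percent):

```sql
-- Hypothetical names; assumed parameter order (object, partition, LWM, HWM, PCT).
sp_chgattribute order_line, 'ptn_locking', 1             -- enable partition locking
sp_setpglockpromote_ptn order_line, ptn1, 400, 400, 100  -- per-partition promotion threshold
sp_droppglockpromote_ptn order_line, ptn1                -- revert to table-level thresholds
```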
Partition-level operations now block only operations on the same partition; other partitions are not affected
alter table x [merge|drop|move|split] partition...with online
truncate table x partition y with online
For partition-level online operations, such as splitting a partition or moving a partition, the table must have a local unique index.
The move partition command allows concurrent DML on all partitions in the table, including partitions being operated on by split.
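A worked sketch of an online split, with hypothetical table and partition names; per the note above, the table needs a local unique index (here the partition key is included in the index so it can be local and unique):

```sql
-- Hypothetical range-partitioned table with the required local unique index.
create table sales_p (id int not null, d date not null)
  partition by range (d)
  (p2013 values <= ('12/31/2013'),
   p2014 values <= ('12/31/2014'))
create unique index sales_idx on sales_p (id, d) local index
-- Split one partition while concurrent DML continues on the others:
alter table sales_p split partition p2014
  into (p2014h1 values <= ('06/30/2014'),
        p2014h2 values <= ('12/31/2014'))
  with online
```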
With a global index:
Schema locks allow enhanced partition-level operations to update table schema or metadata by achieving isolation from concurrent operations.
The new schema locks are:
The number of tables in a select query has been increased from 50 to 250
The number of subqueries in a select query has been increased from 50 to 250
The maximum number of columns allowed in an order by clause has been increased from 31 to 400
First, create a database encryption key (DEK) in the master database; this key is used to encrypt the database
To create a database encryption key (DEK):
Options for creating a database encryption key are:
create encryption key <name> [for <algorithm>] for database encryption [with {[master key] [key_length 256] [init_vector random] [[no] dual_control]}]
Currently algorithm must be AES and key_length must be 256
init_vector must be random for full database encryption
The encryption key can also be encrypted for dual control
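A minimal sketch of the syntax above, assuming a master key already exists in the master database; the key name is hypothetical:

```sql
use master
create encryption key dek1 for AES
  for database encryption
  with master key key_length 256 init_vector random
```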
Use "alter encryption key" to change how the encryption key is protected and to change its owner
You must back up the database encryption key, the master (or dual master) key, and the encrypted database
E.g. use ddlgen to generate the sql for creating the keys
To load an encrypted database, restore the master key and the DEK, then create the database for encryption with the same DEK as the original database
load database with verify only = full is not supported, as the Backup Server cannot decrypt the data
You cannot mount or unmount an encrypted database
Keys can be dropped via "drop encryption key"
create database <name> encrypt with <keyname>
You cannot encrypt an in-memory database
An existing database can be encrypted via:
alter database <name> encrypt with <keyname> [parallel N]
alter database <name> resume encryption [parallel N]
alter database <name> suspend encryption
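The commands above can be combined into a worked sequence; the database, device, and key names are hypothetical:

```sql
-- Create a new database encrypted from the start:
create database salesdb2 on datadev1 = 100 encrypt with dek1
-- Or encrypt an existing database in the background:
alter database salesdb encrypt with dek1 parallel 2
alter database salesdb suspend encryption              -- pause, e.g. during peak load
alter database salesdb resume encryption parallel 2    -- continue where it left off
```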
To check whether a database is encrypted, and to view encryption progress, either use sp_helpdb or
Databases can be marked as sensitive and have residual data removed (zero-ed out)
Residual data cannot be removed from system databases, e.g. master, sybsystemdb, sybsystemprocs
sp_dboption dbname, "erase residual data", true              # Database level
create table a (b int) with "erase residual data" {on | off} # Table level - overrides database level
alter table a set "erase residual data" {on | off}           # Table level
set erase_residual_data {on | off}                           # Session level - overrides database and table level settings
Full Text Auditing prints parameter names and values with sensitive parameters masked when DML Auditing is enabled
Full-Text Auditing is recorded for select, insert, update, delete, and select into
ASE-16_0/bin/auditinit
# Restart ASE
sp_configure 'auditing', 1
sp_audit "update", "all", "a", "on"
insert into a values (1)
update a set b=2 where b=1
use sybsecurity
select extrainfo from sysaudits_01
# extrainfo shows: sa_role sso_role oper_role sybase_ts_role mon_role; update a set b=2 where b=1; ; ; ; ; sa/ase;
See also Security Administration Guide
The audit option sproc_auth enables auditing for authorization checks that are performed inside system stored procedures.
The audit event 80 is audited when the audit option security is enabled, or when the audit option sproc_auth is enabled.
The audit event 146 is only audited when the option sproc_auth is enabled.
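A sketch of enabling the option named above; the sp_audit argument pattern follows the earlier auditing example, and the 'all'/'all' scope arguments are an assumption:

```sql
-- Assumed invocation; enables auditing of authorization checks performed
-- inside system stored procedures (event 146; event 80 is also recorded
-- when either 'security' or 'sproc_auth' is enabled).
sp_audit 'sproc_auth', 'all', 'all', 'on'
```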
Multiple triggers can be created and the order in which they fire can be controlled
Up to 50 triggers for each command (insert, update, delete) can be created, with the new "order" parameter specified.
Multiple triggers can be created without an "order" clause
create or replace trigger [owner].trigger_name on [owner].tablename for [insert|delete|update] [order <integer>] as <sql_statement>
Triggers created without an order have an order of 0 and fire after triggers defined with an order
The set of triggers created without an order fire as a set in an undefined order
The "order" clause cannot be used with "instead of" triggers
Attempting to create a duplicate "order" number is an error
To change the "order" number for a trigger recreate the trigger - potentially via "create or replace"
The "merge" statement fires triggers in a fixed operation order (insert, update, delete), which overrides the "order" number
Within each operation, e.g. "insert", the triggers fire in "order" number sequence
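A sketch of the ordering rules above, with hypothetical trigger and table names:

```sql
-- Lower "order" fires first; triggers without an order have order 0
-- and fire after all ordered triggers, in an undefined sequence.
create or replace trigger trg_audit on t for insert order 1 as
    print 'fires first'
create or replace trigger trg_cascade on t for insert order 2 as
    print 'fires second'
create or replace trigger trg_misc on t for insert as
    print 'fires after the ordered triggers'
```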
A rollback statement in a trigger rolls back the previous triggers and does not fire any more triggers
Multiple triggers can be enabled/disabled via alter table tablename [disable|enable] trigger triggername
Global variable @@triggername contains the name of the currently executed or last executed trigger
SAP ASE 16 with SAP Replication Server 15.7 SP200 adds several enhancements
ASE 16 adds a native ODBC interface to CIS which allows direct connection to HANA
The SAP HANA Client Package must be installed on the same machine as ASE
In Windows use the "HDBODBC" ODBC driver
Also add the HANA Server to the interfaces file
UNIX:
HANA_DB
    query tcp ether 157.133.66.75 1870 libodbcHDB.so
Windows:
[HANA_DB]
Query=NLWNSCK,157.133.66.75,1870,HDBODBC
SAP ASE uses a Pluggable Component Interface (PCI) Bridge which implements on-demand software dispatching, to load shared objects when it invokes a target function.
The ODBC Driver Manager is typically bootstrapped from the pluggable component adapter for ODBC (PCA/ODBC), which is configured with the PCI Bridge.
PCA/ODBC acts as a broker, managing service requests between the SAP ASE and the ODBC Driver Manager.
PCA/ODBC forwards and controls requests in both directions—from the SAP ASE to ODBC Driver Manager, and vice versa.
SAP ASE requires the sybpcidb database when you enable PCI Bridge. The sybpcidb contains all configuration data for PCI Bridge and its associated PCAs, such as PCA/ODBC.
Datatype mappings are listed at Datatype Mapping Between SAP ASE and HANA
Restrictions are listed at Restrictions
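A sketch of wiring the pieces above together through CIS; the server class 'hanaodbc', the login mapping, and the object names are assumptions for illustration, and the HANA_DB name refers to the interfaces entry shown earlier:

```sql
-- Assumed server class and hypothetical names; maps a HANA table as a proxy table.
sp_addserver HANA_SRV, hanaodbc, HANA_DB
sp_addexternlogin HANA_SRV, sa, hana_user, hana_password
create proxy_table hana_sales at 'HANA_SRV..MYSCHEMA.SALES'
select count(*) from hana_sales
```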
create or replace has been added to the following commands:
create or replace is only supported for objects which do not contain data
When an object is replaced, SAP ASE replaces its definition in the following system tables: sysprocedures, syscomments, sysdepends, and syscolumns. Some fields in the sysobjects table are also updated. The query tree for the object is normalized before being replaced in sysprocedures.
With granular permissions enabled or disabled, you must be the object owner to replace a compiled object. You cannot replace a compiled object by impersonating the object owner through an alias or setuser. However, if you are the owner through set proxy, you can replace a compiled object.
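A minimal sketch of the behavior described above, with a hypothetical procedure; the object definition is replaced in place (per the system-table updates noted earlier), without a prior drop:

```sql
create procedure get_version as
    select @@version
go
-- Later, replace the definition without dropping the object first:
create or replace procedure get_version as
    select @@version, @@servername
go
```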
Stored procedures used in install scripts have been changed to use create or replace.
The default sizes for system databases have been increased to account for the additional log/data space requirements for create or replace
Default system database size changes are listed at Data and Log Segment Changes
The Compression Advisor can be used to estimate space savings for row-, page-, and column-level compression
The SCC backup scheduling and task management features can be used to control backups
A new Create wizard is provided to implement these commands
Enable and disable partition locking and provide an 'online' option for move, merge, split, and delete partition operations.
Enhanced Create Table and Create Index wizards to support index compression
Error logs can be imported from multiple servers
Alerts can be configured when the percentage utilization of any resource (sp_monitorconfig) exceeds a configurable threshold
Support up to 250 managed resources
Allow removal of residual data when a database, table, or index is dropped
Enhance the Monitoring View for SAP ASE with metrics for data cache spinlock contention.
The sp_confighistory system procedure manages the history of configuration changes, and stores data about the changes in the sybsecurity database.
Tracked items include
The sybsecurity database must be installed to track these changes
sp_confighistory displays SAP ASE configuration changes, including which configuration option has been changed, the old and new values, the user who made the change, and when the change was made.
The ch_events view can be queried in the sybsecurity database to see these changes
The ch_events view collects information from audit tables and can give errors if audit tables are dropped or created
Use sp_confighistory create_view to update ch_events when you add or remove audit tables.
To enable Configuration History Tracking:
The ch_events view does not record changes if the new value is the same as the old value.
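A sketch of inspecting the history described above; calling sp_confighistory with no arguments is an assumption, and the maintenance call follows the form given earlier:

```sql
use sybsecurity
select * from ch_events                 -- configuration changes recorded so far
sp_confighistory                        -- assumed: no-argument form displays changes
sp_confighistory create_view            -- rebuild ch_events after audit tables change
```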
CRCs can be added to database dumps
dump database database_name to dump_device with compression=n,verify={crc | read_after_write}
load database database_name from dump_device with verify[only]=crc
This feature requires a version of Backup Server that includes this functionality; otherwise, dumps are not readable because the backup format changes.
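A worked sketch of the dump/load syntax above; the database name, dump path, and compression level are hypothetical:

```sql
-- Hypothetical names; compressed dump with CRC, then verification and load.
dump database salesdb to '/backups/salesdb.dmp' with compression = 6, verify = crc
load database salesdb from '/backups/salesdb.dmp' with verifyonly = crc   -- check only
load database salesdb from '/backups/salesdb.dmp' with verify = crc
```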
sp_logging_rate displays the minimum, maximum, and average rate of transaction log growth, in gigabytes per hour, for the period of time you run the system procedure, providing the result as an averaged sum of the calculations, or as iterative results.
Display a summary of the log growth for the transaction log over a 24-hour period, calculating the growth in one-hour intervals:

sp_logging_rate 'sum', '1,00:00:00', '01:00:00'

=========================
Total Summary Information
=========================
Transaction Log Growth Rate
      Min GB/h        Max GB/h        Avg GB/h
--------------  --------------  --------------
      0.000000        1.970084        1.566053
SAP ASE 16 allows administrators to configure, record, and list SAP ASE Resource Governor threshold violation events.
This new feature uses the monThresholdEvent table to record and report threshold events, capturing all violations of configured resource limits.
Set the report action using sp_add_resource_limit
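A sketch of the standard sp_add_resource_limit call referenced above (parameters: name, appname, rangename, limittype, limit_value, enforced, action, scope); the specific values are illustrative assumptions:

```sql
-- Illustrative values: limit io_cost for isql sessions at all times,
-- enforced during execution (2), action 1 = issue a warning, scope 1 = query.
sp_add_resource_limit NULL, 'isql', 'at all times', 'io_cost', 100, 2, 1, 1
-- Violations of configured limits are then recorded in monThresholdEvent.
```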
In previous releases of SAP ASE, certain update statistics options were attached to a column once they were executed for the first time, as their 'stickiness' bits were automatically set in the catalog. A new feature in an upcoming release of SAP ASE 16 will provide a way to report and unset this stickiness behavior.
ddlgen can create DDL of fully encrypted databases and database encryption keys.
sybmigrate can migrate fully encrypted databases - use ddlgen to create the same encryption keys on source and target
sybrestore can now handle master database corruption, using srvbuild (UNIX)/sybatch (Windows) to rebuild the instance and then restoring user databases.
dbisql adds an improved Excel import/export capability through a generic ODBC plug-in and offers more options to control results and message output
Additionally, it offers an improved Connect dialog, which can now encrypt the password by default, start a discovered/down SAP ASE, and connect to the last-used database instead of the default database.
With the introduction of DTA, the Query Plan and Execution Statistics in HTML feature has been updated to correctly reflect the new parallel execution model.
The HTML representation reports the execution statistics per plan fragment execution (or work units), allowing several executions of the same plan fragment to be reported separately. To better identify the work unit execution following the new DTA model, for each plan fragment execution, the output is indicated by "Work unit execution" and provides the SPID for the thread and additional values such as the Root operator identifier (for the plan fragment) and producer ID.
The SET STATISTICS QUERY_NAME_HTML command helps differentiate or identify files that are related to multiple executions of the same query.
SAP ASE uses SAP JRE to support Java applications which by default is installed in $SYBASE/shared/SAPJRE-7_*
The installer automatically sets the SAP_JRE7, SAP_JRE7_32, and SAP_JRE7_64 environment variables.
For IBM AIX, set the data size resource limit to unlimited when using any Java application: limit datasize unlimited