8. DBA Issues


8.21 How do I set up the High Performance Loader (HPL)?

On 23rd Jan 1998 lester@advancedatatools.com (Lester Knutsen) wrote:-

A couple of things: you must run ipload on an X Windows system and have the DISPLAY environment variable set. Also, if you are running it as a different user than the one you logged in as, you need to issue the "xhost +" command to allow access to the display. (For example, if I log in as lester, then su to informix and try to run ipload, it will fail unless I have executed the xhost command first.)
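The sequence Lester describes looks roughly like this (host and user names are only examples):

```shell
# On the machine whose X display you are sitting at, as the login user:
xhost +                       # allow other users/hosts to open windows here

# In the session that will run ipload (e.g. after su - informix):
DISPLAY=myworkstation:0.0     # hypothetical display name; use your own
export DISPLAY
ipload &
```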

8.22 How do I access NT databases from UNIX?

On 13th Oct 1998 helmut.leininger@bull.de (Helmut Leininger) wrote:-

Between Unix systems I would set up .rhosts or hosts.equiv. But how can I do something equivalent on NT systems?

I found the solution myself. You have to put a hosts.equiv file into c:\winnt\system32\drivers\etc.
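For example, a minimal hosts.equiv might list one trusted UNIX host per line, optionally followed by a user name (the host names here are made up):

```shell
# c:\winnt\system32\drivers\etc\hosts.equiv
unixhost1
unixhost2 informix
```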

8.23 How do I use SMI tables to locate logical logs?

On 19th Feb 1998 aroustv@towers.com (Vardan Aroustamian) wrote:-

 
select  number, uniqid, physchunk(physloc) physloc, chknum, name
from syslogfil l, syschktab c, sysdbstab d
where    physchunk(physloc) = c.chknum and
        c.dbsnum = d.dbsnum
order by 1;

This works in OnLine 7.2x.

8.24 How do I use SMI tables to find table space usage?

On 3rd Mar 1998 Martin.Berns@Materna.De (Martin Berns) wrote:-

The following SELECT (against sysmaster) should do what you want:

select s.name dbspace, n.dbsname database,
sum(ti_nptotal) total,  sum(ti_npused) used,
sum(ti_npdata) data, sum(ti_npused) - sum(ti_npdata) idx
 from systabinfo i,systabnames n, sysdbspaces s
  where i.ti_partnum = n.partnum and partdbsnum(i.ti_partnum)=s.dbsnum 
group by 1,2
order by 1,2

8.25 Why does oncheck warn me about more than 8 extents?

On 10th Mar 1998 satriguy@aol.com (SaTriGuy) wrote:-

If you are on anything since 7.11, I wouldn't worry too much about the "less than 8 extent" rule. In version 5, we used an extent table in memory for each tablespace which was only 8 entries large. For any extent past the eighth, we had to examine the systables table for that database directly. This is why we made such a big deal about the "eight extents".

With 7.11+, we dynamically allocate memory so that all of the extents are in memory. Thus the "fewer than 8 extents" rule is not nearly so important.

However, if you are really going to have this many tables on a 7.1x or 7.2x system, you might want to increase your in-memory dictionary somewhat. This is managed by the DD_HASHSIZE and DD_HASHMAX onconfig variables. These are undocumented, so you will need to contact tech support for information on how to use them.
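Purely as an illustration (these parameters were undocumented, so treat the values as placeholders and check with tech support), the onconfig entries would look like:

```shell
# $INFORMIXDIR/etc/$ONCONFIG -- illustrative values only
DD_HASHSIZE  31     # number of hash buckets in the data-dictionary cache
DD_HASHMAX   10     # maximum entries per hash bucket
```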

8.26 Why does the High Performance Loader give no errors?

On 20th Jan 1998 jparker@epsilon.com (Jack Parker) wrote:-

Check the violations tables. If you did not name them specifically they will be named table_vio and table_dia in your database - the vio table holds the rejected rows and the dia table holds the reasons. You can join the two tables using the tuple_id and look up the reason code in the HPL manual.
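A sketch of that join, assuming the default names table_vio/table_dia, a hypothetical database name, and the documented informix_tupleid column (verify the exact names in your manual):

```shell
dbaccess mydatabase - <<'EOF'
SELECT v.*, d.objtype, d.objname
  FROM table_vio v, table_dia d
 WHERE v.informix_tupleid = d.informix_tupleid;
EOF
```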

8.27 Why does fragment elimination using dates fail?

On 1st May 1998 richard_thomas@yes.optus.com.au (Richard Thomas) wrote:-

There is a limitation referred to in TFM that warns against using two-digit years in fragment expressions - and I realise you're not doing that. But the manual (IMHO) does not go far enough to warn you that the fragment expression seems to be stored with the $DBDATE that is effective at the time of the ALTER FRAGMENT execution. Subsequent queries that use a different $DBDATE run into all sorts of problems.

We have a number of fragmentation strategies that rely on date-ranges (eg 5 days to a DBspace etc) and used to have all manner of grief with performance and elimination etc.

I found the solution to this is to manipulate the fragment expressions using the Informix internal representation of the date (ie 1998-05-01 35915). Whilst it reduces the 'readability' of the dbschema and sysfragments info, all our fragment elimination problems immediately disappeared.

So try:

FRAGMENT BY EXPRESSION
> inv_dte = 35519 IN dbs01
> inv_dte = 35526 IN dbs02

as a work-around and see how you go.

A simple way to perform the translation from DATE to INTEGER is to do this

SELECT date_field, TRUNC(DATE(date_field)) AS int_value
  FROM ...

8.28 When dbimporting, why do I get string-to-date errors?

On 5th July 1998 ccremid@saadev.saa.noaa.gov (Tino) wrote:-

Check the DBDATE environment variable used at the time of dbexport and see if its value matches the one that you are using at the time of dbimport.

8.29 How can I speed up sorts and index builds?

On 11th May 1998 kagel@bloomberg.com (Art S. Kagel) wrote:-

You have stumbled on a little known fact. Sorting is faster to filesystem space than to temp table space. And since you do not care about the safety of those sort-work files that's OK.

Even better still, for large sorts like index builds and UPDATE STATISTICS, set the environment variable PSORT_DBTEMP to as many different filesystems as possible (at least 3, keeping in mind that the smallest will limit the size of the sort that is possible). Also, contrary to the documentation, setting PSORT_NPROCS to a value between 20 and 40 will also speed the sorting as long as you can afford to allocate so many resources to that one task. You might see the time drop to 15 or 20 minutes.
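For example (the directories are hypothetical; PSORT_DBTEMP takes a colon-separated list of filesystems):

```shell
# Spread sort-work files over three filesystems and raise the sort threads.
PSORT_DBTEMP=/fs1/tmp:/fs2/tmp:/fs3/tmp
PSORT_NPROCS=24
export PSORT_DBTEMP PSORT_NPROCS
# ...then run the index build or UPDATE STATISTICS from this shell
```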

8.30 How can I handle duplicates when loading data?

On 14th May 1998 rajam@worldnet.att.net (Idiot) wrote:-

You did not mention what version of Informix you are using. Assuming you are using 7.22, do the following:


SET INDEXES idx_name DISABLED;
START VIOLATIONS TABLE FOR tab_name;
SET INDEXES idx_name ENABLED;
-- or, even better:
SET INDEXES idx_name FILTERING WITHOUT ERROR;
-- then load the data

All the duplicate records will go to the violations and diagnostic table. You can debug the violations table to correct the offending records and load them into the main table.

You will have to have one of the following:

DBA privilege, or

ownership of the target table plus the Resource privilege, or

the Alter privilege on the target table plus the Resource privilege.

Read the documentation for START VIOLATIONS TABLE in the syntax guide.

8.31 How do I dbexport when CDR is enabled?

On 20th May 1998 sch@pdx.informix.com (Stephen Hemminger) wrote:-

Some of this is fixed in 7.3.

The trick is to bring the server up with the environment variable CDRBLOCKOUT set (any value will do). Then do the dbexport, then restart the server without the variable set.


example:
% onmode -yk			# shutdown server
% CDRBLOCKOUT=on oninit -iy	# start server
% dbexport
% onmode -yk
% oninit -iy

Note: any transactions that happen while the server is running with CDRBLOCKOUT set will not be replicated!

8.32 Why can the DBA not grant permissions on this view?

On 23rd Jun 1998 davek@summitdata.com (David Kosenko) wrote:-

Peter Lancashire offered:


+No doubt this is simple but I can't see it.
+
+The following was all done as the DBA user.
....
+I granted privileges on all tables in the database like this:
+grant select on  to query with grant option;
+
+Then I created a view for user "query" as below. The view includes
+several tables and an expression. The view works OK.
+create view query.ptreatments (...) as select ...;
+
+Then I attempted this and it failed:
+grant select on query.ptreatments to ukkiy;
+#                                       ^
+#  302: No GRANT option or illegal option on multi-table view.
+#
+
+User ukkiy has select privilege on all the tables used by the view,
+although I do not think that is relevant.

You, even as DBA, do not have GRANT privs on the view - you gave them away when you created the view as owned by user query. Run the GRANT as user query, and it should work ok. While you are at it, as user query GRANT ALL ON ptreatments TO DBA;
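In other words, connect as the view's owner and run the GRANT there; a sketch using the names from the posting (the database name is made up):

```shell
# Run while connected as user "query", the owner of the view:
dbaccess mydatabase - <<'EOF'
GRANT SELECT ON query.ptreatments TO ukkiy;
EOF
```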

8.33 Can I run ontape to a disk file?

On 29th Oct 1998 clem@informix.com (Clem Akins) wrote:-

Ontape to a file *IS* supported by Informix. It has been for a while now. You can quote June Tong's internal TechInfo (not the one available to customers via the web site, but an internal one) entry #6125 to any Informix Technical Support engineer who still thinks otherwise. (Thanks, June!)

However, the ontape program expects a tape drive, and behaves accordingly. It is *your* responsibility to do all the things necessary to satisfy the program's expectations and requirements. (Things like simulating an operator pressing return, switching files when the tape size limit is reached, handling rewind device expectations, log file output and overflow, etc.) You can find some example shell scripts which perform these functions at the web site of the International Informix Users Group at www.iiug.org

The question of how large the output file can become without an error is highly dependent on O/S version as well as the version of ontape. In Informix Dynamic Server versions <7.2 there was a definite 2GB limit on the filesize that ontape would support. In versions >=7.2 the limit comes from the O/S, and may still be 2GB. It is not too difficult to write a program that would shuffle disk files once they reach the maximum size, convincing ontape that an operator is changing tapes. Of course, the challenge comes in labeling and saving these files well enough to supply them to ontape in the right order at restore time.
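As a hedged sketch of just the file-shuffling piece (paths and names are invented; the glue that answers ontape's prompts is left out):

```shell
# Hypothetical helper: treat $TAPE as the "tape device" ontape writes to,
# and archive full volumes under numbered names in $ARCHDIR.
TAPE=${TAPE:-/backups/ontape.current}
ARCHDIR=${ARCHDIR:-/backups}

next_tape_name() {
    # First free sequence number, so restores can be fed back in order.
    n=1
    while [ -f "$ARCHDIR/tape.$n" ]; do
        n=`expr $n + 1`
    done
    echo "$ARCHDIR/tape.$n"
}

swap_tape() {
    # Called when ontape asks for a new tape: save the full file and
    # recreate an empty one at the fixed path ontape expects.
    mv "$TAPE" "`next_tape_name`"
    : > "$TAPE"
}
```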

As with any critical system, you should *thoroughly* test the archive and restore functions, including your shop's procedures for saving disk layout information, documenting the process well enough for someone besides yourself to handle a disaster, and ensuring that management understands that you have done your job as a DBA and deserve a raise. (The flip side is that if you haven't, and disaster strikes, you'll be looking for a job.)

8.34 How can I list all tables in a dbspace?

On 19th Oct 1998 tschaefe@mindspring.com (Tim Schaefer) wrote:-

This is one I use for XPS. Your challenge, should you decide to accept it, is to remove the dbslice layer of the problem, or simply use it with XPS as it is. I know your intent is probably for 7.x, but I present this not just for you, but for others out there who may be using XPS. To be sure, this solution serves only a minority of you out there. But the future is coming, especially now with our new friends from RedBrick.

XPS works at one extra layer beyond that which exists for the 7.x engine. DBslices are logical groupings of dbspaces across nodes.


Dbslice
   +
   +-dbspace
   +-dbspace
         +-table
         +-table
   +-dbspace
Dbslice
   +
   +-dbspace
         +-table
         +-table
   +-dbspace
         +-table
         +-table
   +-dbspace
   +-dbspace

I would challenge Informix to show table level information like this in the IECC for XPS. Many of you out there don't realize this, but there are no less than 3 IECC programs, probably more. One for 7.x, one for XPS that points to UNIX, and one for XPS that works only with NT. :-)

The code presented would allow a DBA the total picture, not stopping like it currently does at the dbspace. Some of the most important priorities a DBA has are in understanding where things are, how much space is available, and how much space is used. Currently only slices and spaces are shown in the IECC, but table information is also necessary.



# BEGIN

#!/bin/sh
################################################################################
# begin doc
#
#     Program: XDBtree
#
#      Author: Tim Schaefer
#              Data Design Technologies, Inc.
#              www.datad.com
#
#       Login: tschaefe@mindspring.com
#
#     Created: May 1998	
#
# Description: XDBtree reports on tables in dbspaces.
#              The report is based on your ONCONFIG setting.  
#
#       Usage: XDBtree
#
# end doc
################################################################################
# sysdbslices
# 
# Column name          Type                                    Nulls
# dbslice_num          smallint                                yes
# name                 char(18)                                yes
# ndbspaces            smallint                                yes
# is_rootslice         integer                                 yes
# is_mirrored          integer                                 yes
# is_blobslice         integer                                 yes
# is_temp              integer                                 yes
#
################################################################################
# syscmdbspaces
# 
# Column name          Type                                    Nulls
# 
# dbsnum               smallint                                yes
# name                 char(18)                                yes
# fchunk               smallint                                yes
# nchunks              smallint                                yes
# home_cosvr           smallint                                yes
# current_cosvr        smallint                                yes
# dbslice_num          smallint                                yes
# dbslice_ordinal      smallint                                yes
# is_root              integer                                 yes
# is_mirrored          integer                                 yes
# is_blobspace         integer                                 yes
# is_temp              integer                                 yes
#
################################################################################

get_db_info()
{
date

dbaccess sysmaster  2>/dev/null <<+

set isolation to dirty read;

unload to /tmp/systree.dat
select
       sysdbslices.name,
       syscmdbspaces.name,
       syscmdbspaces.dbslice_num,
       syscmdbspaces.dbsnum,
       syscmdbspaces.fchunk,
       sysextents.dbsname  ,
       sysextents.tabname  ,
       sysextents.start_chunk   ,
       sysextents.start_offset  ,
       sysextents.size
  from syscmdbspaces, sysdbslices, sysextents
 where sysdbslices.dbslice_num = syscmdbspaces.dbslice_num
   and sysextents.start_chunk  = syscmdbspaces.fchunk
 order by 
       sysdbslices.name, 
       syscmdbspaces.name,
       sysextents.start_chunk,
       sysextents.start_offset,
       sysextents.tabname 
+
}
################################################################################

produce_rpt()
{

awk -F"|" ' BEGIN {
      dbslice_name=""  ;
      dbspace_name=""  ;
       dbslice_num=""  ;
       dbspace_num=""  ;
            fchunk=""  ;
           dbsname=""  ;
           tabname=""  ;
       start_chunk=""  ;
      start_offset=""  ;
              size=""  ;
     ldbslice_name=""  ;
     ldbspace_name=""  ;
      ldbslice_num=""  ;
      ldbspace_num=""  ;
           lfchunk=""  ;
          ldbsname=""  ;
          ltabname=""  ;
      lstart_chunk=""  ;
     lstart_offset=""  ;
             lsize=""  ;
         size_cntr=0   ;
                  }
{
      dbslice_name=$1  ;
      dbspace_name=$2  ;
       dbslice_num=$3  ;
       dbspace_num=$4  ;
            fchunk=$5  ;
           dbsname=$6  ;
           tabname=$7  ;
       start_chunk=$8  ;
      start_offset=$9  ;
              size=$10 ;

{ if ( tabname       == "TBLSpace" )   { { tabname = "" } } }
{ if ( ldbslice_num  == dbslice_num )  { { dbslice_num = "" } } }
{ if ( ldbslice_name == dbslice_name ) { { dbslice_name = "" } } }
{ if ( ldbspace_num  == dbspace_num )  { { dbspace_num = "" } } }
{ if ( lstart_chunk  == start_chunk )  { { start_chunk = "" } } }
{ if ( ldbspace_name == dbspace_name ) { { dbspace_name = "" } } }
{ if ( dbspace_name  == dbsname )      { { dbsname = "" } } }
{ if ( ldbsname      == dbsname )      { { dbsname = "" } } }
{ if ( ltabname      == tabname )      { { tabname = "" } } }

{ printf( "%3s %-18s %3s %3s %-18s %-18s %-18s %10s %10s\n", dbslice_num, dbslice_name, dbspace_num, start_chunk, dbspace_name, dbsname, tabname, start_offset, size  ) }
# { printf( "%3s %-18s %3s %3s %s %s %-20s %10s %10s\n", dbslice_num, dbslice_name, dbspace_num, start_chunk, dbspace_name, dbsname, tabname, start_offset, size  ) }

          last_chk=$1 ;
     ldbslice_name=$1 ;
     ldbspace_name=$2 ;
      ldbslice_num=$3 ;
      ldbspace_num=$4 ;
           lfchunk=$5 ;
          ldbsname=$6 ;
          ltabname=$7 ;
      lstart_chunk=$8 ;
     lstart_offset=$9 ;
             lsize=$10;

}

' /tmp/systree.dat
}

################################################################################

>/tmp/systree.dat
get_db_info
produce_rpt
# >/tmp/systree.dat

# END

8.35 Is there anything faster than dbexport?

On 6th May 1998 richard_thomas@yes.optus.com.au (Richard Thomas) wrote:-

I'm presuming that ontape/onbar etc aren't going to do it for you. There is an alternative, but it requires a little bit of work on your behalf. I hate dbexport - if your database is bigger than about 15GB, it is hopelessly inefficient. It also doesn't let you move tables around and play with extent sizes etc prior to execution (in v5 you couldn't even use -ss). Here's what I did when we migrated from v5 to v7 a while back:

Generate a full dbschema and edit it to suit your new DBspaces, extents etc. Ensure logging is turned off and run it on your new instance. Then on the old instance, execute the following:


UNLOAD TO "table.data" DELIMITER "|"
SELECT tabname, ncols, ncols * nrows
  FROM systables
  WHERE tabid > 99
ORDER BY 3 DESC;

The order by is to get the tables in rough order from largest to smallest. Then use awk or perl to read this file and generate 4 unload scripts in a round-robin fashion, where each one unloads to a different disk. Depending on what you're trying to achieve, you could possibly use onunload instead. Something like this:


perl -e '
  open(Unld0, "> unload_1.sql") || die;
  open(Unld1, "> unload_2.sql") || die;
  open(Unld2, "> unload_3.sql") || die;
  open(Unld3, "> unload_4.sql") || die;
  open(Dbld0, "> unload_1.cmd") || die;
  open(Dbld1, "> unload_2.cmd") || die;
  open(Dbld2, "> unload_3.cmd") || die;
  open(Dbld3, "> unload_4.cmd") || die;
  @dsk = ("/disk_a","/disk_b","/disk_c","/disk_d");  # array of disks for writes
  while(<>){
    ($tab, $cols) = split(/\|/);
    $r = $n++ % 4;                # modulo to cycle through scripts: $n = row number
    $uno = sprintf("Unld%d", $r);
    $dno = sprintf("Dbld%d", $r);
    printf $uno "UNLOAD TO %s/%s.unl DELIMITER \"|\"\n", $dsk[$r], $tab;
    printf $uno "  SELECT * FROM %s;\n\n", $tab;

    printf $dno "FILE %s/%s.unl DELIMITER \"|\" %d;\n", $dsk[$r], $tab, $cols;
    printf $dno "INSERT INTO %s;\n\n", $tab;
  }' table.data

Then execute the four unload_*.sql scripts in parallel. You can run as many of these as you have disks/capacity. I chose to use four.

As they finish, kick off the corresponding dbload script. One of the other benefits of this method over dbimport is that you don't have to start from scratch if you have a problem. You can fairly easily restart the dbloads from virtually any point.

Actually, when I did this I had one dbload script per table, and three processes polling for completed unloads. As they were identified, the corresponding dbload would be executed. Using this method I moved over 20GB of data in about 10 hours, including rebuilding indexes, statistics etc. The previously attempted dbexport was killed after 56 hours, 'cos we were running out of outage and were nowhere near complete!

8.36 How do I setup OnBar and Legato?

On 15th Oct 1998 jmiller@informix.com (John F. Miller III) wrote:-

Below is a write-up my co-worker did on some quick and useful steps for getting onbar/HDR/Legato running. I hope they help.

APPENDIX Q OnBar setup with Legato

8.37 How do I get information about tables from sysmaster?

Resident tables

On 13th Jan 1999 Vardan.Aroustamian@chase.com (Vardan Aroustamian) wrote:-

Actually it is the last column in onstat -t (flag 2000). You can also get that information from sysmaster:

select tabname
from sysptntab p, systabnames n
where p.partnum = n.partnum
      and trunc(flags/8192) = 1;
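The flag arithmetic - trunc(flags/8192) = 1 picks out the 0x2000 bit, assuming no higher bits are set - can be sanity-checked from the shell:

```shell
# Report whether a flags value has the resident bit (0x2000 = 8192) set.
is_resident() {
    # $1: flags value, decimal or 0x-prefixed hex
    if [ $(( $1 & 8192 )) -ne 0 ]; then
        echo yes
    else
        echo no
    fi
}
```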

8.38 How do I use Oncockpit?

On 18th Jan 1999 dmeyer9@email.msn.com (David Meyer) wrote:-

It is good to see people trying to use this extraordinary Informix tool. I use the ALARM functions to notify beginning DBAs of critical conditions within the database. Besides being a GUI that allows you to view all Informix parameters and operating functions... it is just a hot product!

Now, for your question, I am not certain where you are in the steps to run oncockpit so I will list a few steps for you to check, I hope this helps... have fun! BTW - a good book that covers all this and more is Carlton Doe's - Informix Online Dynamic Server Handbook.

1) oncockpit is a GUI client-server application. 'onprobe', the server program, must be running and communicating with the instance. onprobe is defined as a 'service' with its own instance alias and corresponding entries in the $SQLHOSTS and /etc/services files.

2) Since 'oncockpit' is a GUI, your DISPLAY environment variable must be set to the IP address or host name of the machine that the GUI will display on.

3) onprobe and oncockpit have their own command line arguments. I think that you have to have root permissions to run onprobe but I am not sure. An example of how to start onprobe and oncockpit:

INFORMIXSERVER=shmcci; export INFORMIXSERVER
onprobe -service cpit_cci -log /home/informix/onprobe_djm.log -severity severity.djm &

INFORMIXSERVER=xxxcci; export INFORMIXSERVER
oncockpit -service cpit_cci -log /home/informix/oncockpit_djm.log -fg red -bg gray &

4) Check the logs that are created to debug any problems running the client or server applications. You should really perform a 'ps -eaf | grep onprobe' to verify that the program is running before you execute oncockpit... remember to check the oncockpit and onprobe logs.

8.39 Where do sqexplain.out's appear under NT?

On 3rd Feb 1999 psilva@informix.com (Paulo Silva) wrote:-

After installing IDS on NT, you should notice a new shared folder called SQEXPLN under your %INFORMIXDIR% (usually c:\informix\sqexpln).

This folder keeps the output for all users, in the form %USERNAME%.out, instead of the usual sqexplain.out.

8.40 How do I use locales under Informix?

On 6th Feb 1999 gunstho@uni-muenster.de (Dirk Gunsthoevel) wrote:-

You have to set DB_LOCALE to your locale (I assume something like FR_fr.1252 for you) BEFORE creating the database.

If you are still using isql, don't create the database in it. It will NOT use the DB_LOCALE setting. Use dbaccess instead.
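For example (the locale and database names are only illustrative; the variable must be set before the CREATE DATABASE runs):

```shell
DB_LOCALE=fr_fr.1252          # illustrative locale name
export DB_LOCALE
dbaccess - - <<'EOF'
CREATE DATABASE mydb;
EOF
```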

8.41 How do I optimize index builds?

On 23rd Mar 1999 dua1@my-dejanews.com (Juri Dovgart) wrote:-

PDQ has an impact on index builds when PDQPRIORITY > 0. This is called vertical parallelism: Informix performs parallel scans and sorts, using the memory allocated for PDQ for the sorts.

Here is some advice about index builds:

8.42 How do I calculate how much space an index will use?

On 6th May 1999 icc@injersey.infi.net (Vic Glass) wrote:-

Formula to calculate ~ bytes that an index will use:

(sum of column sizes + 9) * rows * 1.25

For example, if there is a table:

create table person (
    person_id serial,
    fname     char(25),
    lname     char(25),
    ssn       char(9)
);
create index ix1 on person(person_id);
create index ix2 on person(ssn);
create index ix3 on person(fname, lname);

then the index space needed for 1,000,000 rows would be approximately:

ix1: (4 +9) * 1000000 * 1.25 ----> 16,250,000 bytes or 16,250 Kb
ix2: (9+9) * 1000000 * 1.25 -----> 22,500,000 bytes or 22,500 Kb
ix3: (25+25+9) * 1000000 * 1.25 ---> 73,750,000 bytes or 73,750 Kb

On 6th May 1999 mcollins@us.dhl.com (Mark Collins) wrote:-

Slight correction to the earlier answer - the formula listed is for attached indexes. If you put the indexes in their own dbspace, they are detached, and the formula changes to (sum of column sizes + 13) * rows * 1.25. The extra four bytes store the partition number of the table (or fragment) in which the indexed row is found. I'm assuming that the "1.25" is a general rule-of-thumb for estimating the overhead of non-leaf pages, but that method is only a rough approximation at best. Another thing that needs to be addressed when calculating index space is FILLFACTOR.
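Both rules of thumb can be wrapped in a small helper (a sketch; the 1.25 factor remains a rough approximation, and FILLFACTOR is ignored):

```shell
# Approximate index size in bytes:
#   attached: (key size + 9)  * rows * 1.25
#   detached: (key size + 13) * rows * 1.25
idx_bytes() {
    # $1 = sum of indexed column sizes, $2 = row count, $3 = attached|detached
    if [ "$3" = "detached" ]; then
        ovh=13
    else
        ovh=9
    fi
    awk "BEGIN { printf \"%d\", ($1 + $ovh) * $2 * 1.25 }"
}
```

For the ix3 example above, `idx_bytes 50 1000000 attached` reproduces the 73,750,000-byte figure.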

8.43 Why do I get error -197?

On 23rd June 1999 gdewinter@spf.fairchildsemi.com (Greg Dewinter) wrote:-

-197 ISAM error: Partition recently appended to; can't open for write or logging.

This error is generally seen after a High Performance Loader load in Express mode. The only way to correct this is to do a level-0 archive of the affected dbspaces.

Express mode HPL loads data into a new extent and then appends the extent to the table when the entire load is complete. That is why it is able to load all the rows with no logging.

8.44 How do I log the queries informix receives?

On 28th June 1999 jleffler@earthlink.net (Jonathan Leffler) wrote:-

If you're on a civilized system (Unix or variants), and you're using a sufficiently civilized server (OnLine, IDS, etc) and a sufficiently recent version of ESQL/C (primarily CSDK 2.x), then you can set the SQLIDEBUG environment variable to a value such as 2:/tmp/sqli.out and the application will log all the data sent back and forth. You can then decipher the data with the sqliprint program. The actual data file will have an underscore and a process id after what you specified in SQLIDEBUG. The SQLIDEBUG environment variable has worked for quite some time (6.00?), but getting hold of sqliprint has been harder until it was distributed with CSDK. You can also use a value 1:/tmp/sqli.out to get some sort of ASCII dump, but it isn't as useful as the binary dump (IIRC; I've not used it more than once).
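A session sketch (paths and pid are examples; the dump file name gets an underscore and the process id appended):

```shell
SQLIDEBUG=2:/tmp/sqli.out
export SQLIDEBUG
myapp                          # hypothetical ESQL/C application
ls /tmp/sqli.out_*             # one binary dump per process
sqliprint /tmp/sqli.out_12345  # decode; 12345 stands in for the real pid
```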

8.45 Any hints for running more than one onbar process at a time?

On 16th July 1999 RRABE@PROMUS.com (Rick Rabe) wrote:-

I'm not familiar with Netbackup, but use Networker on Siemens-Pyramid. My BAR_MAX_BACKUP is set to 8. One thing I have found helpful is altering bar_action, bar_instance, bar_object in the sysutils database to row-level locking instead of page-level locking.

8.46 How can I use Legato with onbar?

On 9th July 1999 sanformix@hotmail.com (Santanu Das) wrote:-

The following steps may be used to set up the Legato storage manager software. For more information, refer to the Legato installation guide.

The following is a brief overview of the steps required to configure Legato for use with ON-Bar.

If you want to execute any onbar commands from the command line, Legato requires that the two environment variables, NSR_DATA_VOLUME_POOL and NSR_LOG_VOLUME_POOL, be set to the same values specified in the client panel. If you follow the installation manual instructions, these pools will be named DBMIData and DBMILog.
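For command-line runs that means something like this (pool names as given in the installation manual; the onbar invocation is just an example):

```shell
NSR_DATA_VOLUME_POOL=DBMIData
NSR_LOG_VOLUME_POOL=DBMILog
export NSR_DATA_VOLUME_POOL NSR_LOG_VOLUME_POOL
onbar -b -L 0                  # e.g. a level-0 backup
```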

8.47 How can I update statistics in parallel?

On 28th July 1999 kagel@bloomberg.net (Art S. Kagel) wrote:-

Get my dostats.ec utility from the IIUG Software Repository and run 5-10 copies, on different tables, at once with PDQPRIORITY=100/(#copies). Here is a ksh/awk script to create the script that runs all the needed dostats in parallel groups:


#! /usr/bin/ksh

if [[ $# -lt 2 ]]; then
    echo "Usage: $0 database #copies [template]"
    exit 1
fi

dbase=$1
ncopies=$2
if [[ $# -eq 3 ]]; then
    templ=$3
else
    templ='*'
fi
dbaccess $dbase - <<EOF
output to temp$$ without headings
select tabname 
from systables
where tabid > 99
  and tabname matches "$templ";
EOF

awk -v ncopies=$ncopies -v dbase=$dbase '
BEGIN { 
    cnt=0; 
    pdq=100/ncopies; 
    printf "PDQPRIORITY=%d; export PDQPRIORITY \n", pdq; 
}
{
    if (length( $1 ) == 0){ next; }
}
{
    # Every N copies of dostats insert a wait
    if ((cnt % ncopies) == 0 && cnt > 0) { print "wait"; }

    # output a dostats command for each table in the background
    printf "dostats -d %s -t %s & \n", dbase, $1;
    cnt++;
}
END {
    # Now update stats on all stored procedures.
    printf "dostats -d %s -p \n", dbase;
}
' temp$$
rm -f temp$$

#####  End script  #####

Then, to generate the multiple-dostats script, assuming you named the above script genstats: genstats mydatabase 5 >updstats.sh

8.48 How do I locate temporary tables?

On 16th August 1999 vardana@infogain.com (Vardan Aroustamian) wrote:-

I used to use some variations of this query to check temporary tables:


select tabname,
       case
       when bitval( p.flags, 32 ) = 1
         then 'sys_temp'
       when bitval( p.flags, 64 ) = 1
         then 'usr_temp'
       when bitval( p.flags, 128 ) = 1
         then 'sort_file'
       end type,
       hex(n.partnum) h_n_partnum,
       n.partnum n_partnum,
--     n.owner,
--     hex(p.flags) h_p_flags,
       name dbspace_name
from sysptnhdr p,
     systabnames n,
     sysdbstab d
where p.partnum = n.partnum
      and partdbsnum( n.partnum ) = d.dbsnum
      and ( bitval( p.flags, 32 ) = 1       -- System created Temp Table
         or bitval( p.flags, 64 ) = 1       -- User created Temp Table
         or bitval( p.flags, 128 ) = 1 );   -- Sort File

On 17th August 1999 jakesalomon@my-deja.com (Jacob Salomon) wrote:-

After receiving your reply I did a little experimenting. The pattern I noticed is that the 0x20 flag - bitval(p.flags, 32) - is the marker of any kind of temp table. I noticed that SORTTEMP and HASHTEMP tables have some other flags set, but all of them have this one flag on.

Now my query for abuses of temp tables is:


select	t.dbsname, t.tabname,
	hex(p.partnum) partition, hex(p.flags) pflags
  from	sysmaster:systabnames t, sysmaster:sysptnhdr p
 where	t.partnum = p.partnum
   and	bitval(p.flags,32) = 1		-- Looking for temp tables
   and	trunc(p.partnum / 1048576)      -- Filter: Only temps not in
	in (select dbsnum               -- temp dbspace
              from sysdbspaces where is_temp = 0)
 order by dbsname, tabname, partition
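The trunc(p.partnum / 1048576) filter works because the high bits of a partnum encode the dbspace number; the same arithmetic from the shell:

```shell
# Extract the dbspace number from an Informix partnum
# (divide by 0x100000 = 1048576; the high bits identify the dbspace).
partnum_dbsnum() {
    # $1: partnum, decimal or 0x-prefixed hex
    echo $(( $1 / 1048576 ))
}
```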

8.49 How do I set tables memory resident?

On 17th August 1999 vardana@infogain.com (Vardan Aroustamian) wrote:-


SET TABLE your_table MEMORY_RESIDENT;
SET TABLE your_table NON_RESIDENT;

You can also set memory-resident just a particular fragment of a fragmented table:


SET TABLE your_table ( dbspace1, dbspace2 ) MEMORY_RESIDENT;

On 27th August 2014 art.kagel@gmail.com (Art Kagel) wrote: Memory-resident tables were only supported in v7.30 (though the command remained, as a no-op, until v11.10).