Don Seiler

IT/Databases/PostgreSQL Stuff Mostly. Maybe Oracle From Time-to-Time.

Beware (Sort-Of) Ambiguous Column Names In Sub-Selects

Mon, 2018-04-30 12:55
This morning I was testing an UPDATE statement that a developer sent me. It ran without errors, but it updated 5 rows when it should have updated only 3. The reason gave me a little shock, so I whipped up a simple test case to reproduce the problem.

First we create two tables:

CREATE TABLE foo (
    id int
    , name varchar(30)
);
CREATE TABLE
CREATE TABLE bar (
    id int
    , foo_id int
    , description varchar(100)
);
CREATE TABLE

Then we insert some data:

INSERT INTO foo (id, name) VALUES
    (1, 'Dev')
    , (2, 'QA')
    , (3, 'Preprod')
    , (4, 'Prod');
INSERT 0 4
INSERT INTO bar (id, foo_id, description)
    VALUES (1, 1, 'A')
    , (2, 2, 'B')
    , (3, 2, 'C')
    , (4, 2, 'D')
    , (5, 3, 'E');
INSERT 0 5

Here I'm using a SELECT rather than the original UPDATE, just to test. This could (and probably should) be done as a join, but the query I was sent used a subquery instead, something like this:

SELECT COUNT(*)
FROM bar
WHERE foo_id = (SELECT id FROM foo WHERE name='QA');
 count
-------
     3
(1 row)

Fair enough. It does the same thing as a join. However what I was sent was actually this (note the column name in the subquery):

SELECT COUNT(*)
FROM bar
WHERE foo_id = (SELECT foo_id FROM foo WHERE name='QA');
 count
-------
     5
(1 row)

I would expect an error since foo_id does not exist in table foo, like this:

SELECT foo_id FROM foo WHERE name='QA';
ERROR:  42703: column "foo_id" does not exist
LINE 1: SELECT foo_id FROM foo WHERE name='QA';

Instead, it basically selected EVERYTHING in the bar table. Why?

I posted this dilemma in the PostgreSQL Slack channel, and others were similarly surprised by this. Ryan Guill tested and confirmed the same behavior not only in Postgres, but also in Oracle, MS SQL, & MySQL.

Cindy Wise observed that the subquery is probably resolving foo_id against the bar table in the outer query, which does make sense: when a column name isn't found in the subquery's own FROM list, SQL scoping rules look outward to the enclosing query. The predicate effectively becomes foo_id = foo_id (while still running the now-pointless subquery against foo), which is true for any row with a non-NULL foo_id, so it grabs every row in the bar table.

This seems like a very easy trap to fall into if you're not careful with your column names. Considering this was originally in the form of an UPDATE query, it can be a destructive mistake that would execute successfully and could be rather hard to trace back.
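
A habit that would have caught this, sketched below against the same test tables, is to qualify column references inside subqueries with a table name or alias. The qualified version fails loudly instead of silently matching every row:

-- Qualifying the subquery column surfaces the mistake immediately:
SELECT COUNT(*)
FROM bar b
WHERE b.foo_id = (SELECT f.foo_id FROM foo f WHERE f.name = 'QA');
-- ERROR:  column f.foo_id does not exist

-- The intended, fully-qualified form:
SELECT COUNT(*)
FROM bar b
WHERE b.foo_id = (SELECT f.id FROM foo f WHERE f.name = 'QA');
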
Categories: DBA Blogs

How (Not) to Change Passwords in PostgreSQL

Sat, 2018-04-14 11:53
A few months ago I was doing some testing on a development PostgreSQL database and watching the postgresql server log (akin to Oracle's alert log). I was skimming for lines relevant to my testing when I noticed something chilling:

LOG:  statement: alter user john password 'IloveDBAs';

The username and password have obviously been changed but the point is plain: PostgreSQL printed the password value in plain text to the log file. A few tests of my own confirmed that whenever the PASSWORD specification is used in a CREATE USER or ALTER USER command, it will be printed to the server log:

LOG:  statement: create user john password 'badidea';
LOG:  statement: alter user john password 'alsobad';

The way around this in psql is to use the \password command. When creating a new user, do not specify the password at creation time; just set it with \password afterward:

# create user susan;
CREATE ROLE
# \password susan
Enter new password:
Enter it again:
# \password john
Enter new password:
Enter it again:

Now, like most password-change processes, you're asked to enter it twice and the input is not echoed to the screen. The log will show this:

LOG:  statement: create user susan;
LOG:  statement: ALTER USER susan PASSWORD 'md5d311d09782068f6a2391358ae512e113'
LOG:  statement: ALTER USER john PASSWORD 'md5a7e4187c1c977ed2e700e880dac11f15'

You can see the passwords are still printed, but they are md5-hashed now.

However, this only works when I'm using psql. If a developer is using a GUI tool that offers a feature to change a password, that tool could very well be issuing the same bad command under the hood.
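
If a tool or application has to issue the password change itself, one option (a sketch, not a recommendation from the docs) is to compute the md5 hash on the client and send only that. PostgreSQL stores a value of the form 'md5' || md5(password || username) verbatim, which is exactly what \password generated in the log above:

-- Only the hash appears in the statement, so only the hash reaches the log.
-- The value below is the example hash from the log output above.
ALTER USER john PASSWORD 'md5a7e4187c1c977ed2e700e880dac11f15';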

Note: I originally observed this behavior in PostgreSQL 9.2, but it's still the case in 9.6 and 10.2. From a discussion with the pgsql-admin community, it's not likely to change any time soon.
Categories: DBA Blogs

The Transition (or: How I Learned to Stop Worrying and Love PostgreSQL)

Fri, 2018-02-09 14:31
Note: Yes I re-(ab)used the title.



As some of you may know (if you follow me on twitter), after 16 years as an Oracle DBA I made a career shift last summer. I hung up the Oracle spurs (and that sweet Oracle Ace vest) and threw on a pair of PostgreSQL chaps. I had been a fan and very casual user of PostgreSQL for many years, even gently lobbying the folks at Pythian to add PostgreSQL DBA services to their offering (to no avail).

I'm taking this fresh start to also jump-start my blogging, hoping to write about interesting notes as I dive deeper in the PostgreSQL world (and even some Hadoop as I get into that). I'm just over 6 months into my new adventure and loving it. I'm excited to be dealing a lot more with an open source community, and interacting daily with some contributors via Slack.

Over the past 6 months, I've compiled a list of topics to write about. If I can remember the details or find my notes, I'll hopefully be writing regularly on these topics!

So, I hope to see more of you regularly here. Stay tuned!
Categories: DBA Blogs

Sending notifications from Oracle Enterprise Manager to VictorOps

Thu, 2016-01-28 11:55
We use VictorOps for our paging/notification system, and we're pretty happy with it so far. On the DBA team, we've just been using a simple email gateway to send notifications from Oracle Enterprise Manager (EM) to VictorOps. Even then, we can only send the initial notification and can't really send an automated recovery without more hacking than it's worth. Not a big deal, but it would be nice to have some more functionality.

So yesterday I decided I'd just sort it all out since VictorOps has a nice REST API and Enterprise Manager has a nice OS script notification method framework. The initial result can be found on my github: entmgr_to_victorops.sh.

It doesn't do anything fancy, but it will handle the messages sent by your notification rules and pass them on to VictorOps. It keys on the incident ID to track which events it is sending follow-up (i.e. RECOVERY) messages for.

Please do let me know if you have any bugs, requests, or suggestions for it.

Many thanks to Sentry Data Systems (my employer) for allowing me to share this code. It isn't mind-blowing stuff but should save you a few hours of banging your head against a wall.
Categories: DBA Blogs

Just XFS Things

Tue, 2015-11-10 16:03
$ uptime
 16:36:42 up 4 days, 12:28, 6 users, load average: 1029.20, 995.42, 865.77


See https://www.centos.org/forums/viewtopic.php?f=47&t=52412 and https://access.redhat.com/solutions/532663 (requires RedHat subscription) for details. And defrag your XFS volumes.
Categories: DBA Blogs

The Leap Second is No Laughing Matter (if you have java on an older Linux kernel)

Tue, 2015-07-21 16:50
Earlier this month we began getting frequent email warnings from our EM12c server that some agents were experiencing read time outs. Then we saw that the emagent java process was using A LOT of CPU, regularly around 500% but sometimes as high as 800% as seen from "top". Restarting the agent did nothing.

I opened an SR with Oracle Support, where I was first instructed to apply a JDBC patch and then a PSU agent patch. No change in behavior.

Courtney Llamas from the Oracle EM team reached out and suggested it might be due to the leap second, directing me to these MOS docs:

  • Enterprise Manager Management Agent or OMS CPU Use Is Excessive near Leap Second Additions on Linux (Doc ID 1472651.1)
  • Leap Second Hang - CPU Can Be Seen at 100% (Doc ID 1472421.1)
The workaround is to restart ntpd (or reboot the server):

# /etc/init.d/ntpd stop
# date -s "`date`"    # reset the system clock
# /etc/init.d/ntpd start


I monitored top while my system admin restarted ntpd and reset the clock. As soon as he did, java CPU usage dropped like a rock.



While I'm incredibly grateful that Courtney provided the solution in basically 5 minutes, I'm even more upset that Oracle Support had me doing everything but that for the 20 days my original SR had been open.

Of course the real joke is on me, since I first reported the error on July 1 and we all joked on twitter about how it was probably due to the leap second. The fault also lies with me since I failed to notice that our kernel version (2.6.32-220) was still vulnerable to this (fixed in 2.6.32-279). See Maris Elsins' great write-up (which I apparently skimmed too lightly).
Categories: DBA Blogs

Upgrade to Oracle 12c, Get the Huge Trace Files for Free!

Tue, 2015-07-21 12:12
Last week we began testing a copy of our production database on Oracle 12c (12.1.0.2). This past weekend we were alerted that the disk holding our ADR diagnostic directory was near full. We noticed some pretty big (and recent) trace files there. Then it happened twice more. This last time filled the disk before we could get to it (thankfully only dev, and during the evening), meaning it filled up fast. The largest file was over 18GB and it only took 6 hours to get that big.

I saved the three biggest trace files to a large NFS mount and did a trace file purge just to get our dev database back up. When I looked at those files, I saw they were all sqlplus sessions running the same DELETE statement after reports like this:


----- Cursor Obsoletion Dump sql_id=7q0kj0sp5k779 -----
Parent cursor obsoleted 1 time(s). maxchild=1024 basephd=0xae2a2ca308 phd=0xae2a2ca308
----- Dump Cursor sql_id=7q0kj0sp5k779 xsc=0x7ffbf191fd50 cur=0x7ffbf2702670 -----


At first I thought it was a user session setting some oddball trace event. However, our team found Doc ID 1955319.1 (Huge Trace Files Created Containing "----- Cursor Obsoletion Dump sql_id=%s -----").

Long story short, it's an unpublished bug introduced in 12.1.0.2 along with the cursor obsoletion diagnostic dump "enhancement". I don't think they intended the trace files to live up to the "Huge" in that document title, though. The workaround is to disable the dump completely via this hidden parameter:


alter system set "_kks_obsolete_dump_threshold" = 0;


There is this note at the end, though:

Note: The underlying cursor sharing problem should always be highlighted and investigated to ensure that the reason for the non-sharing is known and fully understood.

Which is definitely good advice.
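
In that spirit, a quick way to see which statements are accumulating child cursors (a rough sketch; the threshold is arbitrary) is to check version counts in the shared pool:

-- SQL with unusually many child cursors; these are the statements that will
-- trigger obsoletion dumps once they reach the maxchild limit (1024 above).
select sql_id, version_count, executions
from v$sqlarea
where version_count > 100
order by version_count desc;
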
Categories: DBA Blogs

Returning Error Codes from sqlplus to Shell Scripts

Tue, 2015-01-06 15:32
When I have to run SQL scripts, I prefer to do it via executable ksh scripts (I use ksh because it's usually an Oracle pre-requisite so I can rely on it being installed). One thing that I've just recently started adding into my scripts is exiting sqlplus on error and returning the ORA error code. This is done via the WHENEVER SQLERROR feature of sqlplus:

sqlplus / as sysdba <<EOF

        whenever sqlerror exit sql.sqlcode
        alter tablespace foo
                rename to foo_old;

        create tablespace foo
                datafile size 100m;

        alter table foo move tablespace $TABLESPACE_NAME nocompress;
EOF

RETURN_CODE=$?
if [ $RETURN_CODE -ne 0 ]; then
        echo "*** Tablespace renaming error code $RETURN_CODE. ***"
        exit $RETURN_CODE;
fi

In this example we rename a tablespace and then create a new tablespace in its place and move a table there. It seems rather contrived but this is actually what I'm doing to move tables to an uncompressed and unencrypted tablespace to test storage vendor deduplication claims. But I digress ...

To test this, I referenced a tablespace that doesn't exist, which raises an ORA-00959 error. However, the return code I got was 191. I spent a good portion of the day testing and retesting with all sorts of debug output until I stumbled across this comment. Linux (and, I'm told, Unix) exit codes are a single byte and only go up to 255, so the ORA sqlcode of 959 wraps around; the 191 is simply what's left after a modulo operation:

959 % 256 = 191

 And suddenly the skies were cleared.
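
If all you need is a reliable non-zero status rather than the actual ORA code, another option (just a sketch) is to have sqlplus exit with a fixed value that can't wrap:

-- Always exit with status 1 on any SQL error:
whenever sqlerror exit failure
-- Or choose an explicit small value:
whenever sqlerror exit 2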

As always, hope this helps!
Categories: DBA Blogs

Making Copies of Copies with Oracle RMAN

Wed, 2014-11-12 10:13
I recently needed to make a copy of an image copy in Oracle RMAN. Since the method wasn't immediately obvious to me, I thought it was worth sharing once I had it sorted out. I was familiar with making a backup of a backup, but had never thought about making a copy of a copy.

First you need to create an image copy of your database or tablespace. For the sake of example, I'll make a copy of the FOO tablespace. The key is to assign a tag to it that you can use for later reference. I'll use the tag "DTSCOPYTEST":

backup as copy 
    tablespace foo 
    tag 'DTSCOPYTEST'
    format '+DG1';

So I have my image copy in the +DG1 diskgroup. Now say we want to make a copy of that for some testing purpose and put it in a different diskgroup. For that we need the "BACKUP AS COPY COPY" command, and we'll want to specify the copy we just took by referencing the tag that was used:

backup as copy
    copy of tablespace foo
    from tag 'DTSCOPYTEST'
    tag 'DTSCOPYTEST2'
    format '+DG2';

As you'd guess, RMAN makes a copy of the first copy, writing it to the specified format location.
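
To double-check the result, a query like this (a sketch run from SQL*Plus on the target database) lists both generations of the copy by their tags:

-- Both copies should appear with their respective tags.
select name, tag, completion_time
from v$datafile_copy
where tag in ('DTSCOPYTEST', 'DTSCOPYTEST2')
order by completion_time;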

As always, hope this helps!
Categories: DBA Blogs

The Importance of Backups (A Cautionary Block Recovery Tale)

Tue, 2014-11-04 22:28
Just wanted to share a quick story with everyone. As I was in the airport waiting to fly to Oracle OpenWorld this year, I noticed a flurry of emails indicating that part of our storage infrastructure for our standby production database had failed. Long story short, my co-workers did a brilliant job of stabilizing things and keeping recovery working. However, we ended up with more than a few block corruptions.

Using the RMAN command "validate database", we could then see the list of corrupt blocks in the v$database_block_corruption view. All that was needed was to run "recover corruption list" in RMAN, which will dig into datafile copies and backups to do what it can to repair or replace the corrupt blocks and then recover the datafiles. Of course, nothing is ever that easy for us!
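
For reference, the corruption list itself is just a query away (a sketch; VALIDATE populates the view, and repaired blocks eventually drop back out of it):

-- Corrupt blocks flagged by RMAN's VALIDATE DATABASE.
select file#, block#, blocks, corruption_type
from v$database_block_corruption
order by file#, block#;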

The storage we were writing our weekly backups to had been having problems and the latest weekly backup had failed. We ended up having to go back two weeks into our backups to find the datafile blocks and archivelogs needed to eventually complete the corruption recovery. I also immediately moved our backups to more reliable storage so that we're never left wondering whether or not we have the backups we need.

So, triple-check your backup plan, validate your backups and TEST RECOVERY SCENARIOS! You can't say your backups are valid until you use them to perform a restore/recovery, and you don't want to find out the hard way that you forgot something.
Categories: DBA Blogs

ORA-16534 When Converting to/from Snapshot Standby with DataGuard Broker

Wed, 2014-10-22 23:00
We here at Seilerwerks Industries (not really) have been using snapshot standby databases to refresh an array of unit test databases from a common primary. During the business day these are converted to snapshot standbys for testing, then overnight they are converted back to physical standby and recovered up to the primary again.

However, we ran into a problem the other week: I noticed that the test3 database was still in physical standby mode well into the business day. Trying to convert it manually returned this error:

DGMGRL> convert database test3 to snapshot standby
Converting database "test3" to a Snapshot Standby database, please wait...
Error:
ORA-16534: switchover, failover or convert operation in progress
ORA-06512: at "SYS.DBMS_DRS", line 157
ORA-06512: at line 1

A quick search of MOS yielded bug 13716797 (ORA-16534 from the broker when setting apply-off), which simply suggested restarting the problem database when encountering that error. However, doing so did not get me any further. That's when I checked the Data Guard Broker configuration:

DGMGRL> show configuration;

Configuration - testdb

  Protection Mode: MaxPerformance
  Databases:
    test1 - Primary database
    test5 - Physical standby database
    test6 - Snapshot standby database
    test3 - Physical standby database
    test4 - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
ORA-16610: command "CONVERT DATABASE test6" in progress
DGM-17017: unable to determine configuration status

Looks like I have two databases stuck in physical standby mode, test3 and test6, and the configuration is specifically complaining about test6. So I restarted that database and, sure enough, I was then able to convert both back to snapshots:

DGMGRL> show configuration;

Configuration - testdb

  Protection Mode: MaxPerformance
  Databases:
    test1 - Primary database
    test5 - Snapshot standby database
    test6 - Snapshot standby database
    test3 - Snapshot standby database
    test4 - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

It was very interesting to me to see one member of the Data Guard configuration prevent me from performing an operation on a different member. Hopefully this helps one of you in the future.

Categories: DBA Blogs

Advanced Queue Quickie: Errors and Privileges

Wed, 2014-10-01 23:07
File this one under the misleading-errors department. One of my developers was working with a new queue. He pinged me when he got this error trying to create a job that used the queue:

ERROR at line 1:
ORA-27373: unknown or illegal event source queue
ORA-06512: at "SYS.DBMS_ISCHED", line 124
ORA-06512: at "SYS.DBMS_SCHEDULER", line 314
ORA-06512: at line 2

The CREATE_JOB statement was:

BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'foo.bar_q_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin foo.bar_pkg.consume_bar_queue(); end;',
queue_spec => 'BAR.BAR_Q, FOO_BAR_AGENT',
enabled => true,
comments => 'This is a job to consume the bar.bar_q entries that affect foo.');
END;
/

After a few minutes of banging our heads, it became obvious that this was a permissions problem: the queue was owned by BAR, but the job was being created as FOO. The ORA error message could (and should) have made this more obvious, in my opinion.

Anyway, the fix was simply to grant access to FOO:

BEGIN
    DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
        privilege  => 'ALL',
        queue_name => 'bar.bar_q',
        grantee    => 'foo' );
END;
/

Hope this saves some banged heads for others.
Categories: DBA Blogs

NOFILENAMECHECK Parameter Causes DUPLICATE DATABASE To Ignore DB_CREATE_FILE_DEST?

Wed, 2014-09-10 23:06
Last week I was creating a new testing database from a backup of our demo database, both under Oracle 11.2.0.3. I grabbed one of my old scripts to handle the duplicate function, which looked similar to this:

connect auxiliary /;
run {

        duplicate database to testdb
                backup location '$BACKUPDIR'
                nofilenamecheck;

}

One important difference between the demo database and this new test database is that the original demo database lives on a filesystem, while the new database was to go onto ASM on a different host. I copied the syntax from the old script and kicked it off, making sure that db_create_file_dest was set to the ASM diskgroup. However, the restore failed because RMAN tried to write to the filesystem path used by the original demo database (which didn't exist on this host) instead of to the ASM diskgroup.



Definitely puzzling to me. I double-checked the documentation for NOFILENAMECHECK, which only suggested that it checks for matching filenames, and didn't state that it would cause the issue I was seeing. The summary says it prevents RMAN from checking for a name collision, a check intended to keep you from overwriting existing files on the same host.

However what I found is that having NOFILENAMECHECK in my command resulted in RMAN restoring the files to the original path, ignoring my db_create_file_dest specifications. When I removed the NOFILENAMECHECK specification from the RMAN command, the files were restored to the ASM diskgroup as intended.

MOS Support documents 1375864.1 and 874352.1 suggest using the DB_FILE_NAME_CONVERT parameter, but I found this was not necessary when I set DB_CREATE_FILE_DEST, as long as I didn't use NOFILENAMECHECK. I couldn't find anything in MOS about NOFILENAMECHECK forcing the restore to a particular location, though.
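
For reference, the auxiliary setting that did the job looked something like this (a sketch; '+DG1' is a placeholder for whatever diskgroup you use, and it can equally be set in the auxiliary's pfile before startup):

-- With this set and NOFILENAMECHECK omitted, DUPLICATE creates the restored
-- datafiles as OMF files in the diskgroup.
alter system set db_create_file_dest = '+DG1' scope=spfile;
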
Categories: DBA Blogs

ORA-14048 When Adding Composite Partition

Thu, 2014-08-21 15:35
Hot on the heels of my earlier composite partitioning post, I just ran into this confusing issue:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201410_spdts values ('DTS')
  6                  , subpartition p201410_spfoo values ('FOO')
  7                  , subpartition p201410_spbar values ('BAR')
  8                  , subpartition p201410_spsys values ('SYS')
  9                  , subpartition p201410_spsysaux values ('SYSAUX')
 10          )
 11  tablespace tbs1
 12  ;
tablespace tbs1
*
ERROR at line 11:
ORA-14048: a partition maintenance operation may not be combined with other
operations

The error doesn't make it easy to determine what the problem is. It turns out that the tablespace/storage clause of the ALTER TABLE ... ADD PARTITION has to come before the subpartition definitions. Simply moving that clause a few lines up yields success:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          tablespace tbs1
  5          (
  6                  subpartition p201410_spdts values ('DTS')
  7                  , subpartition p201410_spfoo values ('FOO')
  8                  , subpartition p201410_spbar values ('BAR')
  9                  , subpartition p201410_spsys values ('SYS')
 10                  , subpartition p201410_spsysaux values ('SYSAUX')
 11          )
 12  ;

Table altered.

Again, probably obvious to most of you, but it wasn't as obvious to decipher when I was trying to add a partition with over 6,300 subpartitions. As always, hope this helps!
Categories: DBA Blogs

Adding New Partitions with Custom Subpartition Definition (Range-List)

Thu, 2014-08-21 14:28
As part of a project for work I wanted to create a script that would create a new range partition but also pre-create all the list subpartitions. By default the subpartitions would be created based on the subpartition template. However, for various reasons which I won't get into, we don't update or use the subpartition template. I wanted to define the subpartition list as part of the ALTER TABLE ... ADD PARTITION statement. I assumed it was perfectly acceptable but didn't see any obvious examples in my hasty web searching, so I thought I'd share one myself.

I start by creating my composite range-list partitioned table:

SQL> create table p_objects
  2  tablespace tbs1
  3  partition by range(rdate)
  4  subpartition by list(owner)
  5  subpartition template
  6  (
  7          subpartition spsys values ('SYS')
  8          , subpartition spsysaux values ('SYSAUX')
  9  )
 10  (
 11          partition p201301 values less than (to_date('2013/02/01','YYYY/MM/DD')),
 12          partition p201302 values less than (to_date('2013/03/01','YYYY/MM/DD')),
 13          partition p201303 values less than (to_date('2013/04/01','YYYY/MM/DD')),
 14          partition p201304 values less than (to_date('2013/05/01','YYYY/MM/DD')),
 15          partition p201305 values less than (to_date('2013/06/01','YYYY/MM/DD')),
 16          partition p201306 values less than (to_date('2013/07/01','YYYY/MM/DD')),
 17          partition p201307 values less than (to_date('2013/08/01','YYYY/MM/DD')),
 18          partition p201308 values less than (to_date('2013/09/01','YYYY/MM/DD')),
 19          partition p201309 values less than (to_date('2013/10/01','YYYY/MM/DD')),
 20          partition p201310 values less than (to_date('2013/11/01','YYYY/MM/DD')),
 21          partition p201311 values less than (to_date('2013/12/01','YYYY/MM/DD')),
 22          partition p201312 values less than (to_date('2014/01/01','YYYY/MM/DD'))
 23  )
 24  as select object_id
 25          , owner
 26          , object_name
 27          , object_type
 28          , to_date(trunc(dbms_random.value(
 29                  to_char(to_date('2013/01/01','YYYY/MM/DD'),'J'),
 30                  to_char(to_date('2013/12/31','YYYY/MM/DD'),'J')
 31                  )),'J') rdate
 32  from all_objects
 33  where owner in ('SYS','SYSAUX');

Table created.

This creates the partitions with 2 subpartitions each, per my defined template. For example:

PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201301                        P201301_SPSYS
                               P201301_SPSYSAUX
P201302                        P201302_SPSYS
                               P201302_SPSYSAUX
P201303                        P201303_SPSYS
                               P201303_SPSYSAUX

Next I'll add two more partitions, one with no subpartition definition specified and another with a custom definition:

SQL> alter table p_objects
  2          add partition p201408
  3                  values less than (to_date('2014/09/01','yyyy/mm/dd'));

Table altered.

SQL> alter table p_objects
  2          add partition p201409
  3                  values less than (to_date('2014/10/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201409_spdts values ('DTS')
  6                  , subpartition p201409_spsys values ('SYS')
  7                  , subpartition p201409_spsysaux values ('SYSAUX')
  8          )
  9  ;

Table altered.

The results:
PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201408                        P201408_SPSYS
                               P201408_SPSYSAUX
P201409                        P201409_SPDTS
                               P201409_SPSYS
                               P201409_SPSYSAUX

You can see that the first statement just used the subpartition template as defined in the original CREATE TABLE statement, while the second used the list I defined in the ALTER TABLE statement. I could have defined only the SPDTS subpartition, in which case it would have created just that one and no SYS or SYSAUX subpartitions at all.
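
A quick way to confirm which definition each partition ended up with (a sketch against the standard dictionary view; the listings above are in this shape) is:

-- List each partition's subpartitions for the table.
select partition_name, subpartition_name
from user_tab_subpartitions
where table_name = 'P_OBJECTS'
order by partition_name, subpartition_position;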

Hopefully this post helps a few of you in your searching when faced with a similar task. This was probably obvious to most of you but the mind starts to slip for some of us.
Categories: DBA Blogs

ASM Startup Fails With ORA-04031 After Adding CPUs

Sun, 2014-07-13 15:11
A few weeks ago we upgraded one of our production servers, adding another CPU tray. This brought the number of CPU cores from 80 to 160, and took us from 2TB of RAM to 4TB (just in time for Oracle to announce the in-memory database in 12.1.0.2!).

However, when I went to start things up, ASM wouldn't start, giving me these errors:

ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","unknown object","KKSSP^550","kglseshtSegs")
ORA-04031: unable to allocate 392 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","kdlwss")

ORA-04031: unable to allocate 560 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","KKSSP")
RBAL (ospid: 59427): terminating the instance due to error 4031




A quick search of MOS turned up this gem:

Unable To Start ASM (ORA-00838 ORA-04031) On 11.2.0.3/11.2.0.4 If OS CPUs # > 64. (Doc ID 1416083.1), with this cause:
In 11.2.0.3/11.2.0.4, the "PROCESSES" parameter will be default to "available CPU cores * 80 + 40" (in the ASM spfile). As the default value for "MEMORY_TARGET" is based on "PROCESSES", it can be insufficient if there is a large number of CPU cores or large number of diskgroups which could cause issues (i.e. Grid Infrastructure stack fails to stop with ORA-04031 etc) per Bug:13605735 & Bug:12885278, it is recommended to increase the value of MEMORY_MAX_TARGET & MEMORY_TARGET before upgrading/installing to 11.2.0.3/11.2.0.4 (does not apply to 10g ASM).
We followed the workaround of increasing the ASM memory_target (and memory_max_target) to 4GB (from 500MB) and things started up just fine.
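
Concretely, the change amounted to something like this on the ASM instance (a sketch; the sizes are the ones we used, and memory_max_target only takes effect after a restart of ASM):

-- Run against the ASM instance as SYSASM; restart ASM afterward.
alter system set memory_max_target = 4G scope=spfile sid='*';
alter system set memory_target     = 4G scope=spfile sid='*';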

We've since increased ASM's memory even further and seen even better performance, though I haven't yet taken the time to understand precisely where that improvement comes from. I'll be sure to write another post detailing that.

For now we're definitely enjoying the bigger shared pool and buffer caches in 11.2, and we're excited to dig into the in-memory options in the upcoming 12.1.0.2.
Categories: DBA Blogs

Beware April 2014 PSU and Golden Gate Integrated Capture Users

Tue, 2014-06-03 14:55
When the latest quarterly Patch Set Update (PSU) came out from Oracle, we planned to get it in place as soon as reasonable due to our need to stay current with security patches, and my need to apply what I had assumed were well-tested bug fixes for our 11.2.0.3 installations. However we were in for an unpleasant surprise.

We first applied the April 2014 PSU on our staging & development database hosts and things ran fine. After two weeks, we put it into production. The next morning we noticed that our Golden Gate extracts were abending with a message like this:

2014-05-13 01:41:44 ERROR OGG-02077 Extract encountered a read error in the asynchronous reader thread and is abending: Error code 600, error message:
ORA-00600: internal error code, arguments: [knlogcPackColList:101], [1], [], [], [],[], [], [], [], [], [], []. 

Obviously the PSU was the only change from the day before, when these GoldenGate extracts ran fine. The error itself seemed to match Bug 16306373 (OGG Lightweight capture fails with ORA-600 [knlogcPackColList:101]), which affects integrated capture extracts, exactly what we were trying to use.

So we had two questions to answer:

  1. Was it really the PSU, and if so do we need to rollback?
  2. Why didn't we see this in development/staging?




The answer to #2 came pretty quickly: the extracts had been stopped prior to applying the PSU in development and were never restarted. Our use of GoldenGate is for a new project that is still not quite solid in form, and so the extracts might be shut down for weeks at a time. However the DBA team was not aware of this (something that will be added to the checklist in the future) and so that part was never tested. We decided to turn the extracts on and, sure enough, we saw the problem within 30 minutes.

As far as #1, we opened an SR with Oracle Support. At first they assured us it wasn't the PSU, but within a few hours had started backing away from that confidence and by the end of the day were suggesting we roll back. However we decided to hold off on that due to the luxury of not needing those extracts to run quite yet (we had just been capturing data in production to make sure we could handle the workload). That gave us and Oracle Support some breathing room to get it fixed.

Eventually Oracle Support did confirm that it was Bug 16306373. The patch for this bug, however, conflicted with another patch we had installed. So they created a new merge patch MLR 18689166 for us. This was applied that evening to just our development database home, and immediately we saw more problems.

First, regularly-scheduled materialized view refreshes were failing with:

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SNAPSHOT" 

Then we saw DataPump exports failing with:

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL" 

On top of that, GoldenGate gave us these errors trying to unregister the extract:

GGSCI (stagingdb) 14> unregister extract extfoo database
ERROR: Cannot register or unregister EXTRACT EXTFOO because of the following SQL error: OCI Error ORA-01775: looping chain of synonyms (status = 1775). 

It definitely looked like something in that patch had hosed the catalog. Oracle quickly found the problem:

The problem with the invalid dictionary objects after the installation of patch 18689166 is due to the execution of script e1102000.sql that is being called from postinstall.sql. e1102000.sql is a downgrade script and removes some objects from the data dictionary, resulting in other dependent objects becoming invalid.
It is here that I should point out that Oracle had this merge patch in their QA for over 4 days. I could have seriously hurt myself from simultaneously rolling my eyes and face-palming as violently as I did.

The fix was to restart the instance into UPGRADE mode and run the catupgrd.sql script from $ORACLE_HOME/rdbms/admin, which rebuilds the catalog. We confirmed that this did fix the problem in DEV, and proceeded to patch the other databases, this time commenting out the call to e1102000.sql in the postinstall.sql script, per Oracle's workaround.
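
For the record, the catalog rebuild boils down to something like this from SQL*Plus as SYSDBA (an outline only; catupgrd.sql is a long-running step, so plan the downtime):

-- Bounce into UPGRADE mode, rerun the catalog upgrade script, then restart.
shutdown immediate
startup upgrade
@?/rdbms/admin/catupgrd.sql
shutdown immediate
startup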

So not only did the April 2014 PSU introduce a bug, but the patch for that bug then made things considerably worse. We were fortunate in that we did not yet need the GoldenGate extracts in production. However, many others will not have that luxury and will want to double-check that bug and the PSU if they're using GoldenGate with integrated capture.
Categories: DBA Blogs

archive_lag_target Works in SE

Mon, 2014-05-12 15:34
TL;DR: The archive_lag_target parameter will force log archiving in Standard Edition.

Just a quick note here that I wanted to share since I didn't see anything directly confirming this when I was searching around.

I have an Oracle 11gR2 Standard Edition (SE) database that I'm also maintaining a manual standby for, since Oracle Data Guard is not available in SE. I created a metric extension in EM12c to alert me if the standby is more than 1 hour behind the primary. However, since this is a very low-activity database, the logs were not switching even once an hour. Obviously I could include a command to force a log switch/archive in the script that I use to push archivelogs to the standby. However, we all know that with Data Guard on Enterprise Edition (EE) one would use the archive_lag_target initialization parameter to set the desired maximum standby lag. Oracle enforces this by ensuring a log switch happens at least every X seconds, where X is the archive_lag_target value. By default it is set to 0, which disables the feature.

I had assumed that archive_lag_target would only work in EE, but I decided to give it a try and was pleasantly surprised to see that it works as intended in SE. So I can set archive_lag_target=900 to force at most 15 minutes between log archives (switches would be more frequent if database activity warranted them).
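
In practice it's a single dynamic parameter change (a sketch; 900 seconds matches the 15-minute target above):

-- No restart needed; set on the primary.
alter system set archive_lag_target = 900 scope=both;
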
Categories: DBA Blogs

Don't Fear the EM12c Metric Extensions

Fri, 2014-04-25 11:24
A few weeks ago, our customer support team asked us to automate part of their checklist that looks at the number of active sessions in our production database. In EM12c this seemed like a no-brainer with the Average Active Sessions metric, so I added it to my production incident ruleset and went back to another project. Over the next few days we'd get pinged by EM12c, but the support folks would say it shouldn't have fired. After taking a look we realized that we should be looking only at USER sessions, excluding the BACKGROUND sessions Oracle creates to run the instance (like DBWn and LGWR).

The trouble was that I couldn't find a metric for just the user sessions. I had resigned myself to having a scheduler job or script run the SQL that I wanted and send an email if the count was over my critical threshold. On a whim I put a message out to twitter, and thankfully my friend Leighton answered. He suggested I look at adding a metric extension, something still foreign to me. I had seen the term in the EM12c interface but it sounded like some kind of plugin interface. Turns out it's simply another way of saying "user-defined metrics". Honestly, if they were labelled that way I would have started playing with them much sooner (subtle criticism).

So a quick search turned up a great video that showed just how simple it is to create a metric extension based on a SQL query. In just a few minutes I had the metric extension created, tested and published. You can create many different types of metric extensions across all target types, but in my case a simple SQL query against a database instance was all I needed:

select count(*) from v$session
where type='USER' and status='ACTIVE';

I then defined the warning and critical thresholds for the count and it was done! I added it to my ruleset (removing the stock Average Active Sessions metric) and haven't looked back.

Since then I've created a few other metric extensions, for example a standby lag check on a standard edition physical standby. My only regret is not taking the time to learn about these sooner. I suggest anyone using EM12c do so sooner rather than later.
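
For what it's worth, that standby lag check can be as simple as something along these lines (a hypothetical sketch run on the manually recovered standby; tune the threshold in the metric extension rather than in the query):

-- Hours since the first_time of the most recently applied archived log.
select round((sysdate - max(first_time)) * 24, 2) as hours_behind
from v$log_history;
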
Categories: DBA Blogs

Supplemental Logging and Securefiles Causing DBWn to Block

Thu, 2014-04-17 09:00
A few weeks back, we began making changes to prepare for using Oracle GoldenGate. One of the first steps required is to enable "minimal supplemental logging" at the database level. We did this during an evening maintenance window. However, by the time the morning workload picked up, we started seeing a lot of sessions blocking, and the root blocker was one of the DB Writer (DBWn) processes.

Looking at the blocked sessions, a query similar to this was a common theme:

UPDATE foo
SET foo_data = :data, foo_time = systimestamp
WHERE foo_id = :id

This statement was run by many sessions from our webservers as part of a page load process. Very high-frequency call rate. We knew that the only change in the database was the minimal supplemental logging. Obviously we were preparing to turn it off, but took some time to look into it. It is important to note that for the most part the contention was only with sessions running this update statement.



Looking at the table involved, one unique feature was that the foo_data field is an encrypted CLOB, which in 11g is stored as a securefile.

SQL> desc foo
Name            Null?    Type
--------------- -------- ----------------------------
FOO_ID          NOT NULL VARCHAR2(64)
FOO_DATA                 CLOB ENCRYPT
FOO_TIME        NOT NULL TIMESTAMP(6)

We opened an SR with Oracle and they pointed us to this unpublished bug:


Bug 9351684 : SECUREFILE - CACHE NOLOGGING CAUSES HIGH WRITE COMPLETE WAITS

They described it for us as:

Confirmed as "not a bug" in this bug.

It was stated: "write complete waits in this case are unfortunately, expected and can not be avoided/tuned. Even for NOLOGGING case there is a short invalidation redo that must be generated, and for correct crash recovery, dbwr must wait for redo to be written to disk first before data blocks can be written."

Basically, this is working as intended and there is no workaround if you're using securefiles. For us it was important to move forward with GoldenGate, so we needed a solution that let us keep minimal supplemental logging on. Looking closer, we knew that this table was already on an encrypted tablespace, so we felt comfortable changing the column to use a regular "basicfile" CLOB:

SQL> desc foo
Name            Null?    Type
--------------- -------- ----------------------------
FOO_ID          NOT NULL VARCHAR2(64)
FOO_DATA                 CLOB
FOO_TIME        NOT NULL TIMESTAMP(6)


Since making this change, the problems have gone away. Obviously we were lucky in that we could change the table to not use securefiles. If you have a table that sees a lot of DML with securefiles, you're probably going to have a painful experience with supplemental logging. Beware!
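
If you want to know ahead of time whether you're exposed, a quick inventory of securefile LOB columns (a sketch; adjust the owner filter for your schemas) looks like this:

-- SECUREFILE = 'YES' marks the LOB columns subject to this behavior.
select owner, table_name, column_name, securefile
from dba_lobs
where owner not in ('SYS', 'SYSTEM')
order by owner, table_name;
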
Categories: DBA Blogs
