Showing posts from August, 2012

Monitoring ActiveMQ with JMX

The ActiveMQ broker publishes its metrics over JMX at a URL of the form (host and port shown as placeholders; the originals were stripped):

  service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi

A Java based Nagios plugin for JMX:
http://exchange.nagios.org/directory/Plugins/Java-Applications-and-Servers/check_jmx/details

Check a broker-level attribute:

  java -jar check_jmx.jar -U $ACTIVEMQ_JMX_URL \
       -O "org.apache.activemq:BrokerName=<brokerName>,Type=Broker" \
       -A <attribute>

Check a queue-level attribute:

  java -jar check_jmx.jar -U $ACTIVEMQ_JMX_URL \
       -O "org.apache.activemq:BrokerName=<brokerName>,Type=Queue,Destination=<queueName>" \
       -A <attribute>
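A minimal sketch of wiring the pieces together. The host, port, broker name, and queue name below are hypothetical examples, not values from the original post:

```shell
# Build the JMX service URL and MBean object names the plugin expects.
# Host, port, broker, and queue names here are hypothetical.
ACTIVEMQ_HOST=mq01.example.com
ACTIVEMQ_PORT=1099
ACTIVEMQ_JMX_URL="service:jmx:rmi:///jndi/rmi://${ACTIVEMQ_HOST}:${ACTIVEMQ_PORT}/jmxrmi"
# Broker-wide MBean and a per-queue MBean
BROKER_MBEAN="org.apache.activemq:BrokerName=myBroker,Type=Broker"
QUEUE_MBEAN="org.apache.activemq:BrokerName=myBroker,Type=Queue,Destination=orders"
echo "$ACTIVEMQ_JMX_URL"
echo "$QUEUE_MBEAN"
```

These values would then be passed to the plugin, e.g. `java -jar check_jmx.jar -U "$ACTIVEMQ_JMX_URL" -O "$QUEUE_MBEAN" -A QueueSize -w 1000 -c 5000`, where the -w/-c warning and critical thresholds are illustrative.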

Oracle PL/SQL block executed in PL/SQL

For a rerunnable migration it was necessary to conditionally compile/execute PL/SQL that contained a static cursor referencing a table that might not exist. The static cursor was required for UPDATE ... WHERE CURRENT OF, but if the table didn't exist the static cursor definition wouldn't compile. Wrapping the inner block in EXECUTE IMMEDIATE defers its compilation until run time:

DECLARE
BEGIN
  EXECUTE IMMEDIATE 'DECLARE
    BEGIN
      execute immediate ''select &7 from dual'';
    END;';
END;
/
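A fuller sketch of the pattern, with hypothetical table and cursor names: check the data dictionary first, and only then hand the block containing the static cursor to EXECUTE IMMEDIATE, so it is parsed only when the table actually exists.

```sql
DECLARE
  tabCount NUMBER;
BEGIN
  SELECT COUNT(*) INTO tabCount
    FROM user_tables
   WHERE table_name = 'LEGACY_DATA';  -- hypothetical table name
  IF tabCount = 1 THEN
    -- The quoted block is parsed only when this branch is reached,
    -- so the static cursor on legacy_data cannot break compilation
    -- of the outer block when the table is absent.
    EXECUTE IMMEDIATE '
      DECLARE
        CURSOR c IS SELECT id FROM legacy_data FOR UPDATE OF id;
      BEGIN
        FOR r IN c LOOP
          UPDATE legacy_data SET id = id WHERE CURRENT OF c;
        END LOOP;
        COMMIT;
      END;';
  END IF;
END;
/
```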

Fix non-unique PK id Oracle

DECLARE
  CURSOR sqlCur IS
    SELECT id FROM (SELECT id, COUNT(1) cnt FROM <someTable> GROUP BY id) WHERE cnt > 1;
  recId <someTable>.id%TYPE;
  newId <someTable>.id%TYPE;
  CURSOR dupeCur (recId <someTable>.id%TYPE) IS
    SELECT id FROM <someTable> WHERE id = recId FOR UPDATE OF id;
BEGIN
  DBMS_OUTPUT.PUT_LINE('<someTable> checking and fixing dupe ids before creating PK');
  --update 1 record from each id with a dupe
  --repeat until none remaining
  <<dupe_detect>>
  LOOP
    OPEN sqlCur;
    FETCH sqlCur INTO recId;
    IF sqlCur%NOTFOUND THEN
      CLOSE sqlCur;  --nothing left to fix
      EXIT dupe_detect;
    END IF;
    <<dupe_fix>>
    LOOP
      SELECT <someTable>_seq.nextval INTO newId FROM dual;
      FOR rec IN dupeCur(recId) LOOP
        --update only the first record and exit
        UPDATE <someTable> SET id = newId WHERE CURRENT OF dupeCur;
        EXIT;
      END LOOP;
      COMMIT;
      FETCH sqlCur INTO recId;
      EXIT dupe_fix WHEN sqlCur%NOTFOUND;
    END LOOP dupe_fix;
    CLOSE sqlCur;
  END LOOP dupe_detect;
END;
/
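Once every id is unique, the primary key can be created and the fix verified; a minimal sketch with a hypothetical table name:

```sql
-- Hypothetical names; substitute the real table for some_table.
-- This should return no rows after the dedupe loop completes:
SELECT id FROM some_table GROUP BY id HAVING COUNT(1) > 1;

ALTER TABLE some_table ADD CONSTRAINT some_table_pk PRIMARY KEY (id);
```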

Quickly Generate and Load data in Oracle

The fastest way I've found over the years to generate and load data into Oracle is to use sqlldr in direct mode, reading from a Linux pipe, with all constraints disabled. Since everything on Linux is a file, any program can generate data and write it to the pipe. Disabling all constraints first dramatically increases the speed of the load, and indexes need to be rebuilt afterwards since direct loading invalidates them. Create a loader.ctl file which tells sqlldr how and where to load the data. The control file below instructs sqlldr to read comma separated (CSV) column data from /tmp/loadData.pipe and write it to the table in direct mode, which bypasses logging for speed. It should be noted that the loaded data is not recoverable should the database need to be restored, unless you take a backup after the load. Only the initial load is unlogged; all future transactions against the data are logged and recoverable.

Control file (the original was truncated after UNRECOVERABLE; the table and column names below are placeholders):

OPTIONS (DIRECT=TRUE)
UNRECOVERABLE
LOAD DATA
INFILE '/tmp/loadData.pipe'
APPEND
INTO TABLE someTable
FIELDS TERMINATED BY ','
(col1, col2, col3)
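The pipe mechanics can be sketched as below. The generator, row count, and pipe path are illustrative assumptions; in a real load, sqlldr would be the reader of the pipe:

```shell
# Create a named pipe; any program can act as the data generator.
PIPE="${TMPDIR:-/tmp}/loadData.$$.pipe"
mkfifo "$PIPE"

# Background generator: 1000 CSV rows of "id,value" (stands in for
# whatever program produces your data).
seq 1 1000 | awk '{print $1 "," $1 * 2}' > "$PIPE" &

# sqlldr would consume the pipe here, e.g.:
#   sqlldr userid=<user>/<password> control=loader.ctl log=loader.log
# For illustration, count the rows flowing through the pipe instead:
ROWS=$(wc -l < "$PIPE")
echo "rows=$ROWS"

rm "$PIPE"
```

Because a fifo blocks until both ends are open, the generator and sqlldr run concurrently and no intermediate flat file ever hits disk.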