Oracle 12c Cloud Control Agent new OMS repository

Reading Time: 2 minutes

After working on this for a number of hours, I found out that 12c does not support pointing an existing agent to a new OMS repository… really sad. Unfortunately, there is no way around it other than uninstalling and reinstalling the agent…
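Before reinstalling, it is worth checking what the agent currently thinks it monitors, since in 12c the target list is owned by the OMS and only pushed down to the agent. A quick look on the agent host (the instance-home path below is just an example; adjust for your install):

# list the targets the agent currently holds in its local targets.xml
emctl config agent listtargets

# the pushed-down copy lives under the agent instance home (path varies per install)
cat /u01/app/oracle/agent12c/agent_inst/sysman/emd/targets.xml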

See the note from Oracle below:

EM 12c: How to Point Existing 12c Agent to New OMS? => Change Location of OMS and Repository of 12c Agent [ID 1490457.1]


Applies to:
Enterprise Manager Base Platform – Version 12.1.0.1.0 and later
Information in this document applies to any platform.
Goal

How to point an existing 12c Agent to a new 12c OMS? Is it possible to change the location of the OMS/Repository of a 12c Agent?
Fix

Reconfiguring a 12c Agent from one 12c OMS to another 12c OMS, without re-installing the Agent, is currently not supported.

It is not possible to change the OMS/Repository location properties on the 12c Agent's side from one OMS/Repository to a new one, as was possible in EM 10g and EM 11g.

When a 12c Agent is re-configured to point to a new 12c OMS, that OMS will not accept the Agent or its targets for monitoring.

In previous Enterprise Manager releases (10g and 11g), target discovery was initiated by the Agent.

The targets were discovered, recorded first in the /sysman/emd/targets.xml file, and then pushed to the OMS.

In EM 12c, however, the OMS is the source of truth: target registration happens first on the OMS side, and the targets are then pushed down to the Agent in the targets.xml file.

Hence, when an agent needs to point to a new OMS, the 12c Agent must be re-installed from that new OMS.

There is an enhancement request logged:

BUG:14532300 POINTING AGENT TO NEW 12C OMS

Note: Refer to the document below for help pushing/installing an Agent from the EM 12c console:

Document:1360183.1 – Cloud Control Agent 12c Installation – How to install Enterprise Manager Cloud Control 12.1 Agent

References
BUG:14532300 – POINTING AGENT TO NEW 12C OMS
NOTE:1360183.1 – Cloud Control Agent 12c Installation – How to install Enterprise Manager Cloud Control 12.1 Agent from the EM 12c Console

Hadoop cluster deployment

Reading Time: < 1 minute

I have successfully created multiple Hadoop clusters; the biggest hurdle I have run into is documentation.

Documentation is either missing key steps or, due to environment differences, simply does not apply. The following is a list of the clusters I have created:

Hadoop Cloudera single node Master/Datanode
Apache Hadoop manual install by downloading pkg’s
Apache Hadoop CDH4 3 node cluster.
Apache Hadoop CDH4 7 node cluster.

I’d like to hear what other people have to say about their experience.

SSD test b/w Samsung and Patriot 128GB

Reading Time: < 1 minute

Speed test b/w Samsung and Patriot 128GB:

System 1 : Patriot 128GB:

 

[root@hws03 SSD1]#  hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   14160 MB in  2.00 seconds = 7083.45 MB/sec
 Timing buffered disk reads:  844 MB in  3.00 seconds = 281.25 MB/sec
[root@hws03 SSD1]#

[root@hws03 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 0.517184 seconds, 1.0 GB/s
[root@hws03 SSD1]#

System 2 : Samsung 128GB:

 

[root@hws04 SSD1]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   13644 MB in  2.00 seconds = 6826.51 MB/sec
 Timing buffered disk reads:  476 MB in  3.00 seconds = 158.63 MB/sec

[root@hws04 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 0.592914 seconds, 905 MB/s
[root@hws04 SSD1]#
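Note that the dd numbers above mostly measure the Linux page cache, since the 512 MB write fits comfortably in RAM. Re-running the same test with direct or synced I/O (both standard GNU dd flags) gives a more honest figure for the SSD itself:

[root@hws03 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k oflag=direct
[root@hws03 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k conv=fdatasync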

 

 

Using Oracle Flashback to find data

Reading Time: < 1 minute

Here are a few SQL statements that can be used to look up data using Oracle table versions by timestamp:

-- select from a table as of a point in time
select *
from PSOPRDEFN as of timestamp TO_TIMESTAMP('2009-05-19 21:24:02', 'YYYY-MM-DD HH24:MI:SS');

SELECT versions_startscn, versions_starttime,
       versions_endscn, versions_endtime,
       versions_xid, versions_operation,
       oprid, version, oprdefndesc
FROM PSOPRDEFN
     VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2009-05-19 20:00:08', 'YYYY-MM-DD HH24:MI:SS')
                          AND TO_TIMESTAMP('2009-05-19 21:30:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE oprid = 'XXXX';
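If the lookup shows a row was deleted or overwritten, the same pre-change image can be used to put it back. A minimal sketch, assuming the row for OPRID 'XXXX' was deleted after the timestamp shown (the predicate and timestamp are examples; adjust to your case):

-- re-insert the row as it existed before the change (sketch only)
insert into PSOPRDEFN
select * from PSOPRDEFN as of timestamp TO_TIMESTAMP('2009-05-19 21:24:02', 'YYYY-MM-DD HH24:MI:SS')
where oprid = 'XXXX';
commit;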

select *
from dba_audit_trail
where timestamp between
      TO_DATE('05/19/2009:20:50:00', 'MM/DD/YYYY:HH24:MI:SS') AND
      TO_DATE('05/19/2009:21:25:00', 'MM/DD/YYYY:HH24:MI:SS')
and os_username not in ('psoft', 'root')
and username != 'xxxx'        -- users that you don't want to show on the report
and action_name != 'LOGOFF'   -- likewise, exclude rows where ACTION is 'LOGOFF'
order by timestamp;

Extended audit trail:

select *
from DBA_COMMON_AUDIT_TRAIL
where extended_timestamp between
      TO_DATE('07/29/2009:16:35:00', 'MM/DD/YYYY:HH24:MI:SS') AND
      TO_DATE('07/29/2009:16:35:49', 'MM/DD/YYYY:HH24:MI:SS')
order by extended_timestamp desc;

For your reference pleasure: http://www.petefinnigan.com/papers/audit.sql

Steps to move OMS agent to new OMS repository

Reading Time: < 1 minute

The steps below can be used to move an OMS agent from the OLD to the NEW OMS repository without uninstalling and reinstalling…

On the agent machine:

Stop the agent: emctl stop agent
Modify emd.properties: change REPOSITORY_URL and emdWalletSrcUrl to point to the new OMS repository
Clean up the following files and directories (see the sketch below):
everything under $ORACLE_HOME/sysman/emd/upload and $ORACLE_HOME/sysman/emd/state
$ORACLE_HOME/sysman/emd/lastupld.xml
$ORACLE_HOME/sysman/emd/agntstmp.txt
$ORACLE_HOME/sysman/emd/protocol.ini
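A shell sketch of that cleanup (the paths come straight from the list above; verify them on your agent before deleting anything):

emctl stop agent
rm -rf $ORACLE_HOME/sysman/emd/upload/*
rm -rf $ORACLE_HOME/sysman/emd/state/*
rm -f  $ORACLE_HOME/sysman/emd/lastupld.xml
rm -f  $ORACLE_HOME/sysman/emd/agntstmp.txt
rm -f  $ORACLE_HOME/sysman/emd/protocol.ini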

On the OLD OMS repository database, find the agent that needs to be removed and use the package below to clean it up:

SQL> select target_name from mgmt_targets where target_name like '%app%';

TARGET_NAME
--------------------------------------------------------------------------------
vwappp005.Shared
vwappp006.Shared
vwappp014.Shared
vwappt001.Shared
vxappp003
vwappp005.Shared:3872
wappp006.Shared:3872
wappp014.Shared:1830
wappp014.Shared:3872
wappt001.Shared:3872
xappp003:3872

11 rows selected.

SQL> exec mgmt_admin.cleanup_agent('vwappt001.Shared:3872');

PL/SQL procedure successfully completed.

Back on the agent machine:

Clear the agent state: emctl clearstate agent
Secure the agent: emctl secure agent
Start the agent: emctl start agent
Upload to the OMS: emctl upload agent

Once all of the above steps are done, the agent should appear in the new OMS repository.
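As a final sanity check, the same mgmt_targets query used earlier can be run against the NEW repository to confirm the host and agent targets have registered (the name pattern is just an example):

SQL> select target_name, target_type from mgmt_targets where target_name like '%appt001%';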

RMAN backup using TAG and copying file between ASM to ASM

Reading Time: < 1 minute

I recently wrote a fairly complex script… the script itself, with details, is coming soon.

This script creates a LEVEL 0 backup and copies the backup files from one ASM instance to another without human intervention.

The script performs the following tasks:

* Back up the database at level 0
* Validate the database backup set
* Verify the backup set files
* Build the ASM copy command
* Copy the backup set files from the source ASM to the destination ASM

Please ping me to get a copy of the script; a rough sketch of the core steps is below.
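Until the full script is posted, here is a rough bash sketch of the steps it automates; the tag, ORACLE_SID, disk group names and backup-piece name below are placeholders, not values from the actual script:

#!/bin/bash
# Sketch only -- placeholders throughout; not the production script described above.
export ORACLE_SID=orcl
TAG=LVL0_$(date +%Y%m%d)

# Level 0 backup with a tag, list the pieces for that tag, then validate the backup
rman target / <<EOF
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG '${TAG}';
LIST BACKUPSET TAG '${TAG}';
RESTORE DATABASE VALIDATE;
EOF

# Copy each backup piece from the source ASM disk group to the destination one
# (run with the grid/ASM environment set; asmcmd cp can copy between disk groups)
asmcmd cp +DATA/ORCL/BACKUPSET/lvl0_piece_01.bkp +RECO/ORCL/BACKUPSET/lvl0_piece_01.bkp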

Exadata new bug with 11.2.0.2.x

Reading Time: < 1 minute

A new bug was discovered with Exadata 11.2.2.4.2 in regard to the OFA InfiniBand drivers… details to come soon…

 

Trace file contents:

System state dump requested by (instance=1, osid=31982), summary=[SYSTEMSTATE_GLOBAL: global system state dump request (kjdgdss_g)].

[ DISKMON][30223] dskm_dump_all()+281 call kgdsdst() 000000000 ? 000000000 ?
[ DISKMON][30223] dskm_async_handler( call dskm_dump_all() 000000000 ? 000000000 ?
[ DISKMON][30223] __sighandler() call dskm_async_handler( 000000000 ? 000000000 ?
[ DISKMON][30223] __poll()+102 signal __sighandler() 7FFFF0F05598 ? 000000001 ?
[ DISKMON][30223] skgznp_accept()+120 call __poll() 7FFFF0F05598 ? 000000001 ?
[ DISKMON][30223] dskm_main()+3052 call skgznp_accept() 01DDBF320 ? 2AAAAC022910 ?
[ DISKMON][30223] __do_global_ctors_a call dskm_main() 00000DDEB ? 00000001D ?
[ DISKMON][30223] __libc_start_main() call __do_global_ctors_a 00000DDEB ? 00000001D ?

Hadoop HDFS database

Reading Time: < 1 minute

I have recently started to mess with Hadoop HDFS (the Hadoop Distributed File System).

Here is what I plan to do:

  1. Create a standalone, single-node HDFS (see the sketch after this list)
  2. Create a 3-node HDFS cluster
  3. Load test both the single-node and the clustered setups
  4. Run the same database load test against Oracle, MySQL, and HDFS
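For step 1, a minimal single-node setup with the CDH4 packages looks roughly like this; the service names match the CDH4 packaging, and core-site.xml/hdfs-site.xml are assumed to already point fs.defaultFS and the dfs directories at the local node:

# one-time format of the NameNode metadata (destroys any existing HDFS metadata)
sudo -u hdfs hdfs namenode -format

# start the HDFS daemons via the CDH4 init scripts
sudo service hadoop-hdfs-namenode start
sudo service hadoop-hdfs-datanode start

# quick smoke test: create a directory and list the root of the new filesystem
sudo -u hdfs hadoop fs -mkdir /loadtest
hadoop fs -ls /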

I should have identical hardware to perform the load tests… keep checking back for the setup and results.

Also, I am interested to know how many people are interested in Hadoop.

 

Thanks,

 

 

[polldaddy poll=5966024]