Terraform Cloud Series – Part 4 (remote state)

Reading Time: 2 minutes

Continuing from where we left off, in this post I will discuss how to tap into a workspace's state file.

In the previous post, we connected workspace dependencies, allowing a child workspace to be executed automatically. In some cases, however, a stack needs to fetch data sources in order to cross-reference resource names, IDs, etc., which lets us make our Terraform code more reusable and flexible.

Let's look at an example of how to pull data from a remote state file stored in Terraform Cloud.

If we look at the execution flow in the previous post, we executed 1-poc-network and the stack trigger executed 2-poc-security-groups. But 2-poc-security-groups requires the vpc_id created in 1-poc-network. So, let's look at the code and break it down a bit.

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = "VPC_ID" # --> VPC ID associating Security group to VPC
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Looking at the vpc_id argument, notice we have to provide a VPC ID every time this code is executed.

vpc_id      = "VPC_ID" # --> VPC ID associating the security group to a VPC

If we were to change this to a variable, it would work, but it requires someone to find the VPC ID and input the value; a lot of work!

What if we could fetch the data from the previous stack and let Terraform figure this out? We need to add the following data block to our Terraform stack:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

Let me explain how to interpret the remote state data source:

data "terraform_remote_state" "vpc" {
  backend = "remote"

The section above declares a terraform_remote_state data source named "vpc" with a backend type of remote (Terraform Cloud).

  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"

And in the section above, we are setting up the config block, which tells Terraform where to fetch the needed data from the remote state file. Notice that there are two required inputs:

  • organization
  • workspace name

Now that we have our remote state set up, let's change the module code to fetch data from the remote state:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Notice that vpc_id now points to a data value from the remote state file of workspace 1-poc-network.

data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id
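Note the doubled vpc_id accessor. That shape only works if the 1-poc-network workspace publishes an output named vpc_id whose value is itself an object (or map) containing a vpc_id key. As a hypothetical sketch of what that output might look like in 1-poc-network (the module name "vpc" is my assumption, not from the original code):

# Hypothetical output in the 1-poc-network workspace
output "vpc_id" {
  # Wrapping the value in an object is what produces the
  # ...outputs.vpc_id.vpc_id reference on the consumer side
  value = {
    vpc_id = module.vpc.vpc_id
  }
}

If the output were a plain string (value = module.vpc.vpc_id), the consumer would simply reference data.terraform_remote_state.vpc.outputs.vpc_id.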

As you can see, our code is now reusable, extracting output information from the remote state instead of hard-coding values.

Using this method, we can create dependencies within our Terraform stacks, using the remote state to extract required attributes. I hope this helped you understand how the backend/remote state works; try it out yourself!

Terraform Cloud Series – Part 3 (Connect Workspace Trigger)

Reading Time: 4 minutes

In the previous post in this series, we covered how to get started with Terraform Cloud and set up VCS with our source repository. In this post, we will look at how we can use the "trigger" capability for dependent workspaces/stacks.

For the purpose of the demo, I will create the following resources using the trigger feature of Terraform Cloud, in the following stack order:

  • 1-poc-network
  • 2-poc-security-groups
  • 3-poc-buckets

1-poc-network

This will create the required network, i.e., the VPC, subnets, IGW, and SG resources needed to create AWS EC2 instances and other resources that require a network.

2-poc-security-groups

This will create an application-specific security group for the purpose of the demo.

3-poc-buckets

Additional resources needed to support the application i.e. S3 buckets, policies, etc.

Putting it all together, here is how the flow looks:

Essentially, we are creating a job dependency, but my experience with the trigger has been mixed, as there seem to be a lot of limitations around checks and balances. In my opinion, the workflow above is good for a repeatable, idempotent process: you don't care if the process is executed multiple times, and you expect the same result every time regardless of the number of executions.

What I experienced is that if a parent job fails during the apply phase, Terraform Cloud will still trigger the jobs downstream; there seems to be no way to tell the downstream job that the parent failed. Regardless of these limitations, it is still a good feature that lets you set up simple chaining. If you need a more task-driven setup, in my opinion, GitLab CI/CD is a better tool.

Now let's look at the workspaces and how to set up the trigger for child jobs:

If we look at the 1-poc-network workspace, under the Run Triggers option we have the option to attach a child workspace.

Note: Even if a run trigger is set up, a child job can still be executed by itself or via a VCS code commit.

Notice that I don't have a trigger set up on the parent job; that is because the trigger is configured on the child: 2-poc-security-groups runs when 1-poc-network is executed! And yes, I know it is confusing; it took me by surprise too!

So, let’s look at the trigger properties for 2-poc-security-groups:

So, basically, we are saying that when the 1-poc-network job is executed, Terraform Cloud should also execute 2-poc-security-groups. Now, let's also look at 3-poc-buckets:

Now you get the idea of how the flow works! Also, if you are planning on taking the HashiCorp Terraform Associate exam, knowing Terraform Cloud is a plus and will help you pass. I will do another post on the TF Associate exam.
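As a side note, the same run trigger can be managed as code with the tfe provider instead of the UI. A minimal sketch (the data-source wiring is my assumption; the workspace and organization names are from this demo):

# Hypothetical Terraform config using the tfe provider
data "tfe_workspace" "network" {
  name         = "1-poc-network"
  organization = "securectl-poc"
}

data "tfe_workspace" "security_groups" {
  name         = "2-poc-security-groups"
  organization = "securectl-poc"
}

# Run 2-poc-security-groups whenever 1-poc-network applies
resource "tfe_run_trigger" "sg_after_network" {
  workspace_id  = data.tfe_workspace.security_groups.id
  sourceable_id = data.tfe_workspace.network.id
}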

Trigger the parent job

Now, let me trigger the job (in this case from a git repo commit). As soon as I commit, the Terraform job is scheduled and executed.

Notice that it picked up the trigger, and Terraform Cloud will execute the dependent workspace after the apply is completed for the source job.

Similarly, 2-poc-security-groups also detected the downstream trigger:

Now, notice that there is nothing to do, as my bucket was already created. However, when I changed the bucket name in the repo, the job still executed independently.

Conclusion

The Terraform workspace trigger feature allows users to create stack dependencies when working with a large stack. This is a good method when you need multiple connected workspaces, or when you change dependent resources that require a complete teardown and re-create.

Resuming blogging!

Reading Time: < 1 minute

Back in Action!

It has been a while since I posted anything; I have been busy learning new things. I will share my journey in a multi-part series.

I will first start with how the tech industry is changing and how individuals need to evolve and learn new skill sets.

My days have typically been very tedious, running 100 MPH trying to get things done. Sometimes I feel exhausted and want to just give up, but my love for technology and solving problems always pulls me back.

Last summer, I completed my AWS Solutions Architect certification, and I hope to keep building on my Amazon skill set. I am amazed by the AWS platform and enjoy learning new things every day.

I will start to post about my learnings and the challenges I have gone through. I would also like to start a two-way conversation and understand what readers would like me to focus on a bit more.

Hope to see you soon!

Thanks,

Oracle RAT Step by Step - Part 3

Reading Time: 3 minutes

Replay workload on target database - Part 2

Prepare Capture data

Copy Capture data

Copy the capture files from the source system (production) to the replay directory created on the target (see the "Create Directory" step earlier in this series).
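As a hypothetical sketch, the copy could be done with scp; the source host and capture path here are assumptions, and the target path matches the replay directory example used later in this series:

# Hypothetical hosts/paths; adjust to your environment
scp -r oracle@prod-host:/u01/app/oracle/capture/* /export/home/oracle/rat/target_db/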

 

Preprocessing Capture data

BEGIN
  dbms_workload_replay.process_capture ( capture_dir => '<REPLAY_DIR_NAME_WHERE_CAPTURE_FILES_BEEN_STAGED>' );
END;
/

Example:

BEGIN
  dbms_workload_replay.process_capture ( capture_dir => 'REPLAY_TGT_1' );
END;
/

 

-- Monitor time to process data --

The statements below give the estimated time remaining, the percent completion, and the amount of data processed so far (in GB). They are informational only.

SELECT dbms_workload_replay.process_capture_remaining_time FROM dual;
SELECT dbms_workload_replay.process_capture_completion FROM dual;
select sum(bytes_done/1024/1024/1024) as GB from wrr$_processing_progress;

 

 

Example output:

SQL> SELECT dbms_workload_replay.process_capture_remaining_time FROM dual;

PROCESS_CAPTURE_REMAINING_TIME
------------------------------
                    5.55503472

SQL> SELECT dbms_workload_replay.process_capture_completion FROM dual;

PROCESS_CAPTURE_COMPLETION
--------------------------
                4.47761194

 

In Oracle 11.2.0.3, the PROCESS_CAPTURE procedure creates a new subdirectory called pp11.2.0.3.0 in the replay directory, containing the following files:

  • wcr_calibrate.html
  • wcr_commits.extb
  • wcr_conn_data.extb
  • wcr_data.extb
  • wcr_dep_graph.extb
  • wcr_login.pp
  • wcr_process.wmd
  • wcr_references.extb
  • wcr_scn_order.extb
  • wcr_seq_data.extb

 

 

Workload Analyzer

The Workload Analyzer is a Java program that analyzes a workload capture directory:

java -classpath $ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/rdbms/jlib/dbrparser.jar:$ORACLE_HOME/rdbms/jlib/dbranalyzer.jar: oracle.dbreplay.workload.checker.CaptureChecker <OS location of Capture Staged files> jdbc:oracle:thin:@<Hostname>:<Listener Port>:<Service_name/SID>

 

 

Example:

java -classpath $ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/rdbms/jlib/dbrparser.jar:$ORACLE_HOME/rdbms/jlib/dbranalyzer.jar: oracle.dbreplay.workload.checker.CaptureChecker /export/home/oracle/rat/target_db jdbc:oracle:thin:@hostname1-vip:1521:TARGET

 

-- Results are stored in the capture directory in the following files:

<REP_DIR>/wcr_cap_analysis.html
<REP_DIR>/wcr_cap_analysis.xml

-- END --

Initialize Replay

 

 

-- Get capture name

set lines 150
col NAME for a45
select id, NAME, DBNAME, DBVERSION, STATUS,
       to_char(START_TIME, 'MM/DD/YYYY HH24:MI:SS') as START_TIME,
       to_char(END_TIME, 'MM/DD/YYYY HH24:MI:SS') as END_TIME
from DBA_WORKLOAD_CAPTURES;

BEGIN
  dbms_workload_replay.initialize_replay ( replay_name => '<REPLAY_NAME>',
                                           replay_dir  => '<REPLAY_DIRECTORY_NAME>' );
END;
/

 

Example:

BEGIN
  dbms_workload_replay.initialize_replay ( replay_name => 'REPLAY_QECMTRP_1',
                                           replay_dir  => 'REPLAY_TGT_1' );
END;
/

 

-- Known to populate the following base tables:
WRR$_REPLAYS
WRR$_CONNECTION_MAP
-- END --
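As a quick sanity check (my addition; not part of the original runbook), you can confirm the replay is registered before moving on:

-- The new replay should show STATUS = 'INITIALIZED'
select id, name, status from dba_workload_replays;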

Generate Re-Map Connection

 

-- Get connection info

set pages 100
set lines 150
col CAPTURE_CONN for a50
col REPLAY_CONN for a50
spool save_pre_remap.out
select * from dba_workload_connection_map;
spool off

 

-- The SQL below generates a re-map connection script, which must then be executed so replay sessions connect to the replay host --

Warning: This is an important step; the output must be verified before proceeding to the next step.

 

 

 

set lines 189
set echo off
set head off
set term off
set long 200
set feedback off
spool /tmp/remap_replay_connection.sql
SELECT 'EXEC dbms_workload_replay.remap_connection (connection_id=>'||a.conn_id||',replay_connection => ''(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<TARGET DB LISTENER HOST>)(PORT=<LISTENER-PORT>))(CONNECT_DATA=(SID=<DB SID/SERVICE NAME>)))'');'
FROM dba_workload_connection_map a, dba_workload_replays b
WHERE a.replay_id = b.id AND b.status = 'INITIALIZED'
ORDER BY a.conn_id;
spool off

 

 

Example:

sqlplus -s

set lines 250
set echo off
set head off
set term off
set long 2000
set feedback off
spool /tmp/remap_replay_connection.sql
SELECT 'EXEC dbms_workload_replay.remap_connection (connection_id=>'||a.conn_id||',replay_connection => ''(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Hostname1-vip)(PORT=1521))(CONNECT_DATA=(SID=TARGET1)))'');'
FROM dba_workload_connection_map a, dba_workload_replays b
WHERE a.replay_id = b.id AND b.status = 'INITIALIZED'
ORDER BY a.conn_id;
spool off
exit;

 

 

 

Execute Re-Map connection script

In this step, execute the script that was generated above and verify the results:

SQL> @/tmp/remap_replay_connection.sql

 

Validate Replay Connections

-- Save post-re-map connection info

set pages 100
set lines 150
col CAPTURE_CONN for a50
col REPLAY_CONN for a50
spool ${ORACLE_SID}_post_remap_connection.out
select * from dba_workload_connection_map;
spool off

 

-- Get count of NULL replay connections

select count(*) from dba_workload_connection_map where REPLAY_CONN is null;

-- If the count is > 0, generate a new connection re-map script and run it again

 

 

-- Get count by connection name and status

set pages 100
set lines 180
col CAPTURE_CONN for a50
col REPLAY_CONN for a50
SELECT count(*), a.REPLAY_CONN, a.REPLAY_ID, b.NAME, b.STATUS
FROM dba_workload_connection_map a, dba_workload_replays b
WHERE a.replay_id = b.id
GROUP BY a.REPLAY_CONN, a.REPLAY_ID, b.NAME, b.STATUS;

set pages 100
set lines 150
col CAPTURE_CONN for a50
col REPLAY_CONN for a50
select count(*), REPLAY_CONN, REPLAY_ID from dba_workload_connection_map group by REPLAY_CONN, REPLAY_ID;

Oracle RAT Step by Step - Part 3

Reading Time: 2 minutes

Replay workload on target database - Part 1

Restore Target database

 

RMAN or export/import can be used to stage the target/replay database. To restore a database using RMAN, please refer to the Oracle RMAN documentation.

Drop Db_links

 

Drop db_links that may be pointing to external systems, or re-point them to test systems.

set lines 150
col db_link for a40
select OWNER, DB_LINK, USERNAME, to_char(CREATED, 'MM/DD/YYYY HH24:MI:SS') as CREATED from dba_db_links;

 

-- Generate drop statements for public database links

sqlplus -s / as sysdba <<EOF
set feedback 0 pages 0 lines 132 echo on timing off
set echo off
set term off
spool /tmp/drop_db_links.sql
select 'DROP '||OWNER||' DATABASE LINK '||DB_LINK||' ; ' from dba_db_links where OWNER='PUBLIC';
spool off
EOF

 

Note: If db_links can't be dropped using "drop database link", use the statements below.

 

-- Check sys db_link tables
select * from sys.link$;
select db_link, username, host from user_db_links;

-- Delete records manually
delete from sys.link$;
commit;

 

Gather Schema Stats

 

BEGIN
  DBMS_STATS.gather_schema_stats ( ownname          => 'SCHEMA1',
                                   cascade          => true,
                                   estimate_percent => dbms_stats.auto_sample_size,
                                   method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                   degree           => 8 );
END;
/

BEGIN
  DBMS_STATS.gather_schema_stats ( ownname          => 'SCHEMA2',
                                   cascade          => true,
                                   estimate_percent => dbms_stats.auto_sample_size,
                                   method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                   degree           => 8 );
END;
/

BEGIN
  DBMS_STATS.gather_schema_stats ( ownname          => 'SCHEMA3',
                                   cascade          => true,
                                   estimate_percent => dbms_stats.auto_sample_size,
                                   method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                   degree           => 8 );
END;
/

 

 

Create Flashback Restore Point

 

To replay the capture data multiple times, Oracle Database Flashback allows us to restore the database to an SCN without having to use an RMAN restore.

-- Check for existing restore points

set linesize 121
col name format a15
col time format a32

SELECT name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size
FROM gv$restore_point;

-- Create guaranteed restore point

CREATE RESTORE POINT <RESTORE POINT NAME> GUARANTEE FLASHBACK DATABASE;

Example:

CREATE RESTORE POINT PRE_REPLAY_RAT GUARANTEE FLASHBACK DATABASE;
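When it is time to rerun the replay, flash the database back to this restore point instead of restoring from backup. A minimal sketch (my addition for completeness; run as SYSDBA, and note the database must be mounted and then opened with RESETLOGS):

shutdown immediate
startup mount
FLASHBACK DATABASE TO RESTORE POINT PRE_REPLAY_RAT;
alter database open resetlogs;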

 

 

 

 

 

 

 

Create Directory

 

-- Check existing directories

set lines 1024
col DIRECTORY_PATH for a90
select * from dba_directories;

-- Create RAT replay directory

create or replace directory <DIRECTORY NAME> as '<DIRECTORY LOCATION>';

Example:

create or replace directory REPLAY_TGT_1 as '/export/home/oracle/rat/target_db';

 

 

SSD test b/w Samsung and Patriot 128GB

Reading Time: < 1 minute

Speed test b/w Samsung and Patriot 128GB:

System 1 : Patriot 128GB:

 

[root@hws03 SSD1]#  hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   14160 MB in  2.00 seconds = 7083.45 MB/sec
 Timing buffered disk reads:  844 MB in  3.00 seconds = 281.25 MB/sec
[root@hws03 SSD1]#

[root@hws03 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 0.517184 seconds, 1.0 GB/s
[root@hws03 SSD1]#

System 2 : Samsung 128GB:

 

[root@hws04 SSD1]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   13644 MB in  2.00 seconds = 6826.51 MB/sec
 Timing buffered disk reads:  476 MB in  3.00 seconds = 158.63 MB/sec

[root@hws04 SSD1]# dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 0.592914 seconds, 905 MB/s
[root@hws04 SSD1]#
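One caveat worth noting (my addition): dd writing from /dev/zero into a file normally lands in the page cache, which is why the write figures above can exceed the buffered disk reads reported by hdparm. A sketch of a more cache-honest variant:

# oflag=direct bypasses the page cache; conv=fdatasync forces a flush
# to disk before dd reports its timing
dd if=/dev/zero of=/SSD1/ssdtest bs=512k count=1k oflag=direct conv=fdatasync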

 

 

Steps to move OMS agent to new OMS repository

Reading Time: < 1 minute

The steps below can be used to move an OMS agent from the OLD to the NEW OMS repository w/o un-installing and re-installing…

On Agent machine/box:

Stop agent: emctl stop agent
Modify emd.properties : Change REPOSITORY_URL and emdWallSrcUrl to new OMS repository Cleanup following files and dir: under $ORACLE_HOME/sysman/emd/upload and $ORACLE_HOME/sysman/emd/state
$ORACLE_HOME/sysman/emd/lastupld.xml
$ORACLE_HOME/sysman/emd/agntstmp.txt
$ORACLE_HOME/sysman/emd/protocol.ini

On the OLD OMS database repository, find the agent that needs to be removed and use the package below to remove/clean it up:

SQL> select target_name from mgmt_targets where target_name like '%app%';

TARGET_NAME
--------------------------------------------------------------------------------
vwappp005.Shared
vwappp006.Shared
vwappp014.Shared
vwappt001.Shared
vxappp003
vwappp005.Shared:3872
wappp006.Shared:3872
wappp014.Shared:1830
wappp014.Shared:3872
wappt001.Shared:3872
xappp003:3872

11 rows selected.

SQL> exec mgmt_admin.cleanup_agent('vwappt001.Shared:3872');

PL/SQL procedure successfully completed.

On Agent box:

Clear agent state: emctl clearstate agent
Secure agent: emctl secure agent
Start agent: emctl start agent
Upload agent: emctl upload agent

Once all the above steps are done, the agent should appear in the new OMS repository.

PL/SQL Procedure to compare dates

Reading Time: < 1 minute

Hello All,

I haven't updated my blog in a while, so here is what I have been working on:

I was asked to put together a PL/SQL procedure that compares two dates and uses the result to decide what to do.

This procedure takes two date inputs from the user and raises an error if they are more than two days apart (the value can be adjusted according to business requirements).

CREATE OR REPLACE PROCEDURE TEMP1 (sdate IN VARCHAR2, edate IN VARCHAR2)
IS
  v_sdate DATE;
  v_edate DATE;
BEGIN
  DBMS_OUTPUT.put_line ('Process Start Time: ' || TO_CHAR (SYSDATE, 'MM/DD/YYYY HH24:MI:SS'));
  BEGIN
    -- Convert the input strings to DATE values
    select to_date(sdate, 'YYYY-MM-DD:HH24:MI') into v_sdate from dual;
    select to_date(edate, 'YYYY-MM-DD:HH24:MI') into v_edate from dual;

    -- Raise an error if the dates are more than two days apart
    IF (v_edate - v_sdate > 2) THEN
      raise_application_error(-20001, 'An error was encountered - Provide correct date');
    END IF;

    DBMS_OUTPUT.put_line ('START DATE: ' || v_sdate);
    DBMS_OUTPUT.put_line ('END DATE: ' || v_edate);
  END;
END;
/
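A quick usage sketch (the dates are made-up examples):

-- Enable output, then call the procedure with start/end dates
set serveroutput on
exec TEMP1('2014-01-01:10:00', '2014-01-02:09:00');  -- within range, prints the dates
exec TEMP1('2014-01-01:10:00', '2014-01-05:09:00');  -- more than 2 days apart, raises ORA-20001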