Wednesday, September 25, 2019

GoldenGate : setup Bi-Directional Replication in Multitenant environment (PDBs)


Intro


The GoldenGate replication solution allows you to capture, replicate, filter, transform, and deliver real-time transactional data across Oracle or heterogeneous environments.
GoldenGate 12c, combined with the Oracle multitenant architecture, introduced some changes to the way replication works (see Fig-1). Another interesting fact is that a few Oracle Streams features have now found their way into GoldenGate.

This article describes how to configure Oracle GoldenGate 12c to perform bi-directional replication between a source and a target 12c container database. We will be using Integrated Extract and Integrated Replicat, since
integrated mode extraction is the only supported method for multitenant databases.


Multitenant specifics


Here are some major changes when GoldenGate is coupled with the Oracle Database 12c multitenant architecture:

  • One Extract process can be configured to capture changes from multiple Pluggable Databases, since the same redo stream is shared by all PDBs (see the sketch after this list).
  • A Replicat process can only connect and apply to one Pluggable Database (PDB).
  • Extract must operate in integrated capture mode (the only supported mode for multitenant DBs); a log mining server is involved, and it is only accessible from the root container (CDB$ROOT).
  • We have to connect as a common user to attach to the log mining server.
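
As an illustration of the first point, here is a minimal sketch of a single Extract capturing from two PDBs. PDB1/PDB2 and their schemas are hypothetical, not part of this lab:

EXTRACT multicap
USERIDALIAS gg_admin
EXTTRAIL ./dirdat/mc
TABLE PDB1.HR.* ;     -- tables from the first PDB
TABLE PDB2.SALES.* ;  -- tables from the second PDB

-- registered once against both containers:
GGSCI> REGISTER EXTRACT multicap DATABASE CONTAINER (PDB1, PDB2)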


             Fig1- Capture and apply process from multiple PDBs (integrated mode).

What will be covered?

In this post we will focus on the following main steps:

I. GoldenGate Environment Configuration
II. Initial load
III. Apply change data 
IV. Bi-directional replication 

I. GoldenGate Environment configuration                                            Go to Top⭡

 
        My lab is based on the source and target systems below. Both contain a similar pluggable database called "PDB".
                                                               Fig2- Summary of the source and target systems

      A- Installation

        
        GoldenGate installation is pretty straightforward. Make sure you specify the right GoldenGate and database homes.

Fig3- GoldenGate installation settings

          Note : If there is only one GoldenGate installation for multiple DBs, you will have to set these variables directly
          in the extract and replicat parameter files, as below:

SETENV (ORACLE_HOME = "path to Oracle home location")
SETENV (ORACLE_SID = "SID")
    

    B-Preparation                                                                                                                         Go to Top⭡

         An extract process for a multitenant database must be created at the root container level with a "common" database  
         user and must be defined to run in the "integrated" capture mode. Replicats, on the other hand, must be created at
         the pluggable database level and can be defined to run in either the "classic" or "integrated" modes.
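
         In GGSCI terms this translates into two different logins (the user and aliases are created below in this section):

GGSCI> DBLOGIN USER c##gg_admin@MTL password ggadmin   -- Extract side: root container, common user
GGSCI> DBLOGIN USER c##gg_admin@PDB password ggadmin   -- Replicat side: directly into the PDB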

        1. On the source system (MONTREAL)     

           » Create source OGG admin User                            

@MTL> CREATE USER c##gg_admin IDENTIFIED BY ggadmin;
@MTL> ALTER USER c##gg_admin QUOTA UNLIMITED ON users;
           - Assign the correct source privileges to the GoldenGate admin user as shown below
@MTL> GRANT CREATE SESSION,CONNECT,RESOURCE,ALTER SYSTEM,SELECT ANY DICTIONARY,FLASHBACK ANY TABLE TO c##gg_admin container=all;
@MTL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'c##gg_admin', container=>'all',privilege_type=>'CAPTURE',grant_select_privileges=>TRUE,
do_grants=>TRUE,grant_optional_privileges=>'*'); -- minimum privs
-- If you want both Capture (extract) and Apply (replicat) privileges
@MTL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GG_ADMIN',container=>'ALL');
@MTL> GRANT dba TO c##gg_admin container=all; -- optional

           » Add supplemental log data for update operations

@MTL> ALTER USER c##gg_admin set container_data=all container=current;
@MTL> GRANT ALTER any table to c##gg_admin container=ALL; --"ADD TRANDATA"
@MTL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
@MTL> ALTER DATABASE FORCE LOGGING;
@MTL> ALTER SESSION SET CONTAINER=PDB;
@MTL> ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA; -- minimum logging
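
           A quick sanity check of the database-level settings (both columns should report YES after the commands above):

@MTL> SELECT supplemental_log_data_min, force_logging FROM v$database;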

        2. On the TARGET system   (TORONTO)

           » Create target OGG admin User                            

@TOR> CREATE USER c##gg_admin IDENTIFIED BY ggadmin;
@TOR> ALTER USER c##gg_admin QUOTA UNLIMITED ON users;
@TOR> GRANT UNLIMITED TABLESPACE TO c##gg_admin container=all; -- sometimes needed
@TOR> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;

           - Assign the right target privileges to the GoldenGate admin user as shown below 

@TOR> GRANT CREATE SESSION,CONNECT,RESOURCE,ALTER SYSTEM,SELECT ANY DICTIONARY TO c##gg_admin container=all;
@TOR> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'c##gg_admin',privilege_type=>'APPLY',grant_select_privileges=>TRUE,do_grants=>TRUE,
grant_optional_privileges=>'*',container=>'PDB'); -- minimum privs
@TOR> GRANT dba TO c##gg_admin container=all;
-- If you want both Capture (extract) and Apply (replicat) privileges on all PDBs (bidirectional)
@TOR> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('c##gg_admin',container=>'ALL');

II. Initial Load                                                                                         Go to Top⭡

        This is basically the first stage, where the extract captures data directly from the source tables without using trail files.
      
         Important: First make sure that the open state of both the source and target PDBs persists across restarts before setting up any replication

@SQL> ALTER SESSION SET CONTAINER=PDB;       --- on Both src/target
@SQL> ALTER PLUGGABLE DATABASE OPEN;
@SQL> ALTER PLUGGABLE DATABASE SAVE STATE;
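
          To verify the saved states, you can query the root container (assuming your user can see DBA_PDB_SAVED_STATES):

@SQL> SELECT con_name, state FROM dba_pdb_saved_states;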

     A- Prepare the target Tables

           » (Re)Create an empty sample schema “SCOTT” on the target PDB  (download script here) 

@TOR> ALTER SESSION SET CONTAINER=PDB;
@TOR> @scott_empty.sql

           Note : You can run the @scott.sql script in the source PDB if the SCOTT schema is not yet created there

     B- Configuration

        1. On the source system (MONTREAL)     
     
           » Configure OGG User alias and schema logging

oracle@montreal:$GG_HOME# ggsci
GGSCI> DBLOGIN USER c##gg_admin@MTL password ggadmin
GGSCI(CDB$ROOT)> ALTER credentialstore add user c##gg_admin@MTL alias gg_admin
GGSCI(montreal/CDB$ROOT)> add schematrandata pdb.scott      -- log extra columns

           Note : This lab assumes that TNS aliases have been created for the source and target CDBs and PDBs:
            @MTL = montreal,  @TOR = toronto,  @PDB = the PDB on each server respectively
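
           For reference, a minimal tnsnames.ora sketch for the Montreal side (the service names cdb1/pdb are assumptions; mirror it on Toronto):

MTL=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=montreal.evilcorp.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1)))
PDB=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=montreal.evilcorp.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb)))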
          
           » Allow incoming access to the Manager process 

GGSCI(montreal/CDB$ROOT)> EDIT PARAMS mgr                                
PORT 7809
accessrule, prog *, ipaddr  192.168.78.55, allow
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPFILES 2, FREQUENCYHOURS 6
GGSCI(montreal/CDB$ROOT)> START mgr

           » Create an initial Extract parameter file 

GGSCI(montreal/CDB$ROOT)> EDIT PARAMS Capture1                                
extract capture1
useridalias gg_admin                             
rmthost toronto.evilcorp.com, mgrport 7809
rmttask replicat, group apply1
SOURCEISTABLE    
SOURCECATALOG PDB -- Specify source PDB
TABLE SCOTT.* ;  -- All tables from Scott schema

           » Add and register the initial extract with the container database

GGSCI(montreal/CDB$ROOT)> REGISTER EXTRACT Capture1 DATABASE CONTAINER (PDB)
GGSCI(montreal/CDB$ROOT)> ADD EXTRACT Capture1, SOURCEISTABLE          

        2. On the TARGET system   (TORONTO)   
                                    
           » Add OGG user alias at PDB level (can be run from root CDB too)

oracle@toronto:$GG_HOME# ggsci 
GGSCI> DBLOGIN USER c##gg_admin@PDB password ggadmin
GGSCI(toronto)>ALTER credentialstore add user c##gg_admin@PDB alias gg_adminPDB

          » Allow incoming access to the Manager process 

GGSCI(toronto/CDB$ROOT)> EDIT PARAMS mgr                                
PORT 7809
accessrule, prog *, ipaddr  192.168.78.54, allow
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPFILES 2, FREQUENCYHOURS 6
GGSCI(toronto/CDB$ROOT)> START mgr

           » Create a Replicat parameter file

GGSCI (toronto/PDB)> EDIT PARAMS apply1                               
replicat apply1
useridalias gg_adminPDB
HANDLECOLLISIONS
SOURCECATALOG PDB
map PDB.SCOTT.*, target PDB.SCOTT.*;        

           » Add the integrated replicat    

GGSCI(toronto/PDB)> ADD REPLICAT apply1, SPECIALRUN

           » Start the extract process on the source system  

GGSCI(montreal/CDB$ROOT)> START EXTRACT capture1

             This will allow the target SCOTT schema to be synchronized in a few seconds.
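
             A quick way to confirm the load is a row count against any loaded table (EMP holds 14 rows in the standard SCOTT schema):

Scott@PDB:Toronto> SELECT count(*) FROM emp;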

             Note: In case the source and target table structures differ, a definitions file generated with the DEFGEN utility is required to handle the mapping
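
             A hedged sketch of that DEFGEN step (the file names are assumptions):

GGSCI(montreal/CDB$ROOT)> EDIT PARAMS defgen
DEFSFILE ./dirdef/scott.def
USERIDALIAS gg_admin
SOURCECATALOG PDB
TABLE SCOTT.*;
-- run it from the OS and ship the output file to the target:
oracle@montreal:$GG_HOME# ./defgen paramfile ./dirprm/defgen.prm
-- then point the replicat at it with: SOURCEDEFS ./dirdef/scott.def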
    

III. Apply Change Data                                                                         Go to Top⭡

       The Extract process will pull data from the PDB and send it across the network to the target system. Once the data is
       written to the target trail files, the integrated Replicat will convert it to LCRs, which are applied to the target PDB by the
       inbound server.
                                                                                Fig4- Integrated Replicat
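
       Once the apply side is running, you can peek at the inbound server on the target (assuming your user can see the DBA_APPLY view the integrated Replicat relies on):

@TOR> SELECT apply_name, status FROM dba_apply;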

     A- Configuration

        1. On the source system (MONTREAL)     

           » Create Integrated Primary extract

GGSCI(montreal/CDB$ROOT)> edit params cap_scot
EXTRACT Cap_scot                      
UserIdAlias gg_admin
LOGALLSUPCOLS -- Capture before image of supplementally logged columns
UPDATERECORDFORMAT COMPACT -- Combines before/after images of UPDATES in trail
DISCARDFILE ./dirrpt/cap_scot.dsc,purge
EXTTRAIL ./dirdat/es                   
SOURCECATALOG PDB
TRANLOGOPTIONS EXCLUDETAG 00 -- To avoid extracting incoming replicated data
DDL INCLUDE MAPPED OBJNAME SCOTT.*
TABLE PDB.SCOTT.*;                 

             Note : TRANLOGOPTIONS EXCLUDETAG is only required for bidirectional replication. I added it here on purpose.
        
            » Add and register the integrated extract with the container database

GGSCI(montreal)> REGISTER EXTRACT cap_scot DATABASE CONTAINER(PDB)
GGSCI(montreal)> ADD EXTRACT cap_scot, INTEGRATED TRANLOG, BEGIN NOW

           » Create a trail for the Extract group and start it

GGSCI(montreal)> ADD EXTTRAIL ./dirdat/es, EXTRACT cap_scot
GGSCI(montreal)> START EXTRACT cap_scot

           » Create a SECONDARY EXTRACT DATA PUMP (recommended).
             This optional component reads the local trail on behalf of the primary extract and moves it over the network.

GGSCI(montreal)> edit params e_pump
EXTRACT e_pump
PASSTHRU                                           
PASSTHRUMESSAGES                                   
RMTHOST toronto.evilcorp.com ,MGRPORT 7809         
RMTTRAIL  ./dirdat/rp                              
TABLE PDB.SCOTT.* ;

           » Create and start a data pump extract

GGSCI(montreal)> ADD EXTRACT e_pump, EXTTRAILSOURCE ./dirdat/es, BEGIN NOW
GGSCI(montreal)> ADD RMTTRAIL ./dirdat/rp, EXTRACT e_pump
GGSCI(montreal)> START EXTRACT e_pump

        2. On the Target system (TORONTO)   
          
           » Create a Replicat parameter file

GGSCI(TORONTO)> edit params rep_scot
replicat rep_scot
useridalias gg_adminPDB
discardfile ./dirrpt/rep_scot.dsc, append
discardrollover on sunday
SOURCECATALOG PDB
HANDLECOLLISIONS
map PDB.SCOTT.*, target PDB.SCOTT.*;

            Note: There is no need to create a checkpoint table for an integrated replicat unless Data Guard is configured
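
            Should you want one anyway (e.g. with Data Guard), a minimal sketch, the table name being an assumption:

GGSCI(toronto/PDB)> DBLOGIN USERIDALIAS gg_adminPDB
GGSCI(toronto/PDB)> ADD CHECKPOINTTABLE c##gg_admin.ggchkpt
-- then reference it when adding the replicat:
-- ADD REPLICAT rep_scot, integrated exttrail ./dirdat/rp, CHECKPOINTTABLE c##gg_admin.ggchkpt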
          » Add and start the integrated replicat      

GGSCI(TORONTO)> ADD REPLICAT rep_scot, integrated exttrail ./dirdat/rp
GGSCI(TORONTO)> START rep_scot

            - Remote trail prefix should be the same as specified in the data pump parameter file (rp)        

          » We can now verify that the data is replicated to the target by performing a little update on the source database  

Scott@PDB:Montreal> Update salgrade set HISAL=1000000 where grade=5; 
Scott@PDB:montreal> Commit;
-- Check the rows in target Database
Scott@PDB:Toronto> Select * from salgrade;
   GRADE      LOSAL      HISAL
---------- ---------- ----------
         1        700       1200
         2       1201       1400
         3       1401       2000
         4       2001       3000
         5       3001    1000000 ---> update replicated (initial value 9999)


IV. BI-Directional Replication (active-active)                                     Go to Top⭡

    When implementing a bidirectional configuration, you must consider the following areas to avoid data integrity issues:
         • Loop detection (ping-pong data behavior)
         • Conflict detection and resolution (CDR) when the same column is modified at the same time on both systems
         • Oracle triggers (replicated DML firing triggers on the target system)

       GoldenGate 12c already handles the above issues (a sketch follows this list):
         - The TRANLOGOPTIONS EXCLUDETAG parameter makes Extract ignore the Replicat's tagged redo (no loops)
         - The LOGALLSUPCOLS parameter supports CDR
         - Integrated Replicat doesn’t require disabling DML triggers on the target system.
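
       To illustrate the loop-prevention part: apply sessions tag the redo they generate (tag 00 by default, settable with DBOPTIONS SETTAG in the Replicat), and each Extract excludes that tag, so a change applied on one side is never captured again:

-- Replicat (both sides): tag the redo generated by apply (00 is already the default)
DBOPTIONS SETTAG 00
-- Extract (both sides): skip redo carrying that tag
TRANLOGOPTIONS EXCLUDETAG 00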

A- Configuration

       As Extract and Replicat processes will now run on both servers, I will refer to Toronto as the source and Montreal as the target.

       1. On the source system (TORONTO)   

          » Add supplemental log data for update operations

@TOR> ALTER user c##gg_admin set container_data=all container=current;
@TOR> GRANT ALTER any table to c##gg_admin container=ALL; --"ADD TRANDATA"
@TOR> ALTER DATABASE FORCE LOGGING;
@TOR> ALTER SESSION SET CONTAINER=PDB;
@TOR> ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA; -- minimum logging

          » Configure OGG User alias and schema logging (root level)

GGSCI > DBLOGIN USER c##gg_admin@TOR password ggadmin 
GGSCI(CDB$ROOT)> ALTER credentialstore add user c##gg_admin@TOR alias gg_admin
GGSCI(Toronto/CDB$ROOT)> add schematrandata pdb.scott --log extra columns

          » Verify that the right source privileges are granted to the GoldenGate admin user

-- Run this command if it hasn’t been run yet in the new source (Toronto)
@TOR> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GG_ADMIN',container=>'ALL');
          » Create Integrated Primary extract

GGSCI(toronto)> edit params to_scot
EXTRACT to_scot                      
UserIdAlias gg_admin                  
LOGALLSUPCOLS -- Capture before image of supplemental log columns
UPDATERECORDFORMAT COMPACT -- Combines before/after images of UPDATES in trail
DISCARDFILE ./dirrpt/to_scot.dsc,purge
EXTTRAIL ./dirdat/et                   
SOURCECATALOG PDB
TRANLOGOPTIONS EXCLUDETAG 00     -- Avoid extracting incoming replicated data
DDL INCLUDE MAPPED OBJNAME SCOTT.*
DDLOPTIONS REPORT                  
TABLE PDB.SCOTT.*;              

          » Add and register the integrated extract with the container database

GGSCI(toronto)> REGISTER EXTRACT to_scot DATABASE CONTAINER(PDB)
GGSCI(toronto)> ADD EXTRACT to_scot, INTEGRATED TRANLOG, BEGIN NOW

          » Create a trail for the Extract group and start it

GGSCI(toronto)> ADD EXTTRAIL ./dirdat/et, EXTRACT to_scot
GGSCI(toronto)> START EXTRACT to_scot

          » Create a SECONDARY EXTRACT DATA PUMP
               Optional, but why not :)

GGSCI(toronto)> edit params to_epump
  EXTRACT to_epump
  PASSTHRU
  PASSTHRUMESSAGES
  RMTHOST montreal.evilcorp.com ,MGRPORT 7809
  RMTTRAIL  ./dirdat/rm
  TABLE PDB.SCOTT.* ; 

           » Create and start a data pump extract

GGSCI(toronto)> ADD EXTRACT to_epump, EXTTRAILSOURCE ./dirdat/et, BEGIN NOW
GGSCI(toronto)> ADD RMTTRAIL ./dirdat/rm, EXTRACT to_epump
GGSCI(toronto)> START EXTRACT to_epump

       2. On the Target system (MONTREAL)   

          » Verify that the right target privileges are granted to the GoldenGate admin user

-- Run the below commands if they haven’t been run yet in the new target (Montreal)
@MTL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GG_ADMIN',container=>'ALL');
@MTL> GRANT dba TO c##gg_admin container=all;

          » Add the OGG user alias (PDB level)

GGSCI(montreal)> ALTER credentialstore add user c##gg_admin@PDB alias gg_adminPDB

           » Create a Replicat parameter file

GGSCI(gg_admin@montreal)> EDIT PARAMS r_mscot
replicat r_mscot                                    
useridalias gg_adminPDB                               
discardfile ./dirrpt/rep_mscot.dsc, purge, megabytes 10
discardrollover on sunday
HANDLECOLLISIONS   
map PDB.SCOTT.*, target PDB.SCOTT.*;

          » Add and start the integrated replicat      

GGSCI(gg_admin@montreal)> ADD REPLICAT r_mscot, integrated exttrail ./dirdat/rm
GGSCI(gg_admin@montreal)> START r_mscot

            - Remote trail prefix should be the same as specified in the data pump parameter file (rm)    

B- Test & Monitoring                                                                                                                 Go to Top⭡

           » Let’s verify that data is now replicated in both directions via an insert on the new source and a DDL on the target PDB

- 1) From TOR to MTL
Scott@PDB:Toronto> Insert into DEPT values(50,'MARKETING','LOS ANGELES');
Scott@PDB:Toronto> Commit;
-- Check the rows in target Database
Scott@PDB:Montreal> Select * from DEPT;
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        50 MARKETING      LOS ANGELES --> new row replicated

- 2) From MTL to TOR
Scott@PDB:Montreal> alter table BONUS add (Bonus_date date);
-- check
Scott@PDB:Toronto> DESC BONUS
Name              Type
----------------- ------------
ENAME             VARCHAR2(10)
JOB               VARCHAR2(9)
SAL               NUMBER
COMM              NUMBER
BONUS_DATE        DATE

           » Last but not least: here are a few GGSCI commands that help monitor the status of the replication processes.

-- Info related commands
GGSCI> INFO [EXTRACT|REPLICAT|MGR|EXTTRAIL|RMTTRAIL path], DETAIL
-- Status and reports
GGSCI(montreal/CDB$ROOT)> STATS | STATUS [EXTRACT|REPLICAT|MGR]
GGSCI(montreal/CDB$ROOT)> VIEW REPORT [process]
GGSCI(montreal/CDB$ROOT)> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
----------- ----------- ----------- ------------- -----------------
MANAGER     RUNNING
EXTRACT     RUNNING     CAP_SCOT    00:00:04      00:00:08
EXTRACT     RUNNING     E_PUMP      00:00:00      00:00:05
REPLICAT    RUNNING     R_MSCOT     00:00:00      00:00:02
GGSCI(Toronto/CDB$ROOT)> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
----------- ----------- ----------- ------------- -----------------
MANAGER     RUNNING
EXTRACT     RUNNING     TO_EPUMP    00:00:00      00:00:09
EXTRACT     RUNNING     TO_SCOT     00:00:10      00:00:08
REPLICAT    RUNNING     REP_SCOT    00:00:00      00:00:05
GGSCI(montreal/CDB$ROOT)> info CAP_SCOT        --> Extract A
EXTRACT    CAP_SCOT  Last Started 2019-09-07 17:59   Status RUNNING
Checkpoint Lag       00:00:05 (updated 00:00:03 ago)
Process ID           20232
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2019-09-07 18:08:17
                     SCN 0.2590348 (2590348)
GGSCI(Toronto/CDB$ROOT)> info REP_SCOT         --> Replicat A
REPLICAT   REP_SCOT  Last Started 2019-09-07 17:59   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Process ID           20244
Log Read Checkpoint  File /u01/app/oracle/product/12.3/ogg/dirdat/rp000000003
                     2019-09-07 17:59:29.594319  RBA 1636
GGSCI(Toronto/CDB$ROOT)> info TO_SCOT          --> Extract B
EXTRACT    TO_SCOT   Last Started 2019-09-07 18:00   Status RUNNING
Checkpoint Lag       00:00:10 (updated 00:00:03 ago)
Process ID           20270
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2019-09-07 18:06:49
                     SCN 0.2484027 (2484027)
GGSCI(montreal/CDB$ROOT)> info R_MSCOT         --> Replicat B
REPLICAT   R_MSCOT   Last Started 2019-09-07 18:01   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:01 ago)
Process ID           20309
Log Read Checkpoint  File /u01/app/oracle/product/12.3/ogg/dirdat/rm000000005
                     2019-09-07 18:00:16.975866  RBA 1632
                                                                                                                                                                           Go to Top⭡

Sunday, September 15, 2019

Vagrant: Create a Red Hat 8 base box packaged for Oracle 19c (Vbox)


Intro

Vagrant is an open-source tool for working with virtual environments (VirtualBox, Hyper-V, VMware, etc.). It provides a simple command-line client for managing these environments, using a text file (Vagrantfile) defining the attributes of each environment.
To know more about Vagrant you can try their getting started guide or consult the Vagrant official documentation.

Vagrant Cloud already offers a plethora of existing boxes, from Windows to BSD (including a number of Linux distros). But in case you are looking for a specific setup for your builds, you may need a custom box.
In this article we will create a prepackaged Vagrant box from an existing RedHat 8 base box, then upload/publish it to the Vagrant Cloud.

Note: This box will include the packages necessary to install Oracle 19c Database or Grid Infrastructure software on RHEL 8. Feel free to use it in your Vagrant builds (see link at the bottom of this post).

Pre-requisites

To create a Vagrant box from scratch there are a few requirements to respect regarding the virtual machine:

  • The first network interface (adapter 1) must be a NAT adapter. Vagrant uses it to connect to the machine the first time.

  • VirtualBox Guest Additions must be installed so that things such as shared folders can function.

However, these steps won’t be necessary here since we are customizing an already existing base box.
If you still want to know more about doing this from a new VM, check out Tim Hall’s example.
This tutorial is divided into two parts: I. Create the Oracle 19c ready Vagrant box, II. Share it online.

I. Create the Oracle 19c ready Vagrant box from an existing one

1. Import a generic RHEL8 base Box in the local vagrant environment

   
  In a new location on your computer, initialize the current directory to be a Vagrant environment and reference the
  target online RedHat 8 base box in the created Vagrantfile.


- Open a cmd box in your system and run the following commands

D:\VM\vagrant> vagrant init generic/rhel8                                                

- Startup the vm 

D:\VM\vagrant> vagrant up

- Stop the vm and attach the RHEL8 iso file to the optical drive in VirtualBox

D:\VM\vagrant> vagrant halt  

- Restart the vm                                      

D:\VM\vagrant> vagrant up                         

2. Create a local yum repository

[root@linuxtechi-rhel8 ~]# mount /dev/cdrom /media                           
[root@linuxtechi-rhel8 ~]# vi /etc/yum.repos.d/rhel8.repo     

[InstallMedia-BaseOS]                                                       
name=Red Hat Enterprise Linux 8 - BaseOS                                    
metadata_expire=-1                                                          
gpgcheck=0                                                                  
enabled=1                                                                   
baseurl=file:///media/BaseOS/                                               

[InstallMedia-AppStream]                                                    
name=Red Hat Enterprise Linux 8 - AppStream                                 
metadata_expire=-1                                                          
gpgcheck=0                                                                  
enabled=1                                                                   
baseurl=file:///media/AppStream/                                     

**2.1: If you don't have the .iso file at your disposal, you can use the online RedHat 8 Beta repository to build your
   repo (see below). Otherwise skip to the next step.

[root@linuxtechi-rhel8 ~]# vi /etc/yum.repos.d/rhel8.repo     
[rhel-8-for-x86_64-baseos-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
baseurl = https://downloads.redhat.com/redhat/rhel/rhel-8-beta/baseos/x86_64/
enabled = 1
gpgcheck = 0

[rhel-8-for-x86_64-appstream-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
baseurl = https://downloads.redhat.com/redhat/rhel/rhel-8-beta/appstream/x86_64/
enabled = 1
gpgcheck = 0
- Remove the Red Hat subscription warning (unable to read consumer identity) 
[root@linuxtechi-rhel8 ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0
- Clean subscription data
[root@linuxtechi-rhel8 ~]# subscription-manager clean
- Clear the repository cache by running the following command
[root@linuxtechi-rhel8 ~]# dnf clean all    # or: yum clean all
- Verify whether YUM/DNF is getting packages from the local repo
[root@linuxtechi-rhel8 ~]# dnf repolist     # or: yum repolist
Red Hat Enterprise Linux 8 - AppStream      7.1 MB/s | 5.3 MB     00:00
Red Hat Enterprise Linux 8 - BaseOS          24 MB/s | 2.2 MB     00:00
Last metadata expiration check: 0:00:02 ago on Fri 23 Aug 2019 03:40:20 PM UTC.
repo id                  reponame                                status
InstallMedia-AppStream       Red Hat Enterprise Linux 8 - AppStream     4,672
InstallMedia-BaseOS          Red Hat Enterprise Linux 8 - BaseOS        1,658

- Download and install the 19c preinstall rpm package provided by Oracle (the same goes for any other Oracle preinstall version)

[root@linuxtechi-rhel8 ~]# curl -o oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
[root@linuxtechi-rhel8 ~]# yum -y localinstall oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
Error:
  Problem: conflicting requests
- nothing provides compat-libcap1 needed by oracle-database-preinstall-19c-1.0-1.el7
- nothing provides compat-libstdc++-33 needed by oracle-database-preinstall-19c-1.0-1.el7

Solution: The above error occurred because the two mentioned RPMs are missing from the Red Hat 8 package base. We will
have to install them manually before the 19c preinstall package.
[root@linuxtechi-rhel8]# curl -o compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
[root@linuxtechi-rhel8 ~]# curl -o compat-libcap1-1.10-7.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/compat-libcap1-1.10-7.el7.x86_64.rpm
[root@linuxtechi-rhel8]# rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
[root@linuxtechi-rhel8]# rpm -ivh compat-libcap1-1.10-7.el7.x86_64.rpm   
- Rerun yum install on the 19c preinstall package
[root@linuxtechi-rhel8 ~]# yum -y localinstall oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
Installed products updated.
Installed:
  oracle-database-preinstall-19c-1.0-1.el7.x86_64   ksh-20120801-252.el8.x86_64
  libICE-1.0.9-13.el8.x86_64                        libSM-1.2.3-1.el8.x86_64
  libX11-1.6.7-1.el8.x86_64                         libX11-common-1.6.7-1.el8.noarch
  libX11-xcb-1.6.7-1.el8.x86_64                     libXau-1.0.8-13.el8.x86_64
  libXcomposite-0.4.4-14.el8.x86_64                 libXext-1.3.3-9.el8.x86_64
  libXi-1.7.9-7.el8.x86_64                          libXinerama-1.1.4-1.el8.x86_64
  libXmu-1.1.2-12.el8.x86_64                        libXrandr-1.5.1-7.el8.x86_64
  libXrender-0.9.10-7.el8.x86_64                    libXt-1.1.5-8.el8.x86_64
  libXtst-1.2.3-7.el8.x86_64                        libXv-1.0.11-7.el8.x86_64
  libXxf86dga-1.1.4-12.el8.x86_64                   libXxf86misc-1.0.4-1.el8.x86_64
  libXxf86vm-1.1.4-9.el8.x86_64                     libdmx-1.1.4-3.el8.x86_64
  libxcb-1.13-5.el8.x86_64                          xorg-x11-utils-7.5-28.el8.x86_64
  xorg-x11-xauth-1:1.0.9-12.el8.x86_64              bc-1.07.1-5.el8.x86_64
  gssproxy-0.8.0-5.el8.x86_64                       keyutils-1.5.10-6.el8.x86_64
  libaio-devel-0.3.110-12.el8.x86_64                libverto-libevent-0.3.0-5.el8.x86_64
  nfs-utils-1:2.3.3-14.el8.x86_64                   quota-1:4.04-10.el8.x86_64
  quota-nls-1:4.04-10.el8.noarch                    rpcbind-1.2.5-3.el8.x86_64
  smartmontools-1:6.6-3.el8.x86_64                  unzip-6.0-41.el8.x86_64

- Add a few more packages that will be needed for future Oracle installations and builds

# New for OL8/RHEL8
[root@rhel8 ~]# yum install libnsl
# Other rpms
[root@rhel8 ~]# yum install bind sysstat unixODBC unixODBC-devel binutils zip dnsmasq
- Add the default Vagrant ssh public key to avoid ssh access issues after packaging your box
[root@rhel8 ~]# wget --no-check-certificate https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -O .ssh/authorized_keys
[root@rhel8 ~]# shutdown -h now

3. Package the box using Vagrant

  Once the VM is created and the Vagrant configuration is complete, we can turn the VM into a Vagrant box using the "vagrant
  package" command.

cd D:\VM
D:\VM> vagrant package --base rhel8 --output rhel8_ora.box
==> rhel8: Exporting VM...
==> rhel8: Compressing package to: D:/VM/rhel8_ora.box
- You can now add it locally to the list of available boxes.
D:\VM> vagrant box add D:/VM/rhel8_ora.box --name scofieldd/rhel8_ora
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'scofieldd/rhel8_ora' (v0) for provider:
     box: Unpacking necessary files from: file:///D:/VM/rhel8_ora.box
     box: Progress: 100% (Rate: 437M/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'scofieldd/rhel8_ora' (v0) for 'virtualbox'!
- Check if the newly added base box is listed
D:\VM> vagrant box list
generic/rhel8         (virtualbox, 1.9.22)
scofieldd/rhel8_ora   (virtualbox, 0)

II. Upload your custom boxes via the Vagrant Cloud Web Interface:

   After you've created the .box file, the following steps can be followed (the box name is rhel8_ora, not rhel8_ora1).
                                                                                                                         
  1. Go to the Create Box page and name the box and give it a simple description

                
    
   2. Create your first version for the box. This version must match the format [0-9].[0-9].[0-9]                             

                                                                                                  
   3. Create a release version and a provider for the box which is virtualbox. 

                                                                                                                     
    4. Upload the created rhel8_ora.box file for your provider (Virtualbox)                     

 

                                                                                                             
    You can now find your new box in the Vagrant section of Vagrant Cloud.

   To try this box, just rerun the vagrant commands using the new box name (scofieldd/rhel8_ora).
 
   1. Initialize the Vagrant file

D:\VM\vagrant> vagrant init scofieldd/rhel8_ora  
D:\VM\vagrant> vagrant up
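
- To quickly confirm the box works, you can check the OS release from inside the guest (an illustrative one-liner):

D:\VM\vagrant> vagrant ssh -c "cat /etc/redhat-release"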
I’ll share Oracle builds for this box in my next blog post. Stay tuned ;)

Thursday, August 15, 2019

Clone a RAC 12c VM environment with Virtualbox (import/export)

 

Intro

Have you ever finished a lab on your laptop with several virtual machines connected to each other and thought “Well, that was neat! But how can I create a backup on my external hard drive and run it again at work or on any other computer?” The answer lies in this very article. For those of you who have finished a RAC environment lab (e.g. RAC Attack) in VirtualBox, saving images of the created Oracle RAC system and handing them over to another location for a restore can be done in a matter of minutes!
As a matter of fact, Oracle VM VirtualBox can import and export virtual machines in Open Virtualization Format (OVF), an industry-standard format that we are going to use in this tutorial.

This tutorial will describe the steps to export then import a RAC environment in Virtualbox through import/export Appliance tools and other commands.

Considerations

The export of a VM is a straightforward process, and saving RAC images would be an easy task if it weren’t for the shared ASM disks. The ASM disks therefore needed a bit more care than the guest systems, but all was done in a timely manner. Furthermore, make sure your new computer’s VirtualBox Host-Only Ethernet Adapter has the same IP segment as your VMs before importing them.
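
A quick way to check and, if needed, set that on the new host (the adapter name below is VirtualBox's default and may differ on your machine):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" list hostonlyifs
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.78.1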

Note:
The directory path names used during the export/import (C:\, D:\) should be adapted to your own environment.

Follow the steps below, and try the GUI alternatives if you find them easier than the command-line options.


1. Export Shared disks

     - Detach the disks from RAC VMs:

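A command-line equivalent of the detach, assuming the "SATA" controller and port layout used in step 6 (repeat for ports 1-5 on london1 and london2):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 1 --device 0 --medium none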
     - Export them to a new location
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk "C:\VM\RAC lab\london1\asm_1.vdi" "H:\OS\ORACLE\Lab Oracle\asm_1.vdi" --format VDI --variant Fixed
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk "C:\VM\RAC lab\london1\asm_2.vdi" "H:\OS\ORACLE\Lab Oracle\asm_2.vdi" --format VDI --variant Fixed
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk "C:\VM\RAC lab\london1\asm_3.vdi" "H:\OS\ORACLE\Lab Oracle\asm_3.vdi" --format VDI --variant Fixed
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk "C:\VM\RAC lab\london1\asm_4.vdi" "H:\OS\ORACLE\Lab Oracle\asm_4.vdi" --format VDI --variant Fixed
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk "C:\VM\RAC lab\london1\asm_5.vdi" "H:\OS\ORACLE\Lab Oracle\asm_5.vdi" --format VDI --variant Fixed

  Note: You can also start the Disk Copying Wizard, which is the GUI equivalent of the above.


2. Export the VMs to ovf templates

     - List VMs

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exelist vms
"london1" {45ae6298-7e95-4ef1-864d-85c995cb46ff}                             
"london2" {3c6a0e27-74f0-47ea-b3d7-ac9c253bb03b}                             

     - Export the VMs
    The OVA extension encapsulates all the OVF folder content (.ovf, .mf, *.vhd or *.vmdk) into a single zipped file

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" export "london1" -o  "H:\OS\ORACLE\Lab Oracle\london1.ova"  --ovf10     
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" export "london2" -o  "H:\OS\ORACLE\Lab Oracle\london2.ova"  --ovf10                         

  Note: You can also start the Export Appliance Wizard for each VM, which is the GUI equivalent of the above commands.


3. Copy the shared disks and exported VMs to a new Host



4. Import both VMs

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" import "D:\VM\Racattack\london1.ova"     --options keepallmacs 
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" import "D:\VM\Racattack\london2.ova"     --options keepallmacs   
Note: You can also start the Import Appliance Wizard for each VM, which is the GUI equivalent of the above commands.
 *** Make sure your new computer’s VirtualBox Host-Only Ethernet Adapter has the same IP segment as your VMs (example in my lab: 192.168.78.1)


5. Change asm disks to shared

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk "D:\VM\Racattack\asm_1.vdi" –type shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk "D:\VM\Racattack\asm_2.vdi"  --type shareable "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk "D:\VM\Racattack\asm_3.vdi"  --type shareable                          
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk "D:\VM\Racattack\asm_4.vdi"  --type shareable                          
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk "D:\VM\Racattack\asm_5.vdi"  --type shareable                          

   Note: You can also do the above through Virtual Media Manager  (GUI equivalent).

6. Attach the asm disks to both RAC VMs

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 1  --device 0 --type hdd --medium "D:\VM\Racattack\asm_1.vdi" --mtype shareable  
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 2  --device 0 --type hdd --medium "D:\VM\Racattack\asm_2.vdi" --mtype shareable                                 
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 3  --device 0 --type hdd --medium "D:\VM\Racattack\asm_3.vdi" --mtype shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 4  --device 0 --type hdd --medium "D:\VM\Racattack\asm_4.vdi" --mtype shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london1 --storagectl "SATA" --port 5  --device 0 --type hdd --medium "D:\VM\Racattack\asm_5.vdi" --mtype shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london2 --storagectl "SATA" --port 1  --device 0 --type hdd --medium "H:\OS\ORACLE\Lab Oracle\asm_1.vdi" --mtype shareable       
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london2 --storagectl "SATA" --port 2  --device 0 --type hdd --medium "D:\VM\Racattack\asm_2.vdi" --mtype shareable      
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london2 --storagectl "SATA" --port 3  --device 0 --type hdd --medium "D:\VM\Racattack\asm_3.vdi" --mtype shareable      
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london2 --storagectl "SATA" --port 4  --device 0 --type hdd --medium "D:\VM\Racattack\asm_4.vdi" --mtype shareable      
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach london2 --storagectl "SATA" --port 5  --device 0 --type hdd --medium "D:\VM\Racattack\asm_5.vdi" --mtype shareable

7. Reconfigure the ASM disks on each RAC node

   Since the udev device mapping configuration somehow got lost during the import, we will have to redo it.
    On each VM do the following:

7.1 Create partitions on the attached disks
   - As root, use the fdisk command to partition the attached disks. Repeat the steps below for each disk (sdb, sdc, sdd,
   sde, sdf).

 [root@london1 ~]# fdisk /dev/sdb     -- repeat for sdc, sdd, sde and sdf
      > n     -- new partition
      > p     -- primary
      > 1     -- partition number 1
      > w     -- write the changes
      [root@london1 ~]# ls -l /dev/sd?1
      brw-rw---- 1 root disk 8,  1 Jul 19 05:34 /dev/sda1
      brw-rw---- 1 root disk 8, 17 Jul 19 05:52 /dev/sdb1
      brw-rw---- 1 root disk 8, 33 Jul 19 05:53 /dev/sdc1
      brw-rw---- 1 root disk 8, 49 Jul 19 05:54 /dev/sdd1
      brw-rw---- 1 root disk 8, 65 Jul 19 05:54 /dev/sde1
      brw-rw---- 1 root disk 8, 81 Jul 19 05:54 /dev/sdf1

   

7.2 Verify scsi_id configuration 

   - The content of /etc/scsi_id.config should include the option -g for the scsi_id command to return a UUID for the
    shared devices. If not, run the below (one line):

printf  "options=-g --whitelisted --replace-whitespace"  > /etc/scsi_id.config
   - Check that the symbolic link for scsi_id is still there (RHEL 7 only)
ln -s  '/usr/lib/udev/scsi_id'   '/sbin/scsi_id'
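
   - You can test it right away; each shared disk should return an identifier (the output will differ on your system):

[root@london1 ~]# /sbin/scsi_id -g -u -d /dev/sdb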

7.3 Rebuild the udev rules in the /etc/udev/rules.d/99-oracle-asmdevices.rules file

  - Run the following script as root

i=1
cmd="/sbin/scsi_id -g -u -d"
for disk in sdb sdc sdd sde sdf; do
cat <<EOF >> /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="$cmd /dev/\$parent", \
RESULT=="`$cmd /dev/$disk`", SYMLINK+="asm-disk$i", OWNER="grid", GROUP="dba", MODE="0660"
EOF
i=$(($i+1))
done
7.4 Reload the udev rules and restart udev:
# /sbin/partprobe /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# /sbin/udevadm test /block/sdb/sdb1
# /sbin/udevadm test /block/sdc/sdc1
# /sbin/udevadm test /block/sdd/sdd1
# /sbin/udevadm test /block/sde/sde1
# /sbin/udevadm test /block/sdf/sdf1
# /sbin/udevadm control --reload-rules
   - Check the generated symlinks
[root@london1 ~]# ls -l /dev/asm-*
brw-rw---- 1 grid dba 8, 17 Jul 19 07:28 /dev/asm-disk1
brw-rw---- 1 grid dba 8, 33 Jul 19 07:28 /dev/asm-disk2
brw-rw---- 1 grid dba 8, 49 Jul 19 07:28 /dev/asm-disk3
brw-rw---- 1 grid dba 8, 65 Jul 19 07:28 /dev/asm-disk4
brw-rw---- 1 grid dba 8, 81 Jul 19 07:28 /dev/asm-disk5

8. Restart the clusterware on both new VMs, and voilà!

image