Friday, July 10, 2020

How to Launch an OCI instance with Ansible roles (Always Free)...

Intro

Today, we will conclude our "Oracle Cloud automation" journey by deploying an instance using Ansible (roles). We will demonstrate how an Always Free compute instance can be launched and accessed through SSH thanks to the OCI Ansible Cloud Modules. This tutorial, along with the previous Terraform and OCI-CLI shell labs, will also help you prepare for the OCI Operations Associate exam (1Z0-1067). As usual, the playbooks used in this lab can be found in my GitHub repo.

NOTE >> : If you want to skip the concepts and get started with the lab directly, just click on Ansible Setup.
- Content :
Overview and Concepts
   I. Ansible Setup
  II. Clone repository
 III. Deployment
  IV. Bummer

Overview and Concepts

                                                              Configure once, deploy everywhere!…
Ansible
A deployment and configuration management platform that automates storage, servers, and networking. It can run in the cloud, on dedicated servers, or even locally on your own machine. Today's focus will be on its cloud provisioning capabilities.

The OCI Ansible cloud modules
A project created by Oracle to help OCI users provision, configure, and manage OCI infrastructure using Ansible.

What I have tweaked

I took a sample playbook and adapted it to use OCI Ansible roles instead of modules, because roles are cooler :D.
You'll see that installing roles from ansible-galaxy (think of it as a yum repo for Ansible) is far simpler than the cumbersome manual module install.


Topology

The following illustration shows the layers involved between your workstation and OCI while running Ansible playbooks.

[Image: Ansible/OCI topology diagram (image-4.png)]

Besides introducing my GitHub repo, I'd like to discuss a few principles before starting this tutorial.

Terminology

Idempotence: an operation can be applied multiple times without changing the result beyond the initial application. Ansible
   checks whether a task has already been done before applying it, to avoid duplicating effort (example: running yum install twice).

Immutable: (i.e. Terraform) no change is expected once our desired resource's end state is deployed. If a change is needed, the
  resource is simply wiped out and recreated with the new settings. No more in-place troubleshooting (VMs, load balancers).

Mutable: (i.e. Ansible) the resource keeps mutating after deployment. We mutate our stack,
  modifying it in place, to reach the new desired configuration.
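
To make idempotence concrete, here is a minimal sketch of a play (hypothetical, not part of this lab's repo): the first run installs httpd and reports "changed", while a second run finds it already present and reports "ok".

   ---
   # idempotence_demo.yml -- hypothetical example for illustration only
   - hosts: localhost
     become: yes
     tasks:
       - name: Ensure httpd is installed   # safe to run any number of times
         yum:
           name: httpd
           state: present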

Ansible Features
- Human readable
- No special coding skills needed
- Tasks are executed in order, defined in playbooks (YAML files), and run across multiple hosts
- Agentless: uses OpenSSH/WinRM, with no agent to deploy or manage
- Modules/Roles: discrete units of code (large community library)
- Uses an inventory of target resources to run ad-hoc commands or playbooks through SSH
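
As a quick illustration, a minimal YAML inventory and an ad-hoc command against it could look like this (hostnames are hypothetical):

   # inventory.yml -- hypothetical target hosts
   all:
     hosts:
       web1.example.com:
       web2.example.com:
     vars:
       ansible_user: opc   # default SSH user on OCI instances

You would then test SSH connectivity with: ansible all -i inventory.yml -m ping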

Learn the hard way
- Unlike Terraform, which is stateful, Ansible is stateless: it does not keep track of previous executions.
- The code is applied in real time from a control machine, and there is no easy way to roll back changes once they have
   started (the half-baked deployment syndrome when a run fails before the end).
- YAML files are extremely sensitive to indentation, so beware.

Comparison
A little heads-up on the main differences between the automation solutions in OCI:

[Image: OCI automation tools comparison (image-3.png)]


  • Ansible resource definition syntax : here an oci_vcn task that will create a VCN, then print its OCID using set_fact (note that register sits at the task level, not among the module arguments)
    • - name: Create a VCN
        oci_vcn:
          compartment_id: "{{ instance_compartment }}"
          display_name: "{{ vcn_name }}"
          cidr_block: "{{ vcn_cidr_block }}"
          dns_label: "{{ vcn_dns_label }}"
          config_profile_name: "{{ config_profile }}"
        register: result
      - set_fact:
          vcn_id: "{{ result.vcn.id }}"


    I. Ansible setup                                                                                  Go to Top⭡

    Currently, Ansible can be run from any machine with Python 2.7 or Python 3.5 (or higher) installed. This includes popular Linux distros (Red Hat, Debian, CentOS, BSD) and macOS. Windows is not supported as a control node. Therefore, I will use the Windows Subsystem for Linux (WSL) as the control node here.

      1. Install Ansible

      1- Install Ansible
      # RHEL:
      sudo yum install ansible
      # Ubuntu:
      sudo apt install python3-pip      ---> if pip isn't installed yet
      sudo apt-add-repository ppa:ansible/ansible
      sudo apt-get update
      sudo apt-get install ansible
      2- Install the OCI Python SDK
      # Ubuntu: sudo apt install python-pip   [RHEL: sudo yum install python-pip]
      pip install oci
    • Once installed, run the version command to validate your installation

      $ ansible --version
        ansible 2.9.7
        config file = /etc/ansible/ansible.cfg  

         2. Install the OCI Ansible roles from Ansible Galaxy

      As mentioned before, this deployment only works if you install the oci-ansible roles from ansible-galaxy,
      since I adapted the playbooks to use roles instead of the original modules available in the official GitHub repo.

      --- automatic galaxy install ---
      The default role path is the first writable directory configured in DEFAULT_ROLES_PATH (~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles)

      $ ansible-galaxy install oracle.oci_ansible_modules --roles-path=/etc/ansible/roles
      $ ansible-galaxy list
      - oracle.oci_ansible_modules, v1.18.0   PATH : ~/.ansible/roles/oracle.oci_ansible_modules

        3. Authentication with OCI

      - With Ansible and the OCI cloud modules (roles) installed, our lab still requires an IAM user with its API signing key.
      - You'll need to configure OCI-CLI so Ansible can authenticate to your OCI account via the config file (~/.oci/config)
      - Environment variables can also be defined before running your playbooks (i.e. SAMPLE_COMPARTMENT_OCID)
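
      For reference, a typical ~/.oci/config profile as generated by "oci setup config" looks like this (values are placeholders):

        [DEFAULT]
        user=ocid1.user.oc1..xxxx
        fingerprint=20:3b:97:13:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
        key_file=~/.oci/oci_api_key.pem
        tenancy=ocid1.tenancy.oc1..xxxx
        region=ca-toronto-1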


    II. Clone the repository                                                                       Go to Top⭡


    • Pick an area on your file system and issue the following commands.
        $ git clone https://github.com/brokedba/ansible-examples.git
        $ cd ansible-examples/oci-ansible/launch_free_instance/
        $ tree .
          ├── a_env_vars
          ├── check_network.yml   --- test playbook to check the setup beforehand
          ├── check_shapes.yaml
          ├── sample.yaml         --- main playbook that'll create the vm
          ├── setup.yaml          --- child playbook that'll create the vcn etc.
          ├── teardown.yaml       --- destroy playbook
          └── templates
              ├── egress_security_rules.yaml.j2   --- egress security list template
              └── ingress_security_rules.yaml.j2  --- ingress security list template

      Repo content:

        • a_env_vars: contains environment variables (i.e. compartment OCID)
        • check_network.yml: quick playbook (get facts) to verify the setup    -- YAML
        • sample.yaml and setup.yaml: to deploy our instance                   -- YAML
        • teardown.yaml: to destroy our lab                                    -- YAML
        • Jinja templates that help us load the egress/ingress security lists (rules), as sketched below  --- J2
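
      As a rough sketch of those templates, inferred from the rendered rules in the run output further down (the repo's actual .j2 files may differ), the ingress template boils down to:

        # templates/ingress_security_rules.yaml.j2 -- sketch inferred from the playbook output below
        instance_ingress_security_rules:
          - source: "0.0.0.0/0"
            protocol: "6"               # TCP
            tcp_options:
              destination_port_range:   # SSH
                min: 22
                max: 22
          - source: "0.0.0.0/0"
            protocol: "6"
            tcp_options:
              destination_port_range:   # HTTP
                min: 80
                max: 80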


      III. Deployment                                                                                      Go to Top⭡

      1. Environment variables and setup check

      Assign the SAMPLE_COMPARTMENT_OCID variable in a_env_vars with your compartment OCID and source the file.

        Once done, run a test playbook that will generate an SSH key pair and load the security lists from our Jinja templates.

        $ cd ansible-examples/oci-ansible/launch_free_instance/
        $ . a_env_vars
        -- Test your setup using
        $ ansible-playbook check_network.yml
      2. Launch the instance

      Let's now launch our instance. You can click on sample.yaml to see its content, but it basically performs the following:

      • sample.yaml parses the declared variables, then calls setup.yaml, which provisions the following
      • Generates a temporary, host-specific SSH key pair to be used to connect to the instance after the launch
      • Creates the necessary network (VCN, subnet, ...) and storage (volume) resources to be attached to the instance
      • sample.yaml then creates and launches a new instance based on the above resources, running CentOS 7 on the Always Free VM.Standard.E2.1.Micro shape
      • I added config_profile_name: "{{ config_profile }}" so you can switch to a specific profile listed in your OCI config file (~/.oci/config)

      • The tasks are self-explanatory. A task report returns OK when it is just a set_fact/lookup, and CHANGED when something is actually changed/created.
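
      Since config_profile is exposed as a playbook variable (per the bullet above), you could also override it at run time with Ansible's standard extra-vars flag (the profile name here is just an example):

        $ ansible-playbook sample.yaml -e "config_profile=DEFAULT"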


        $ ansible-playbook sample.yaml
            
        [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
        
        PLAY [Launch a compute instance and connect to it using SSH] *********************************************************************************
        
        TASK [Gathering Facts] *********************************************************************************
        ok: [localhost]
        
        TASK [Check pre-requisites] *********************************************************************************
        skipping: [localhost] => (item=SAMPLE_COMPARTMENT_OCID)
        
        TASK [List availbility domains] *********************************************************************************
        ok: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [print the availability domain] *********************************************************************************
        ok: [localhost] => {
            "msg": [{"compartment_id": "ocid1.tenancy.oc1..xxx",
                     "id": "ocid1.availabilitydomain.oc1..xxxx",
                     "name": "twts:CA-TORONTO-1-AD-1"} ]     }
        
        TASK [List images] *********************************************************************************
        ok: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [print img_id] *********************************************************************************
        ok: [localhost] => {"msg": "the name of the image is CentOS-7-2020.07.20-0"}
        
        TASK [List shapes in first AD] *********************************************************************************
        ok: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost] => (item={u'memory_options': None, u'networking_bandwidth_in_gbps': 0.48, u'ocpus': 1.0, u'local_disks': 0, u'networking_bandwidth_options': None, u'shape': u'VM.Standard.E2.1.Micro', u'max_vnic_attachments': 1, u'ocpu_options': None, u'local_disks_total_size_in_gbs': None, u'gpu_description': None, u'memory_in_gbs': 1.0, u'gpus': 0, u'local_disk_description': None, u'max_vnic_attachment_options': None, u'processor_description': u'2.0 GHz AMD EPYC\u2122 7551 (Naples)'})
        
        TASK [List shapes in second AD] *********************************************************************************
        skipping: [localhost]
        
        TASK [set_fact] *********************************************************************************
        skipping: [localhost]
        
        TASK [List shapes in third AD] *********************************************************************************
        skipping: [localhost]
        
        TASK [set_fact] *********************************************************************************
        skipping: [localhost]
        
        TASK [Create a temp directory to house a temporary SSH keypair for the instance] ********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Generate a Private Key] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Generate a Public Key] *********************************************************************************
        changed: [localhost]
        
        TASK [Create a VCN] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Create a new Internet Gateway] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Create route table to connect internet gateway to the VCN] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [create ingress rules yaml body] *********************************************************************************
        ok: [localhost]
        
        TASK [create egress yaml body] *********************************************************************************
        ok: [localhost]
        
        TASK [load the variables defined in the ingress rules yaml body] *********************************************************************************
        ok: [localhost]
        
        TASK [print loaded_ingress] *********************************************************************************
        ok: [localhost] => {
            "msg": "loaded ingress is {u'instance_ingress_security_rules': [{u'source': u'0.0.0.0/0', u'protocol': u'6', u'tcp_options': {u'destination_port_range': {u'max': 22, u'min': 22}}}, {u'source': u'0.0.0.0/0', u'protocol': u'6', u'tcp_options': {u'destination_port_range': {u'max': 80, u'min': 80}}}]}"
        }
        
        TASK [load the variables defined in the egress rules yaml body] *********************************************************************************
        ok: [localhost]
        
        TASK [print loaded_egress] *********************************************************************************
        ok: [localhost] => {
            "msg": "loaded egress is {u'instance_egress_security_rules': [{u'tcp_options': {u'destination_port_range': {u'max': 22, u'min': 22}}, u'destination': u'0.0.0.0/0', u'protocol': u'6'}]}"
        }
        
        TASK [Create a security list for allowing access to public instance] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Create a subnet to host the instance. Link security_list and route_table.] *********************************************************************************
        changed: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Launch an instance]---->  ACTUAL VM SPIN *********************************************************************************
        changed: [localhost]
        
        TASK [Print instance details] *********************************************************************************
        ok: [localhost] => {
            "msg": "Launched a new instance {u'instance': {u'time_created': u'2020-08-10T04:39:54.207000+00:00', u'primary_public_ip': u'132.145.100.168', u'shape': u'VM.Standard.E2.1.Micro', u'ipxe_script': None, u'id': u'ocid1.instance.oc1.ca-toronto-1.xxx', u'agent_config': {u'is_monitoring_disabled': False, u'is_management_disabled': False}, u'fault_domain': u'FAULT-DOMAIN-1', u'extended_metadata': {}, u'time_maintenance_reboot_due': None, u'compartment_id': u'ocid1.tenancy.oc1..xxx', u'defined_tags': {u'Oracle-Tags': {u'CreatedOn': u'2020-08-10T04:39:53.283Z', u'CreatedBy': u'oracleidentitycloudservice/brokedba’}}, u'primary_private_ip': u'192.168.10.2', u'freeform_tags': {}, u'source_details': {u'source_type': u'image', u'image_id': u'ocid1.image.oc1.ca-toronto-1.xxxx', u'kms_key_id': None, u'boot_volume_size_in_gbs': None}, u'dedicated_vm_host_id': None, u'metadata': {u'ssh_authorized_keys': u'ssh-rsa xxxbcG5fPEwc+yUGN4nYXbTWgTeV'}, u'system_tags': {u'orcl-cloud': {u'free-tier-retained': u'true'}}, u'image_id': u'ocid1.image.oc1.ca-toronto-1.aaaaaaaaxxxx', u'availability_domain': u'twts:CA-TORONTO-1-AD-1', u'display_name': u'ansi_inst', u'lifecycle_state': u'RUNNING', u'shape_config': {u'networking_bandwidth_in_gbps': 0.48, u'ocpus': 1.0, u'local_disks': 0, u'max_vnic_attachments': 1, u'local_disks_total_size_in_gbs': None, u'gpu_description': None, u'memory_in_gbs': 1.0, u'gpus': 0, u'local_disk_description': None, u'processor_description': u'2.0 GHz AMD EPYC\\u2122 7551 (Naples)'}, u'region': u'ca-toronto-1', u'launch_options': {u'remote_data_volume_type': u'PARAVIRTUALIZED', u'firmware': u'UEFI_64', u'boot_volume_type': u'PARAVIRTUALIZED', u'is_consistent_volume_naming_enabled': True, u'network_type': u'PARAVIRTUALIZED', u'is_pv_encryption_in_transit_enabled': False}, u'launch_mode': u'PARAVIRTUALIZED'}, u'changed': True, 'failed': False"
        }
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Create a volume] *********************************************************************************
        ok: [localhost]
        TASK [Print volume details] *********************************************************************************
        ok: [localhost] => {
            "msg": "Created a new volume {u'volume': {u'lifecycle_state': u'AVAILABLE', u'size_in_gbs': 50, u'display_name': u'ansi_vol', u'volume_group_id': None, u'compartment_id': u'ocid1.tenancy.oc1..axxxx', u'defined_tags': {}, u'system_tags': {u'orcl-cloud': {u'free-tier-retained': u'true'}}, u'kms_key_id': None, u'freeform_tags': {}, u'time_created': u'2020-05-25T02:03:21.633000+00:00', u'source_details': None, u'availability_domain': u'twts:CA-TORONTO-1-AD-1', u'size_in_mbs': 51200, u'is_hydrated': True, u'vpus_per_gb': 10, u'id': u'ocid1.volume.oc1.ca-toronto-1.ab2g6'}, 'failed': False, u'changed': False}"}
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Attach volume to new instance] *********************************************************************************
        changed: [localhost]
        
        TASK [Print volume attachment details] *********************************************************************************
        ok: [localhost] => {"msg": "Attached volume to instance {'failed': False, u'changed': True, u'volume_attachment': {u'lifecycle_state': u'ATTACHED', u'availability_domain': u'twts:CA-TORONTO-1-AD-1', u'display_name': u'volumeattachment20200810044202', u'compartment_id': u'ocid1.tenancy.oc1..xxxx', u'iscsi_detach_commands': [], u'time_created': u'2020-08-10T04:42:02', u'id': u'ocid1.volumeattachment.oc1.ca-toronto-1.anxxx', u'instance_id': u'ocid1.instance.oc1.ca-toronto-1.anxxxx', u'is_read_only': False, u'volume_id': u'ocid1.volume.oc1.ca-toronto-1.ab2g6xxxx', u'device': None, u'is_shareable': False, u'attachment_type': u'paravirtualized', u'is_pv_encryption_in_transit_enabled': False, u'iscsi_attach_commands': []}}"}
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        TASK [Get the VNIC attachment details of instance] *********************************************************************************
        ok: [localhost]
        
        TASK [Get details of the VNIC] *********************************************************************************
        ok: [localhost]
        
        TASK [set_fact] *********************************************************************************
        ok: [localhost]
        
        TASK [Print the public ip of the newly launched instance] *********************************************************************************
        ok:[localhost]=> {"msg":"Public IP of launched instance 132.145.100.168"}
        
        TASK [Wait (upto 5 minutes) for port 22 to become open] *********************************************************************************
        ok: [localhost]
        
        TASK [Attempt a ssh connection to the newly launced instance] *********************************************************************************
        changed: [localhost]
        TASK [Print SSH response from launched instance] *********************************************************************************
        ok: [localhost] => {"msg": "SSH response from instance –> 
        [u'Linux ansi-compute 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux']"
        }
        PLAY RECAP *********************************************************************************
        localhost: ok=46 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
      • Let's log in manually this time from our terminal; we will need to locate the generated SSH key folder in the temp directory
      • $ ssh -i /tmp/ansible.VlmG3Dcert/id_rsa.pem opc@132.145.100.168
        [opc@ansi-compute ~]$ cat /etc/redhat-release
        CentOS Linux release 7.8.2003 (Core)       
        
      • If you want to destroy the instance right after creation just uncomment the teardown.yaml call at the end of the sample.yaml playbook.

      IV. Bummer                                                                                           Go to Top⭡


      • Even though Ansible provides ways to manage infrastructure and configuration in the cloud, its statelessness makes it much harder to rely on for provisioning compared to Terraform.
      • For example, in Terraform you declare the desired end state, and Terraform keeps track of what exists and what needs to be created in a state file. Ansible, on the other hand, executes the tasks in sequence without keeping track of what has already been done.
      • To demonstrate the above point, let's try running our playbook a second time.
    • LAUNCH THE INSTANCE FOR THE 2ND TIME
    • Surprise!!!… right on, it creates an exact copy of the previous instance (a duplicate). Not a single warning when you run it.

        [Image: duplicated instances in the OCI console (image-7.png)]

         CONCLUSION                                                                                                                                  

        • We proved in this lab that Ansible can also deploy cloud resources (instances), but not without caveats.
        • Ansible's idempotence claims are a bit overblown here, as idempotence relies on a direct SSH connection to the target host, which is not managed in these modules. I have been told assert statements would fix this, but it didn't work for me.
        • The best practice from a DevOps perspective at this point is to have Terraform launch (bootstrap) custom images that are already mostly ready to go (~90%) and have the last 10% happen via cloud-init (bash, ansible-pull, etc.).
        • However, if you want to test your service on dedicated servers, Ansible might be better suited for the task.
        • That being said, even HashiCorp and Red Hat, which respectively own both tools, never ignored their potential complementarity, as you can see in this joint presentation called "Ansible & Terraform: Better Together" ;)

         EDIT                                                                                                                                  

        • "Dynamic inventory scripts" and "Dynamic inventory plugin" can be used with Ansible to more easily manage infrastructure (checks for the existence of a resource via tags before creating it). However, it didn't work for me and the configuration doc wasn't that clear (the playbook kept running like nothing changed).
        • If we want to stick to the distinct benefit for each of Tf & Ansible the below graphic is pretty fair .

          Image
        • Some also like executing #Ansible playbooks with Packer to prepare the custom image then spin it using terraform as Cloudinit script modification can be a source of undetected configuration drift.

        • Thanks for reading. Next stop: AWS                                                                                                                                      Go to Top⭡

      Saturday, July 4, 2020

      Terraform for dummies: Launch an instance with a static website on OCI

      Intro

      Terraform brings a new paradigm where infrastructure becomes code, and with the Cloud becoming what it is today, everyone is invited to the (DevOps) table. Therefore, after provisioning with oci-cli in my previous blog post, I will explore the same task using Terraform. To add more fun, we won't just deploy an instance but also configure a website linked to its public IP.
       Note: this lab will also help you practice if you are preparing for the OCI Operations Associate exam (1Z0-1067).

      Overview and Concepts

      Topology

      The following illustration shows the layers involved between your workstation and Oracle Cloud Infrastructure while running the Terraform commands, along with the instance attributes we will be provisioning.

      Besides describing my GitHub repo before starting this tutorial, I'll briefly discuss some principles.

    • Infrastructure as Code: manages and provisions cloud resources using declarative code (i.e. Terraform) and definition files, avoiding interactive configuration. Terraform is an immutable orchestrator that creates and deletes all resources in the proper sequence. Each cloud vendor has what we call a provider, which Terraform uses to convert the declarative text into API calls reaching the cloud infrastructure layer.


    • Terraform Files
    • - Can be a single file or split into multiple .tf or .tf.json files; any other file extension is ignored
      - Files are merged in alphabetical order, but resource definition order doesn't matter (subfolders are not read)
      - Common configurations have 3 types of .tf files plus a state file:
        1- main.tf : terraform declaration code (configuration)
        2- variables.tf : resource variables needed for the deploy
        3- outputs.tf : displays the resources' details at the end of the deploy
        4- terraform.tfstate : keeps track of the state of the stack (resources) after each terraform apply run
    • Terraform resource declaration syntax looks like this:
    • Component "Provider_Resource_type" "MyResource_Name" { Attribute1 = value .. 
                                                             Attribute2 = value ..}
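
      For a concrete instance of that syntax, grounded in the VCN attributes that show up in the plan output later in this post (the variable name matches env-vars below; the display name is an assumption), a minimal block could be:

        resource "oci_core_vcn" "vcnterra" {
          compartment_id = var.compartment_ocid   # from TF_VAR_compartment_ocid
          cidr_block     = "192.168.64.0/20"      # matches the plan output below
          display_name   = "terra_vcn"            # assumed display name
        }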

      Where the hell do I find a good deployment sample?
      The most important thing when learning a new tool is accomplishing your first HelloWorld. Unfortunately, Google can't always make the cut, as the samples I found had errors. Luckily, OCI Resource Manager had some samples I managed to export and tweak, which was a good starting point for this lab.

      Terraform lab content: I have deliberately split this lab in two:
        1. Partial deployment : a simple VCN (section IV)
        2. Full deployment : the instance and its static website (section V)


      I.Terraform setup

         Since I'm on Windows, I tried the lab using both Git Bash and WSL (Linux) terminal clients, but the same applies to macOS.

        Windows:  Download and run the installer from their website (32-bit ,64-bit)

        Linux      :  Download, unzip and move the binary to the local bin directory

        $ wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip
        $ unzip terraform_0.12.28_linux_amd64.zip
        $ mv terraform /usr/local/bin/
      • Once installed run the version command to validate your installation

        $ terraform --version
          Terraform v0.12.24
         OCI API Key based authentication

        API Key authentication requires that you provide the following OCI credentials:

        • Tenancy_ocid, Compartment_ocid, user_ocid and the region

        • The private API key path and its fingerprint to authenticate  with your tenancy account

        • The SSH key pair (Private/Public) required when launching the new compute instance

         Assumptions

        - Terraform shares most of its authentication parameters with oci-cli (located in ~/.oci/config). Please refer to my other post for details on how to set up oci-cli if it isn't done yet.

        - However, terraform also allows using environment variables to define these parameters. This is why I will be using a shell script that sets them before the deployment (I still needed oci-cli for API keys).

      II. Clone the repository
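
      • Pick an area on your file system and clone the repo (the URL is an assumption based on the working paths in the next section).
          $ git clone https://github.com/brokedba/terraform-examples.git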



      III. Provider setup

      1. INSTALL AND SETUP THE OCI PROVIDER

        • cd into the subdirectory terraform-provider-oci/create-vcn where our configuration resides (i.e. the vcn)
          $ cd /c/Users/brokedba/oci/terraform-examples/terraform-provider-oci/create-vcn
        • The OCI provider plugin is distributed by HashiCorp, hence it will be automatically installed by terraform init.
        • $ terraform init
            Initializing the backend...
          
            Initializing provider plugins...
            - Checking for available provider plugins...
            - Downloading plugin for provider "oci" (hashicorp/oci) 3.83.1...
            * provider.oci: version = "~> 3.83"
          
          $ terraform --version
            Terraform v0.12.24
            + provider.oci v3.83.1   ---> the provider is now installed
            
        • Let's see what's in the create-vcn directory. Here, only *.tf files matter along with env-vars (click to see content)
        • $ tree
            .
            |-- env-vars          ---> TF_environment_variables needed to authenticate to OCI 
            |-- outputs.tf        ---> displays the resources detail at the end of the deploy
            |-- schema.yaml       ---> Contains the stack (variables) description    
            |-- variables.tf      ---> Resource variables needed for the deploy   
            `-- vcn.tf            ---> Our vcn terraform declaration code (configuration)        
          
        • Adjust the required authentication parameters in env-vars file according to your tenancy and key pairs (API/SSH).
        • $ vi env-vars 
          
            export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"             # change me 
            export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaaa"                   # change me 
            export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaaaaaa"         # change me 
            export TF_VAR_fingerprint=$(cat PATH_To_Fing/oci_api_key_fingerprint)# change me 
            export TF_VAR_private_key_path=PATH_To_APIKEY/oci_api_key.pem        # change me 
            export TF_VAR_ssh_public_key=$(cat PATH_To_PublicSSH/id_rsa.pub)     # change me 
            export TF_VAR_ssh_private_key=$(cat PATH_To_PrivateSSH/id_rsa)       # change me 
            export TF_VAR_region="ca-toronto-1"                                  # change me 
            $ . env-vars

        IV. Partial Deployment

          DEPLOY A SIMPLE VCN

            • Now that the env-vars values are set and sourced, we can run the terraform plan command to create an execution plan (a quick dry run to check the desired state/actions)
              $ terraform plan
                 Refreshing Terraform state in-memory prior to plan... 
                ------------------------------------------------------------------------
                An execution plan has been generated and is shown below.
                  Terraform will perform the following actions:
              
                  # oci_core_default_route_table.rt will be created
                  + resource "oci_core_default_route_table" "rt" 
                  {..}
                  # oci_core_internet_gateway.gtw will be created
                  + resource "oci_core_internet_gateway" "gtw" 
                  {..}
                     
                  # oci_core_security_list.terra_sl will be created
                  + resource "oci_core_security_list" "terra_sl" {
                      + egress_security_rules {..}
                      + ingress_security_rules {..
                          + tcp_options {+ max = 22 + min = 22}}
                      + ingress_security_rules {..
                           + tcp_options { + max = 80 + min = 80}}
                   }
              
                  # oci_core_subnet.terrasub[0] will be created
                  + resource "oci_core_subnet" "terrasub" {
                      + availability_domain        = "BahF:CA-TORONTO-1-AD-1"
                      + cidr_block                 = "192.168.78.0/24"
                      ...}
              
                  # oci_core_vcn.vcnterra will be created
                  + resource "oci_core_vcn" "vcnterra" {
                      + cidr_block               = "192.168.64.0/20"
                      ...}
              
                Plan: 5 to add, 0 to change, 0 to destroy.
              

              - The output being too verbose, I deliberately kept only the relevant attributes for each VCN component
                  
            • Next, we can finally run terraform apply to apply the changes required to create our VCN (listed in the plan)
            • $ terraform apply -auto-approve
              oci_core_vcn.vcnterra: Creating...
              ...
              Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
              
              Outputs:
              
              default_dhcp_options_id = ocid1.dhcpoptions.oc1.ca-toronto-1.aaaaaaaaasxxxx
              default_route_table_id = ocid1.routetable.oc1.ca-toronto-1.aaaaaaaaaxxx
              default_security_list_id = ocid1.securitylist.oc1.ca-toronto-1.aaaaaaaaxx
              internet_gateway_id = ocid1.internetgateway.oc1.ca-toronto-1.aaaaaaaaxxxx
              subnet_ids = ["ocid1.subnet.oc1.ca-toronto-1.aaaaaaaaxxx,]
              vcn_id = ocid1.vcn.oc1.ca-toronto-1.amaaaaaaaxxx
               

            Observations :

            - The deploy started by loading the resources variables in variables.tf which allowed the execution of vcn.tf
            - Finally terraform fetched the variables (ocids) of the resources listed in outputs.tf (lookup)

            Note : In order to continue the lab, we will need to destroy the VCN, as the full instance launch will recreate it.

              $ terraform destroy -auto-approve
              
              Destroy complete! Resources: 5 destroyed.
              


          V. Full deployment (Instance)

          1. OVERVIEW

            • Awesome! After our small test, let's launch a full instance from scratch.
            • First we need to switch to the second directory terraform-provider-oci/launch-instance/
              Here's its content:
            • $ tree ./terraform-provider-oci/launch-instance
              .
              |-- cloud-init           ---> subfolder
              |   `-- vm.cloud-config  ---> script to install a web server & add a webpage at startup
              |-- compute.tf    ---> instance-related terraform configuration
              |-- env-vars      ---> authentication environment variables
              |-- outputs.tf    ---> displays the resources detail at the end of the deploy
              |-- schema.yaml   ---> contains the stack (variables)
              |-- variables.tf  ---> resource variables needed for the deploy
              `-- vcn.tf        ---> same vcn terraform declaration
              

              Note: As you can see, we have two additional files and one subfolder.
              compute.tf is where the compute instance and all its attributes are declared. All the other .tf files come from my vcn example, with some additions to variables.tf and outputs.tf

            • Cloud-init: a cloud instance initialization method that executes tasks upon instance startup. You provide your script through the user_data entry in the metadata block of the Terraform oci_core_instance resource definition (see below).
              $ vi compute.tf
              resource "oci_core_instance" "terra_inst" {
              ...
              metadata = {
                ssh_authorized_keys = file("../../.ssh/id_rsa.pub") ---> Upload sshkey
                user_data = base64encode(file("./cloud-init/vm.cloud-config")) ---> Run tasks 
                    }      
              ...
            • In my lab, I used cloud-init to install nginx and write an HTML page that becomes the server's homepage at startup, as sketched below.
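
              A minimal sketch of what such a vm.cloud-config could contain (assumptions: CentOS image with yum, nginx coming from EPEL; the lab's actual file lives in the repo and embeds the video page mentioned below):

                #cloud-config
                # sketch only -- see cloud-init/vm.cloud-config in the repo for the real thing
                runcmd:
                  - yum install -y epel-release
                  - yum install -y nginx
                  - echo '<h1>Deployed on OCI with Terraform + cloud-init</h1>' > /usr/share/nginx/html/index.html
                  - systemctl enable --now nginx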
          2. LAUNCH THE INSTANCE

            • Once in the launch-instance directory, make sure you copied the adjusted env-vars file and sourced it (see III. Provider setup). You can then run the plan command (output is truncated for visibility)
            • $ terraform plan
                 Refreshing Terraform state in-memory prior to plan... 
                ------------------------------------------------------------------------
                An execution plan has been generated and is shown below.
                  Terraform will perform the following actions:
              
                ... # VCN declaration 
                # oci_core_instance.terra_inst will be created
                + resource "oci_core_instance" " terra_inst" {
                    + ...
                    + defined_tags                        = (known after apply)
                    + display_name                        = "TerraCompute"
                    + metadata                            = {
                       + "ssh_authorized_keys" =...
                       + "user_data"           = " ...
                    + shape                               = "VM.Standard.E2.1.Micro"
                    + ...
                    + create_vnic_details {
                    + hostname_label         = "terrahost"
                    + private_ip             = "192.168.78.51"
                    ..}
                    + source_details {
                        + boot_volume_size_in_gbs = "50"
                        + source_type             = "image"
                         ..}
                 # oci_core_volume.terra_vol will be created
                 + resource "oci_core_volume" "terra_vol" {..}
                 # oci_core_volume_attachment.terra_attach will be created
                 + resource "oci_core_volume_attachment" "terra_attach" {..}
                 ...
                Plan: 8 to add, 0 to change, 0 to destroy.
              
            • Now let the cloud party begin and provision our instance ( output has been truncated for more visibility)
            • $ terraform apply -auto-approve
              ...
              oci_core_instance.terra_inst: Creation complete after 1m46s
              oci_core_volume.terra_vol: Creation complete after 14s  
              oci_core_volume_attachment.terra_attach: Creation complete after 33s  
              ...
              Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
              
              Outputs:
              ...
              private_ip = [ "192.168.78.51",]
              public_ip  = [ "132.145.108.51",]
              


          3. CONNECTION TO YOUR INSTANCE WEB PAGE 

          Your Web Page is 🔥 :)

            • Once the instance is provisioned, just copy the public IP address (132.145.108.51) into your browser and voila!
            • You have made yourself a fresh and shiny website ready to roll.
            • Here I just embedded a video link into the index page, but you can adapt the cloud-config to your own liking
            • You can also tear down this configuration by simply running terraform destroy from the same directory 
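            • If you prefer the terminal, a quick header check against the public IP confirms the web server answers on port 80 (the port our security list opened):
                $ curl -I http://132.145.108.51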

             CONCLUSION

            • We have demonstrated in this tutorial how to quickly deploy an instance using Terraform in OCI and leverage cloud-init to bootstrap the instance into a web server.
            • Remember that all the attributes used in this exercise can be modified in the variables.tf file.
            • In my next blog post, we will explore how to provision instances using the OCI Ansible modules.