A quick one to say that I am very happy to be a new member of the OakTable Network.
Thanks Kevin Closson for my nomination, I feel deeply honored!
As you may have noticed, Oracle released Docker build files for the Oracle Database on GitHub a few weeks ago, along with an associated blog post. If you read the blog post, you will notice that you need to download the Oracle binaries manually.
The purpose of this post is to provide a modified version of the original build files (focusing on the 12.1.0.2 database EE only), so that the oracle binaries are downloaded during the build of the image (thanks to Maris Elsins’s getMOSPatch script).
The manual steps are:
root@docker# yum install docker-engine
root@docker# docker daemon --storage-opt dm.basesize=20G
root@docker# git clone https://github.com/bdrouvot/oracledb-docker.git
root@docker# docker build --force-rm=true --no-cache=true -t oracle/database:12.1.0.2 .
That’s it, now we can:
Simply start a docker container that way:
root@docker# docker run -p 1521:1521 --name bdt12c oracle/database:12.1.0.2
The host that is running the docker engine is “docker”. You can connect to the database as you normally would, for example using Oracle SQL Developer:
Note that the Hostname is “docker”, that is to say the one that is hosting the docker engine.
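If you prefer the command line, an easy connect string of this form should work as well (a sketch; the service name ORCL and the password are placeholders, use the values defined by the build files):

sqlplus system/your_password@//docker:1521/ORCL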
Auditing the Oracle database can produce a wide variety of information.
What about having all this information centralized? What about having the possibility to gather, format, search, analyze and visualize this information in near real time?
To achieve this, let’s use the ELK stack:
We’ll focus on the audit information coming from:
You should first read this post: Push the oracle alert.log and listener.log into Elasticsearch and analyze/visualize their content with Kibana. The current post relies on it and gives less explanation about the installation, setup and so on.
The installation of those 3 layers is the same as described in this blog post.
To achieve this we’ll use the logstash JDBC input (Robin Moffatt provided an interesting use case and explanation of the logstash JDBC input in this blog post) so that:
To trap and format this information, let’s create an audit_database.conf configuration file that looks like:
input {
    jdbc {
        jdbc_validate_connection => true
        jdbc_connection_string => "jdbc:oracle:thin:@localhost:1521/PBDT"
        jdbc_user => "system"
        jdbc_password => "bdtbdt"
        jdbc_driver_library => "/u01/app/oracle/product/11.2.0/dbhome_1/jdbc/lib/ojdbc6.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        statement => "select os_username,username,userhost,timestamp,action_name,comment_text,sessionid,returncode,priv_used,global_uid from dba_audit_trail where timestamp > :sql_last_value"
        schedule => "*/2 * * * *"
    }
}

filter {
    # Set the timestamp to the one of dba_audit_trail
    mutate { convert => [ "timestamp" , "string" ] }
    date { match => ["timestamp", "ISO8601"] }

    if [comment_text] =~ /(?i)Authenticated by/ {

        grok {
            match => [ "comment_text","^.*(?i)Authenticated by: (?<authenticated_by>.*?)\;.*$" ]
        }

        if [comment_text] =~ /(?i)EXTERNAL NAME/ {
            grok {
                match => [ "comment_text","^.*(?i)EXTERNAL NAME: (?<external_name>.*?)\;.*$" ]
            }
        }
    }

    # remove temporary fields
    mutate { remove_field => ["timestamp"] }
}

output {
    elasticsearch {
        hosts => ["elk:9200"]
        index => "audit_databases_oracle-%{+YYYY.MM.dd}"
    }
}
so that an entry in dba_audit_trail like:
will be formatted and sent to elasticsearch this way:
{ "os_username" => "bdt", "username" => "ORG_USER", "userhost" => "bdts-MacBook-Pro.local", "action_name" => "LOGON", "comment_text" => "Authenticated by: DIRECTORY PASSWORD;EXTERNAL NAME: cn=bdt_dba,cn=users,dc=bdt,dc=com; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.1)(PORT=49515))", "sessionid" => 171615.0, "returncode" => 0.0, "priv_used" => "CREATE SESSION", "global_uid" => "4773e70a9c6f4316be03169d8a06ecab", "@version" => "1", "@timestamp" => "2016-07-01T06:47:51.000Z", "authenticated_by" => "DIRECTORY PASSWORD", "external_name" => "cn=bdt_dba,cn=users,dc=bdt,dc=com" }
So that:
To trap and format this information, let’s create an audit_files.conf configuration file that looks like:
input {
    file {
        path => "/u01/app/oracle/admin/PBDT/adump/*.aud"
    }
}

filter {

    # Join lines based on the time
    multiline {
        pattern => "%{DAY} %{MONTH} *%{MONTHDAY} %{TIME} %{YEAR}.*"
        negate => true
        what => "previous"
    }

    # Extract the date and the rest from the message
    grok {
        match => [ "message","%{DAY:day} %{MONTH:month} *%{MONTHDAY:monthday} %{TIME:time} %{YEAR:year}(?<audit_message>.*$)" ]
    }

    grok {
        match => [ "audit_message","^.*ACTION :\[[0-9]*\] (?<action>.*?)DATABASE USER:\[[0-9]*\] (?<database_user>.*?)PRIVILEGE :\[[0-9]*\] (?<privilege>.*?)CLIENT USER:\[[0-9]*\] (?<client_user>.*?)CLIENT TERMINAL:\[[0-9]*\] (?<client_terminal>.*?)STATUS:\[[0-9]*\] (?<status>.*?)DBID:\[[0-9]*\] (?<dbid>.*$?)" ]
    }

    if "_grokparsefailure" in [tags] { drop {} }

    mutate {
        add_field => { "timestamp" => "%{year} %{month} %{monthday} %{time}" }
    }

    # replace the timestamp by the one coming from the audit file
    date {
        locale => "en"
        match => [ "timestamp" , "yyyy MMM dd HH:mm:ss" ]
    }

    # remove temporary fields
    mutate { remove_field => ["audit_message","day","month","monthday","time","year","timestamp"] }
}

output {
    elasticsearch {
        hosts => ["elk:9200"]
        index => "audit_databases_oracle-%{+YYYY.MM.dd}"
    }
}
so that an audit file entry like:
Fri Jul 1 07:13:56 2016 +02:00
LENGTH : '160'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[5] 'pts/1'
STATUS:[1] '0'
DBID:[10] '3270644858'
will be formatted and sent to elasticsearch this way:
{ "message" => "Fri Jul 1 07:13:56 2016 +02:00\nLENGTH : '160'\nACTION :[7] 'CONNECT'\nDATABASE USER:[1] '/'\nPRIVILEGE :[6] 'SYSDBA'\nCLIENT USER:[6] 'oracle'\nCLIENT TERMINAL:[5] 'pts/1'\nSTATUS:[1] '0'\nDBID:[10] '3270644858'\n", "@version" => "1", "@timestamp" => "2016-07-01T07:13:56.000Z", "path" => "/u01/app/oracle/admin/PBDT/adump/PBDT_ora_2387_20160701071356285876143795.aud", "host" => "Dprima", "tags" => [ [0] "multiline" ], "action" => "'CONNECT'\n", "database_user" => "'/'\n", "privilege" => "'SYSDBA'\n", "client_user" => "'oracle'\n", "client_terminal" => "'pts/1'\n", "status" => "'0'\n", "dbid" => "'3270644858'\n" }
The Kibana configuration has already been described in this blog post.
Let’s see 2 examples of audit data visualisation:
As we can see, most of the connections are authenticated by Directory Password and are successful.
As we can see, some of the sysdba connections are not successful between 07:57 am and 07:58 am. Furthermore, the number of unsuccessful connections is greater than the number of successful ones.
Thanks to the ELK stack you can gather, centralize, analyze and visualize the oracle audit data for your whole datacenter the way you want to.
Oracle Unified Directory (OUD) can be used to centrally manage database users across the enterprise.
It allows us to manage roles and privileges across various databases registered with the directory.
Users connect to the database by providing credentials that are stored in Oracle Unified Directory; the database then executes LDAP search operations to query user-specific authentication and authorization information.
This post does not cover the OUD installation.
So, once the OUD has been installed, the remaining steps are:
Let's set it up.
>$ cat $ORACLE_HOME/network/admin/ldap.ora
DIRECTORY_SERVERS=(oud:1389:1636)
DEFAULT_ADMIN_CONTEXT="dc=bdt,dc=com"
DIRECTORY_SERVER_TYPE=OID

>$ cat register_database_oud.ksh
dbca -silent -configureDatabase -sourceDB PBDT -registerWithDirService true -dirServiceUserName "cn=orcladmin" -dirServicePassword "bdtbdt" -walletPassword "monster123#"

>$ ksh ./register_database_oud.ksh
Preparing to Configure Database
6% complete
13% complete
66% complete
Completing Database Configuration
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/PBDT/PBDT20.log" for further details.
SQL> !cat prepare_oud_users.sql
create role org_dba identified globally;
grant dba to org_dba;
create role org_connect identified globally;
grant create session to org_connect;
create user org_user identified globally;

SQL> @prepare_oud_users.sql

Role created.

Grant succeeded.

Role created.

Grant succeeded.

User created.
As you can see:
As an example, an ldif file has been created to:
>$ cat newgroupsusers.ldif
dn: cn=groups,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: groupOfNames
cn: groups

dn: cn=users,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: groupOfNames
cn: users

dn: cn=bdt_connect,cn=users,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: bdt_connect
sn: bdt_connect
uid: bdt_connect
userpassword: bdtconnect

dn: cn=CONNECT_GROUP,cn=groups,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: groupOfNames
cn: CONNECT_GROUP
member: cn=bdt_connect,cn=users,dc=bdt,dc=com

dn: cn=bdt_dba,cn=users,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: bdt_dba
sn: bdt_dba
uid: bdt_dba
userpassword: bdtdba

dn: cn=DBA_GROUP,cn=groups,dc=bdt,dc=com
changetype: add
objectclass: top
objectclass: groupOfNames
cn: DBA_GROUP
member: cn=bdt_dba,cn=users,dc=bdt,dc=com
and then launch:
$> cat create_ldap_groups_users.ksh
ldapadd -h oud -p 1389 -D "cn=orcladmin" -w bdtbdt -f ./newgroupsusers.ldif -v

$> ksh ./create_ldap_groups_users.ksh
So the users and groups have been created in the OUD.
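You can quickly verify that the entries exist with an ldapsearch of this form (a sketch, reusing the orcladmin credentials used in the previous script):

ldapsearch -h oud -p 1389 -D "cn=orcladmin" -w bdtbdt -b "cn=users,dc=bdt,dc=com" "(objectclass=inetOrgPerson)" dn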
A graphical view of what has been done in the OUD (thanks to Apache Directory Studio) is:
>$ ksh ./mapdb_ldap.ksh
+ echo 'Mapping User'
Mapping User
+ eusm createMapping database_name=PBDT realm_dn='dc=bdt,dc=com' map_type=SUBTREE map_dn='cn=users,dc=bdt,dc=com' schema=ORG_USER ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ echo 'Create Enterprise role'
Create Enterprise role
+ eusm createRole enterprise_role=PBDT_dba_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ eusm createRole enterprise_role=PBDT_connect_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ echo 'Link Roles'
Link Roles
+ eusm addGlobalRole enterprise_role=PBDT_dba_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' database_name=PBDT global_role=ORG_DBA dbuser=system dbuser_password=bdtbdt dbconnect_string=dprima:1521:PBDT ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ eusm addGlobalRole enterprise_role=PBDT_connect_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' database_name=PBDT global_role=ORG_CONNECT dbuser=system dbuser_password=bdtbdt dbconnect_string=dprima:1521:PBDT ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ echo 'Grant Roles'
Grant Roles
+ eusm grantRole enterprise_role=PBDT_dba_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' group_dn='cn=DBA_GROUP,cn=groups,dc=bdt,dc=com' ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
+ eusm grantRole enterprise_role=PBDT_connect_role domain_name=OracleDefaultDomain realm_dn='dc=bdt,dc=com' group_dn='cn=CONNECT_GROUP,cn=groups,dc=bdt,dc=com' ldap_host=oud ldap_port=1389 ldap_user_dn='cn=orcladmin' ldap_user_password=bdtbdt
So that, for this database only, there is a mapping between:
You can view the result into this video:
As you can see:
Frank Van Bortel already covered this subject in this blog post.
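As a side note, a quick way to exercise the whole chain from a client is something like this (a sketch; bdt_dba and its password come from the ldif above, and the PBDT service is assumed to be reachable with easy connect). The session user should be the shared schema ORG_USER, and external_name should show the directory DN:

sqlplus bdt_dba/bdtdba@//dprima:1521/PBDT
SQL> select sys_context('userenv','session_user') from dual;
SQL> select sys_context('userenv','external_name') from dual;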
The oracle alert.log and listener.log contain useful information to answer questions like:
What about having all this information centralized? What about having the possibility to gather, format, search, analyze and visualize this information in real time?
To achieve this, let’s use the ELK stack:
The installation is very simple.
[root@elk ~]# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.2.1/elasticsearch-2.2.1.rpm
[root@elk ~]# yum localinstall elasticsearch-2.2.1.rpm

[root@elk ~]# grep network.host /etc/elasticsearch/elasticsearch.yml
network.host: elk

[root@elk ~]# /etc/init.d/elasticsearch start
Starting elasticsearch:                                    [  OK  ]

[root@elk ~]# wget https://download.elastic.co/kibana/kibana/kibana-4.4.2-linux-x64.tar.gz
[root@elk ~]# tar -xf kibana-4.4.2-linux-x64.tar.gz --directory /opt
[root@elk ~]# mv /opt/kibana-4.4.2-linux-x64 /opt/kibana

[root@elk ~]# grep elasticsearch.url /opt/kibana/config/kibana.yml
elasticsearch.url: "http://elk:9200"
[root@elk ~]# /opt/kibana/bin/kibana
[root@dprima ~]# wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.2.2-1.noarch.rpm
[root@dprima ~]# yum localinstall logstash-2.2.2-1.noarch.rpm
So that:
New fields are being created so that we can analyze/visualize them later on with Kibana.
input {
    file {
        path => "/u01/app/oracle/diag/rdbms/pbdt/PBDT/trace/alert_PBDT.log"
    }
}

filter {

    # Join lines based on the time
    multiline {
        pattern => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"
        negate => true
        what => "previous"
    }

    # Create new field: oradb_status: starting,running,shutdown
    if [message] =~ /Starting ORACLE instance/ {
        mutate { add_field => [ "oradb_status", "starting" ] }
    } else if [message] =~ /Instance shutdown complete/ {
        mutate { add_field => [ "oradb_status", "shutdown" ] }
    } else {
        mutate { add_field => [ "oradb_status", "running" ] }
    }

    # Search for ORA- and create field if match
    if [message] =~ /ORA-/ {
        grok { match => [ "message","(?<ORA->ORA-[0-9]*)" ] }
    }

    # Extract the date and the rest from the message
    grok {
        match => [ "message","%{DAY:day} %{MONTH:month} %{MONTHDAY:monthday} %{TIME:time} %{YEAR:year}(?<log_message>.*$)" ]
    }

    mutate {
        add_field => { "timestamp" => "%{year} %{month} %{monthday} %{time}" }
    }

    # replace the timestamp by the one coming from the alert.log
    date {
        locale => "en"
        match => [ "timestamp" , "yyyy MMM dd HH:mm:ss" ]
    }

    # replace the message (remove the date)
    mutate { replace => [ "message", "%{log_message}" ] }

    # remove temporary fields
    mutate { remove_field => [ "time" ,"month","monthday","year","timestamp","day","log_message"] }
}

output {
    elasticsearch {
        hosts => ["elk:9200"]
        index => "oracle-%{+YYYY.MM.dd}"
    }
}
[root@dprima ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/alert_log.conf
so that an alert.log entry like:

Sat Mar 26 08:30:26 2016
ORA-1653: unable to extend table SYS.BDT by 8 in tablespace BDT
will be formatted and sent to elasticsearch this way:
{ "message" => "\nORA-1653: unable to extend table SYS.BDT by 8 in tablespace BDT ", "@version" => "1", "@timestamp" => "2016-03-26T08:30:26.000Z", "path" => "/u01/app/oracle/diag/rdbms/pbdt/PBDT/trace/alert_PBDT.log", "host" => "Dprima", "tags" => [ [0] "multiline" ], "oradb_status" => "running", "ORA-" => "ORA-1653" }
So that:
New fields are being created so that we can analyze/visualize them later on with Kibana.
input {
    file {
        path => "/u01/app/oracle/diag/tnslsnr/Dprima/listener/trace/listener.log"
    }
}

filter {

    if [message] =~ /(?i)CONNECT_DATA/ {

        # Extract the date and the rest from the message
        grok {
            match => [ "message","(?<the_date>.*%{TIME})(?<lsnr_message>.*$)" ]
        }

        # Extract COMMAND (like status,reload,stop) and add a field
        if [message] =~ /(?i)COMMAND=/ {
            grok {
                match => [ "lsnr_message","^.*(?i)COMMAND=(?<command>.*?)\).*$" ]
            }
        } else {
            # Extract useful Info (USER,PROGRAM,IPCLIENT) and add fields
            grok {
                match => [ "lsnr_message","^.*PROGRAM=(?<program>.*?)\).*USER=(?<user>.*?)\).*ADDRESS.*HOST=(?<ip_client>%{IP}).*$" ]
            }
        }

        # replace the timestamp by the one coming from the listener.log
        date {
            locale => "en"
            match => [ "the_date" , "dd-MMM-yyyy HH:mm:ss" ]
        }

        # replace the message (remove the date)
        mutate { replace => [ "message", "%{lsnr_message}" ] }

        # remove temporary fields
        mutate { remove_field => [ "the_date","lsnr_message"] }

        # search for SID or SERVICE_NAME, collect dest and add dest type
        if [message] =~ /(?i)SID=/ {
            grok { match => [ "message","^.*(?i)SID=(?<dest>.*?)\).*$" ] }
            mutate { add_field => [ "dest_type", "SID" ] }
        }

        if [message] =~ /(?i)SERVICE_NAME=/ {
            grok { match => [ "message","^.*(?i)SERVICE_NAME=(?<dest>.*?)\).*$" ] }
            mutate { add_field => [ "dest_type", "SERVICE" ] }
        }

    } else {
        drop {}
    }
}

output {
    elasticsearch {
        hosts => ["elk:9200"]
        index => "oracle-%{+YYYY.MM.dd}"
    }
}
[root@Dprima conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/lsnr_log.conf
so that a listener.log entry like:

26-MAR-2016 08:34:57 * (CONNECT_DATA=(SID=PBDT)(CID=(PROGRAM=SQL Developer)(HOST=__jdbc__)(USER=bdt))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.1)(PORT=50379)) * establish * PBDT * 0
will be formatted and sent to elasticsearch this way:
{ "message" => " * (CONNECT_DATA=(SID=PBDT)(CID=(PROGRAM=SQL Developer)(HOST=__jdbc__)(USER=bdt))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.1)(PORT=50380)) * establish * PBDT * 0", "@version" => "1", "@timestamp" => "2016-03-26T08:34:57.000Z", "path" => "/u01/app/oracle/diag/tnslsnr/Dprima/listener/trace/listener.log", "host" => "Dprima", "program" => "SQL Developer", "user" => "bdt", "ip_client" => "192.168.56.1", "dest" => "PBDT", "dest_type" => "SID" }
so that we can now visualize them. Note that an index template is put in place so that the program and ORA- fields are not analyzed by elasticsearch (otherwise Kibana would split them into tokens):
curl -XDELETE elk:9200/oracle*

curl -XPUT elk:9200/_template/oracle_template -d '
{
    "template" : "oracle*",
    "settings" : {
        "number_of_shards" : 1
    },
    "mappings" : {
        "oracle" : {
            "properties" : {
                "program" : { "type" : "string", "index": "not_analyzed" },
                "ORA-"    : { "type" : "string", "index": "not_analyzed" }
            }
        }
    }
}'
Thanks to the ELK stack you can gather, centralize, analyze and visualize the content of the alert.log and listener.log files for your whole datacenter the way you want to. This information is a gold mine and the imagination is the only limit.
On April 1, 2015, I'll start working for the Accenture Enkitec Group. No, this is not an April Fool's joke, and I still can't believe it. Why? Because:
I spent the last 3 years working at the European Commission (Data Center). I really enjoyed this job. But as you can guess, working for the Accenture Enkitec Group is something I can't refuse!
This post is not about self-promotion (I am not a big fan of it) for me or the company; it is just a dump of my mind and feelings.
Looking forward to starting this new challenge!
Last week I wanted to create a new oracle virtual machine on my laptop and discovered that the disk space on my SSD device was more or less exhausted. So I looked for a solution to minimize the disk usage of my oracle virtual machines.
I found a way to achieve this based on these technologies:
In this post I will show how I created an oracle environment on my laptop with these technologies, and how we can clone a container, an ORACLE_HOME and a database in a few seconds with initially no additional disk space.
Note: The goal is to create a “test” environment on your laptop. I would not suggest following this installation process on a “real” system 😉
PREPARATION PHASE
Step 1: let’s create an OEL 6.5 virtual machine (named lxc) using VirtualBox. This virtual machine will host our Linux containers, oracle software and databases.
Step 2: Install lxc and btrfs in the virtual machine created in step 1.
[root@lxc ~]# yum install btrfs-progs
[root@lxc ~]# yum install lxc
[root@lxc ~]# service cgconfig start
[root@lxc ~]# chkconfig cgconfig on
[root@lxc ~]# service libvirtd start
[root@lxc ~]# chkconfig libvirtd on
Step 3: Add a btrfs file system into the virtual machine (This file system will receive the oracle software and databases). To do so, add a disk to your virtualbox machine created in step 1, start the machine and launch the fs creation:
[root@lxc ~]# mkfs.btrfs /dev/sdb
[root@lxc ~]# mkdir /btrfs
[root@lxc ~]# mount /dev/sdb /btrfs
[root@lxc ~]# chown oracle:dba /btrfs
[root@lxc ~]# blkid /dev/sdb
/dev/sdb: UUID="3f6f7b51-7662-4d81-9a29-195e167e54ff" UUID_SUB="1d79e0d0-933d-4c65-9939-9614375da5e1" TYPE="btrfs"

Retrieve the UUID and put it into the fstab:

[root@lxc ~]# cat >> /etc/fstab << EOF
UUID=3f6f7b51-7662-4d81-9a29-195e167e54ff /btrfs btrfs defaults 0 0
EOF
Step 4: Add a btrfs file system into the virtual machine (This file system will receive the linux containers). To do so, add a disk to your virtualbox machine created in step 1, start the machine and launch the fs creation:
[root@lxc ~]# mkfs.btrfs /dev/sdc
[root@lxc ~]# mkdir /container
[root@lxc ~]# mount /dev/sdc /container
[root@lxc ~]# blkid /dev/sdc
/dev/sdc: UUID="8a565bfd-2deb-4d02-bd91-a81c4cc9eb54" UUID_SUB="44cb0a14-afc5-48eb-bc60-4c24b9b02ab1" TYPE="btrfs"

Retrieve the UUID and put it into the fstab:

[root@lxc ~]# cat >> /etc/fstab << EOF
UUID=8a565bfd-2deb-4d02-bd91-a81c4cc9eb54 /container btrfs defaults 0 0
EOF
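For reference, the extra disks used in steps 3 and 4 can also be created and attached from the host with VBoxManage (a sketch; the VM name lxc is the one from step 1, while the controller name "SATA" and the vdi file name are assumptions to adapt to your setup):

VBoxManage createhd --filename btrfs_disk1.vdi --size 51200
VBoxManage storageattach lxc --storagectl "SATA" --port 1 --device 0 --type hdd --medium btrfs_disk1.vdi
# repeat with another .vdi file and port for the second disk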
Step 5: Create btrfs subvolumes for the database software and databases.
[root@lxc ~]# btrfs subvolume create /btrfs/u01
Create subvolume '/btrfs/u01'
[root@lxc ~]# btrfs subvolume create /btrfs/databases
Create subvolume '/btrfs/databases'
[root@lxc ~]# chown oracle:dba /btrfs/u01
[root@lxc ~]# chown oracle:dba /btrfs/databases
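You can check that the subvolumes are in place with (a sketch):

[root@lxc ~]# btrfs subvolume list /btrfs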
Step 6: add the hostname into /etc/hosts
[root@lxc btrfs]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 lxc
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
and install the 12cR1 database software with:
Oracle Base: /btrfs/u01
Software location: /btrfs/u01/product/12.1.0/dbhome_1
Inventory directory: /btrfs/u01/oraInventory
oraInventory Group Name: dba
Step 7: Create a simple database with datafiles, redologs and controlfile located in the /btrfs/databases folder.
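For example, such a database can be created silently with dbca along these lines (a sketch; BDTDB is the SID used later in this post, the passwords are placeholders, and you may prefer to use the dbca GUI and simply point the storage locations to /btrfs/databases):

dbca -silent -createDatabase -templateName General_Purpose.dbc \
 -gdbname BDTDB -sid BDTDB \
 -sysPassword change_me -systemPassword change_me \
 -datafileDestination /btrfs/databases -storageType FS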
Step 8: Create a linux container (using oracle template) that will be the source of all our new containers.
lxc-create --name cont_source -B btrfs --template oracle -- --url http://public-yum.oracle.com -R 6.latest -r "perl sudo oracle-rdbms-server-12cR1-preinstall"
Here we are: we are now ready to clone all of this into a new linux container in seconds without any additional disk usage.
CLONING PHASE
First, let’s take a picture of the current disk usage:
[root@lxc ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_lxc-lv_root   45G  3.1G   40G   8% /
tmpfs                       2.0G     0  2.0G   0% /dev/shm
/dev/sda1                   477M   55M  398M  13% /boot
/dev/sdb                     50G  6.4G   42G  14% /btrfs
/dev/sdc                     50G  1.1G   48G   3% /container
Clone step 1: Add the following to /etc/security/limits.conf (if not, you won't be able to su - oracle inside the linux containers)
* soft nofile 1024
* hard nofile 65536
* soft nproc 2047
* hard nproc 16384
* soft stack 10240
* hard stack 32768
and reboot the virtual machine created in step 1.
Clone step 2: clone the linux container created during step 8 to a new one named for example dbhost1.
[root@lxc oradata]# time lxc-clone -s -t btrfs -o cont_source -n dbhost1
Tweaking configuration
Copying rootfs...
Create a snapshot of '/container/cont_source/rootfs' in '/container/dbhost1/rootfs'
Updating rootfs...
'dbhost1' created

real    0m0.716s
user    0m0.023s
sys     0m0.029s
Clone step 3: clone the database software.
[root@lxc oradata]# time btrfs su snapshot /btrfs/u01 /btrfs/u01_dbhost1
Create a snapshot of '/btrfs/u01' in '/btrfs/u01_dbhost1'

real    0m0.038s
user    0m0.000s
sys     0m0.006s
Clone step 4: clone the database (after a shutdown immediate).
[root@lxc oradata]# time btrfs su snapshot /btrfs/databases /btrfs/databases_dbhost1
Create a snapshot of '/btrfs/databases' in '/btrfs/databases_dbhost1'

real    0m0.041s
user    0m0.002s
sys     0m0.006s
Clone step 5: Link the new container to this database software and database clones. Edit /container/dbhost1/config and put:
lxc.mount.entry=/btrfs/u01_dbhost1 /container/dbhost1/rootfs/btrfs/u01 none rw,bind 0 0
lxc.mount.entry=/btrfs/databases_dbhost1 /container/dbhost1/rootfs/btrfs/databases none rw,bind 0 0
Clone step 6: Copy dbhome, oraenv and coraenv and start the new container dbhost1
[root@lxc ~]# cp -p /usr/local/bin/coraenv /usr/local/bin/dbhome /usr/local/bin/oraenv /container/dbhost1/rootfs/usr/local/bin
[root@lxc oradata]# mkdir -p /container/dbhost1/rootfs/btrfs/u01
[root@lxc oradata]# mkdir -p /container/dbhost1/rootfs/btrfs/databases
[root@lxc oradata]# lxc-start -n dbhost1
Clone step 7: connect to the new container (default password for root is root), create the oratab, and start the database.
[root@lxc ~]# lxc-console -n dbhost1

[root@dbhost1 ~]# su - oracle
[oracle@dbhost1 dbs]$ . oraenv
ORACLE_SID = [BDTDB] ?
The Oracle base remains unchanged with value /btrfs/u01
[oracle@dbhost1 dbs]$ echo "startup" | sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 25 08:01:52 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 1570009088 bytes
Fixed Size                  2288776 bytes
Variable Size             905970552 bytes
Database Buffers          654311424 bytes
Redo Buffers                7438336 bytes
Database mounted.
Database opened.
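Not shown in the listing above: the oratab entry created before running oraenv. A minimal sketch (assuming the ORACLE_HOME chosen during the installation step) is:

[root@dbhost1 ~]# echo "BDTDB:/btrfs/u01/product/12.1.0/dbhome_1:N" >> /etc/oratab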
Check the disk usage on our virtual machine created in step 1:
[root@lxc ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_lxc-lv_root   45G  3.1G   40G   8% /
tmpfs                       2.0G     0  2.0G   0% /dev/shm
/dev/sda1                   477M   55M  398M  13% /boot
/dev/sdb                     50G  6.4G   42G  14% /btrfs
/dev/sdc                     50G  1.1G   48G   3% /container
Et voila 😉 we created a new linux container, a new database home and a new database with initially no additional disk space.
Remarks:
Conclusion:
We can create a new container, a new ORACLE_HOME and a new database in a few seconds, reducing resource consumption (especially disk) on our laptop.
Again, I would not suggest using all of this on a “real” system, but for a test environment on a laptop it sounds good to me.
I hope you will save some disk space on your laptop thanks to this ;-).