
Sharding Database Infrastructure Horizontally: Sharding in Oracle 12.2 - Part 3 [Testing Oracle Sharding]

By Nassyam Basha, Data Intensity | Edited by Simon Pane

This article is the final installment in a three-part series on the topic of sharding. Part one provided background and reviewed in depth the architecture of Oracle Sharding and the various technical components. Part two focused on prerequisites and the initial deployment steps.

In this final article, we will walk through the remainder of the setup – post the initial deployment. The main purpose of this article is to test and demonstrate that the data is indeed being distributed across the shards. We will first create a sharded table and then see how the “chunks” will be sharded across all the shard databases. And finally, we will review the various logs which are useful for troubleshooting.

Post Deployment Verification Checks of the Oracle Sharding Configuration

Services Configuration

From GDSCTL we can see the list of the databases associated with the sharding configuration. We will create two additional services: one named FINRW for the primary database and another named PSFIN_NVISION for the standby database. The latter will be used for reporting purposes.

GDSCTL>databases
Database: "sh1" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: canada
Alert: Data Guard observer is not running.
Registered instances:
shcat%1
Database: "sh2" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: canada
Alert: Data Guard observer is not running.
Registered instances:
shcat%11
Database: "sh3" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: india
Registered instances:
shcat%21
Database: "sh4" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: india
Registered instances:
shcat%31

GDSCTL>

 

When creating services, we must specify which database role each service belongs to. Various options are available when adding a service, such as failover behavior and preferred instances. After adding the services, we need to start them manually.

GDSCTL>add service -service FINRW -preferred_all -role primary
The operation completed successfully
GDSCTL>add service -service PSFIN_NVISION -preferred_all -role physical_standby -failover_primary
The operation completed successfully
GDSCTL>start service -service FINRW, PSFIN_NVISION
The operation completed successfully
GDSCTL>services
Service "finrw.shcat.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
Instance "shcat%1", name: "sh1", db: "sh1", region: "canada", status: ready.
Instance "shcat%11", name: "sh2", db: "sh2", region: "canada", status: ready.
Service "psfin_nvision.shcat.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
Instance "shcat%21", name: "sh3", db: "sh3", region: "india", status: ready.
Instance "shcat%31", name: "sh4", db: "sh4", region: "india", status: ready.

GDSCTL>

Disable FSFO-Observer (optional)

In part two of this article series, we completed all of the required setup up to the point of deployment of sharding. One important point to keep in mind is that in this case, we are deploying Sharding with Data Guard. (Sharding by default deploys with the Data Guard Broker and Fast Start Failover (FSFO) configuration.)

If this shard deployment is for testing purposes only, or if FSFO is not required, then we can stop the observer and disable FSFO. Keep in mind that doing so means that if the primary database becomes inaccessible or unresponsive, the observer will not initiate a failover to the standby, and we will have to perform additional manual steps to reinstate the shards and re-establish the configuration.

Hence based on the business requirement, we can disable or enable FSFO.

The following steps show how to disable the FSFO.

DGMGRL> show configuration

Configuration - sh1

  Protection Mode: MaxPerformance
Members:
sh1 - Primary database
sh3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 44 seconds ago)

DGMGRL> show fast_start failover

Fast-Start Failover: ENABLED

  Threshold:          30 seconds
Target:             sh4
Observer:           ORA-C2.localdomain
Lag Limit:          30 seconds
Shutdown Primary:   TRUE
Auto-reinstate:     TRUE
Observer Reconnect: (none)
Observer Override:  FALSE

Configurable Failover Conditions
Health Conditions:
Corrupted Controlfile          YES
Corrupted Dictionary           YES
Inaccessible Logfile            NO
Stuck Archiver                  NO
Datafile Write Errors          YES

  Oracle Error Conditions:
(none)

DGMGRL>
DGMGRL> disable fast_start failover force
Disabled.
DGMGRL>  show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:          30 seconds
Target:             (none)
Observer:           ORA-C2.localdomain
Lag Limit:          30 seconds
Shutdown Primary:   TRUE
Auto-reinstate:     TRUE
Observer Reconnect: (none)
Observer Override:  FALSE

Configurable Failover Conditions
Health Conditions:
Corrupted Controlfile          YES
Corrupted Dictionary           YES
Inaccessible Logfile            NO
Stuck Archiver                  NO
Datafile Write Errors          YES

  Oracle Error Conditions:
(none)

DGMGRL>
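
If FSFO is required again later, it can simply be re-enabled from DGMGRL and the observer restarted. A minimal sketch (not part of this deployment; the observer should be started from a host that can reach both the primary and the standby):

DGMGRL> enable fast_start failover
DGMGRL> start observer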

Prepare User and Create Sharded Tables

In a traditional database, we would create tables using the normal syntax, but in a sharding configuration we have to create sharded tables using new DDL syntax to realize the benefit of the sharding feature. Oracle will distribute the rows across all the available shards (in sharding terminology, the "chunks" will be allocated to all the shards).

This brings about an interesting question: Where will the Sharded table be created?

A) The Sharded catalog database
B) The various Shards – sh1, sh2?

Answer: The sharded table is created only in the shard catalog database, but the actual data ("chunks") will be distributed to the shard databases as per our configuration.

So we will create the sharded tables in the shard catalog database only. Note that (as detailed in the previous articles in this series) the catalog database should have been created by DBCA using Oracle Managed Files (OMF). If it is not already configured with OMF, then it is mandatory to set the required initialization parameter before creating sharded tables.

SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata';

After setting the db_create_file_dest parameter, we can create the tablespace (using the new “CREATE TABLESPACE SET” command for creating a sharded tablespace set), user, and grant privileges as required.

Notice that we need to start by altering our session using a new sharding-specific command. Enabling shard DDL means that the DDL statements we issue will be applied to the shard catalog database and propagated to all shards.

 

-bash-4.2$ . oraenv
ORACLE_SID = [gsm] ? shcat
The Oracle base has been changed from /home/app/oracle to /u01/app/oracle
-bash-4.2$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Sun Jul 23 08:31:19 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

SQL> alter session enable shard ddl;

Session altered.

SQL> create user sysadm identified by sysadm;

User created.

SQL> grant connect, resource, alter session to sysadm;

Grant succeeded.

SQL> grant execute on dbms_crypto to sysadm;

Grant succeeded.

SQL> grant create table, create procedure, create tablespace, create materialized view to sysadm;

Grant succeeded.

SQL> grant unlimited tablespace to sysadm;

Grant succeeded.

SQL> grant select_catalog_role to sysadm;

Grant succeeded.

SQL> grant all privileges to sysadm;

Grant succeeded.

SQL> grant gsmadmin_role to sysadm;

Grant succeeded.

SQL> grant dba to sysadm;

Grant succeeded.

SQL> CREATE TABLESPACE SET PSDATA using template (datafile size 100m autoextend on next 10M maxsize unlimited extent management local segment space management auto );

Tablespace created.

SQL> CREATE TABLESPACE PSDATADUP datafile size 100m autoextend on next 10M maxsize unlimited extent management local uniform size 1m;

Tablespace created.

Now that the prerequisites for sharded table creation are complete, we can create sharded tables:

SQL> connect sysadm/sysadm
Connected.
SQL> show user
USER is "sysadm"
SQL> alter session enable shard ddl;

Session altered.

SQL> CREATE SHARDED TABLE student_info
(
studId VARCHAR2(60) NOT NULL,
FirstName VARCHAR2(60),
LastName VARCHAR2(60),
Class VARCHAR2(10),
password RAW(20),
StudProfile VARCHAR2(40),
CONSTRAINT pk_student_info PRIMARY KEY (studid),
CONSTRAINT json_student_info CHECK (studProfile IS JSON)
) TABLESPACE SET PSDATA
PARTITION BY CONSISTENT HASH (studId) PARTITIONS AUTO;

Table created.

SQL> CREATE SHARDED TABLE exam_info
(
Hallticket_no INTEGER NOT NULL,
studId VARCHAR2(60) NOT NULL,
resultsdate TIMESTAMP NOT NULL,
SumTotal NUMBER(19,4),
Status CHAR(4),
constraint pk_exam_info primary key (studId, hallticket_no),
constraint fk_exam_info foreign key (studId)
references student_info on delete cascade
) partition by reference (fk_exam_info)
TABLESPACE PSDATA;

Table created.

SQL> CREATE SEQUENCE exam_info_seq;

Sequence created.

SQL> CREATE SHARDED TABLE schooldata
(
Hallticket_no INTEGER NOT NULL,
studId VARCHAR2(60) NOT NULL,
Reg_no INTEGER NOT NULL,
constraint pk_data primary key (studId, Hallticket_no, Reg_no),
constraint fk_data_main foreign key (studId, Hallticket_no)
references exam_info on delete cascade
) partition by reference (fk_data_main)
TABLESPACE PSDATA;

Table created.

SQL>

"Duplicated" tables are also created in the shard catalog. Unlike sharded tables, where only a subset of the table's rows is stored on any given shard database, the entire contents of a duplicated table are present on every shard database.

SQL> CREATE DUPLICATED TABLE studdup
(
reg_no INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
summary VARCHAR2(128)
)  TABLESPACE PSDATADUP;

Table created.

SQL>
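
On each shard, a duplicated table is maintained as a materialized view refreshed from the master copy held in the shard catalog (the show ddl output later in this article reflects the generated CREATE MATERIALIZED VIEW statement). As an illustrative check, the following can be run while connected directly to one of the shards as the table owner:

SQL> select mview_name, refresh_mode, refresh_method from user_mviews;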

Other non-table objects are also created in the Shard Catalog database only and are automatically propagated out to the Shard Databases:

SQL> CREATE OR REPLACE FUNCTION PasswCreate(password IN RAW)
RETURN RAW
IS
Salt RAW(8);
BEGIN
Salt := DBMS_CRYPTO.RANDOMBYTES(8);
RETURN UTL_RAW.CONCAT(Salt, DBMS_CRYPTO.HASH(UTL_RAW.CONCAT(Salt, password), DBMS_CRYPTO.HASH_SH256));
END;
/

Function created.

SQL> CREATE OR REPLACE FUNCTION PasswCheck(password IN RAW, PHASH IN RAW)
RETURN INTEGER IS
BEGIN
RETURN UTL_RAW.COMPARE(
DBMS_CRYPTO.HASH(UTL_RAW.CONCAT(UTL_RAW.SUBSTR(PHASH, 1, 8), password), DBMS_CRYPTO.HASH_SH256),
UTL_RAW.SUBSTR(PHASH, 9));
END;
/

Function created.

SQL>

Testing the Sharded Table

To review the shard implementation, we can run some basic queries and commands from the shard catalog:

SQL> select tablespace_name,bytes/1024/1024,user_bytes/1024/1024,maxbytes/1024/1024,autoextensible from dba_data_files where tablespace_name in ('PSDATA','PSDATADUP');

TABLESPACE_NAME                BYTES/1024/1024 USER_BYTES/1024/1024 MAXBYTES/1024/1024 AUT
------------------------------ --------------- -------------------- ------------------ ---
PSDATA                                     100             93.96875           33554432 YES
PSDATADUP                                  100                   99         32767.9844 YES
SQL>
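
Behind the scenes, the tablespace set is expanded on each shard into individual tablespaces, one per chunk hosted on that shard. As an illustrative check, a query such as the following can be run while connected to one of the shard databases:

SQL> select tablespace_name from dba_tablespaces where tablespace_name like '%PSDATA%';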

-bash-4.2$ gdsctl
GDSCTL: Version 12.2.0.1.0 - Production on Sun Jul 23 10:00:37 IST 2017

Copyright (c) 2011, 2016, Oracle.  All rights reserved.

Welcome to GDSCTL, type "help" for information.

Current GSM is set to SHARDDIR_DG
GDSCTL>show ddl
Catalog connection is established

id      DDL Text                                 Failed shards
--      --------                                 -------------
26      CREATE SHARDED TABLE student_info ( s…
27      CREATE SHARDED TABLE exam_info ( Hal…
28      CREATE SEQUENCE exam_info_Seq
29      CREATE SHARDED TABLE schooldata ( Hal…
30      CREATE MATERIALIZED VIEW "psfinadm"."ST…
31      CREATE OR REPLACE FUNCTION PasswCreat…
32      CREATE OR REPLACE FUNCTION PasswCheck…
33      CREATE OR REPLACE FUNCTION PasswCheck…

GDSCTL>

 

The objective of sharding is to have the chunks evenly distributed among the shard databases. We can verify that this has indeed been accomplished using a simple query:

SQL> select a.name Shard, count( b.chunk_number) Number_of_Chunks from gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b where a.database_num=b.database_num group by a.name;

SHARD                          NUMBER_OF_CHUNKS
—————————— —————-
sh1                                           6
sh2                                           6
sh3                                           6
sh4                                           6

SQL>  select table_name,partition_name from user_tab_partitions;

TABLE_NAME                     PARTITION_NAME
------------------------------ --------------------------------------------------
EXAM_INFO                      STUDENT_INFO_P1
SCHOOLDATA                     STUDENT_INFO_P1
STUDENT_INFO                   STUDENT_INFO_P1

From the output above we can see a successful shard implementation where the chunks are indeed evenly distributed among the shard databases.
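
As a further illustrative smoke test, once some application rows have been inserted into student_info, the row counts can be compared by connecting to each shard database in turn; the totals will differ per shard according to the consistent hash of the studId sharding key:

SQL> select count(*) from sysadm.student_info;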

Connectivity Using Easy Connect

To run cross-shard queries, we should connect to the "GDS$CATALOG" service, which runs on the shard catalog database. We connect using the Easy Connect (EZConnect) format. (Ensure the service "GDS$CATALOG" is registered with the listener.)

Services Summary...
Service "GDS$CATALOG.oradbcloud" has 1 instance(s).
Instance "shcat", status READY, has 1 handler(s) for this service...
Service "shcat" has 2 instance(s).

-bash-4.2$ sqlplus sysadm/sysadm@ora-ccrep.localdomain:1525/GDS\$CATALOG.oradbcloud

SQL*Plus: Release 12.2.0.1.0 Production on Sun Oct 22 15:37:30 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Last Successful login time: Sun Oct 22 2017 15:37:20 +05:30
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      shcat
SQL>
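
Once connected through the GDS$CATALOG service, multi-shard queries can be issued directly and the coordinator aggregates the results from all shards. An illustrative example against the schema created earlier:

SQL> select class, count(*) from student_info group by class;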

 

Question: Is it possible to have more shards in the primary site and fewer shards in the standby site?

Answer: No, you should have the same number of shards on both the primary and standby sites.
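
As a quick sanity check of that symmetry, the shardgroup and shard layout can be listed from GDSCTL (output omitted; the shardgroup names are the ones used throughout this series):

GDSCTL>config shardgroup
GDSCTL>config shard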

Finally, we can see new sharding-specific SQL operations in execution plans. For example:

 

SQL> select * from schooldata;

Execution Plan
----------------------------------------------------------
Plan hash value: 2953441084

------------------------------------------------------------------
| Id  | Operation        | Name | Cost (%CPU)| Inst   | IN-OUT |
------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     0   (0)|        |        |
|   1 |  SHARD ITERATOR  |      |            |        |        |
|   2 |   REMOTE         |      |            | ORA_S~ | R->S   |
------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------

   2 - EXPLAIN PLAN SET STATEMENT_ID='PLUS330062' INTO PLAN_TABLE@! FOR
       SELECT "A1"."HALLTICKET_NO","A1"."STUDID","A1"."REG_NO" FROM
       "SCHOOLDATA" "A1" /* coord_sql_id=bj6cbuthszkpb */  (accessing
       'ORA_SHARD_POOL@ORA_MULTI_TARGET' )
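
For reference, a plan like the one above can be generated from SQL*Plus using autotrace (an illustrative approach, assuming a PLAN_TABLE is available to the user):

SQL> set autotrace traceonly explain
SQL> select * from schooldata;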

Troubleshooting (Using the Log Files)

It can sometimes seem challenging for DBAs to troubleshoot and fix issues with sharding configurations. To properly diagnose problems, we have to look into various log files depending on the issue we are facing. It may be necessary to look at sharding-related logs, database logs, broker logs, etc.

Repository GSM Log

If we encounter any issues with the Global Service Manager (GSM), there are dedicated log and trace files for the GSM on each shard director. The log and trace file locations can be shown using the "status gsm" command from GDSCTL, and from the alert_gsm.log we can see session and service details:

/home/app/oracle/diag/gsm/ora-ccrep/sharddir_dg/trace
-bash-4.2$ tail -10f alert_gsm.log
22-OCT-2017 15:13:34 * service_update * shcat%11 * 0
22-OCT-2017 15:13:35 * service_update * shcat%21 * 0
2017-10-22T15:13:59.366867+05:30
22-OCT-2017 15:13:59 * service_update * shcat%31 * 0
2017-10-22T15:14:00.383964+05:30
22-OCT-2017 15:14:00 * service_update * SHARDDIR_DG * 0
2017-10-22T15:14:04.409469+05:30
22-OCT-2017 15:14:04 * service_update * shcat%1 * 0
22-OCT-2017 15:14:04 * service_update * shcat%11 * 0
22-OCT-2017 15:14:05 * service_update * shcat%21 * 0

GDSCTL>status gsm
Alias                     SHARDDIR_DG
Version                   12.2.0.1.0
Start Date                17-OCT-2017 08:35:39
Trace Level               off
Listener Log File         /home/app/oracle/diag/gsm/ora-ccrep/sharddir_dg/alert/log.xml
Listener Trace File       /home/app/oracle/diag/gsm/ora-ccrep/sharddir_dg/trace/ora_8131_140327754584448.trc
Endpoint summary          (ADDRESS=(HOST=ora-ccrep.localdomain)(PORT=12121)(PROTOCOL=tcp))
GSMOCI Version            2.2.1
Mastership                Y
Connected to GDS catalog  Y
Process Id                8134
Number of reconnections   0
Pending tasks.     Total  0
Tasks in  process. Total  0
Regional Mastership       TRUE
Total messages published  8795
Time Zone                 +05:30
Orphaned Buddy Regions:
None
GDS region                europe

GDSCTL>

Deployment Logs

If there are any issues when deploying the shard configuration, we will unfortunately only see one or two lines of output at the command line. Therefore, to get additional detail on any issues encountered, we may have to review the log files to find the actual cause.

The sharding deployment proceeds serially from node to node. If the deployment of the primary shard sh1 fails for some reason, there will be minimal information visible at the command line. Hence, to diagnose the failure we will have to review the deployment log, which can be found under "$ORACLE_BASE/cfgtoollogs/dbca/<shardname>", for example:
/u01/app/oracle/cfgtoollogs/dbca/sh1/trace.log_<timestamp>

[Thread-96] [ 2017-07-23 06:30:29.460 NZST ] [StepErrorHandler.setIgnorableErrors:267]  setting Ignorable Error: ORA-06510
[Thread-96] [ 2017-07-23 06:30:29.461 NZST ] [StepErrorHandler.setIgnorableErrors:267]  setting Ignorable Error: ORA-01403
[Thread-96] [ 2017-07-23 06:30:29.461 NZST ] [BasicStep.configureSettings:383]  messageHandler being set=null
[Thread-96] [ 2017-07-23 06:30:29.570 NZST ] [SQLEngine.setSpool:2084]  old Spool  = null
[Thread-96] [ 2017-07-23 06:30:29.570 NZST ] [SQLEngine.setSpool:2085]  Setting Spool  = /u01/app/oracle/cfgtoollogs/dbca/sh1/cloneDBCreation.log
[Thread-96] [ 2017-07-23 06:30:29.571 NZST ] [SQLEngine.setSpool:2086]  Is spool appendable? --> true
[Thread-96] [ 2017-07-23 06:30:29.572 NZST ] [CloneDBCreationStep.executeImpl:382]   size of datfiles in create db 4
[Thread-96] [ 2017-07-23 06:30:29.573 NZST ] [CloneDBCreationStep.executeImpl:408]  a.createCtlFileSql = Create controlfile reuse set database "sh1"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'/u01/app/oracle/oradata/sh1/sysaux01.dbf',
'/u01/app/oracle/oradata/sh1/system01.dbf',
[Thread-96] [ 2017-07-23 06:30:29.573 NZST ] [CloneDBCreationStep.executeImpl:412]  b.createCtlFileSql = Create controlfile reuse set database "sh1"

We can also see the associated files that are part of the deployment:

[oracle@ORA-C1 sh1]$ pwd
/u01/app/oracle/cfgtoollogs/dbca/sh1
[oracle@ORA-C1 sh1]$ ls -ltr
total 19008
-rw-r-----. 1 oracle oinstall        0 Jul 23 06:28 rmanUtil
-rw-r-----. 1 oracle oinstall 18726912 Jul 23 06:30 tempControl.ctl
-rw-r-----. 1 oracle oinstall      379 Jul 23 06:30 CloneRmanRestore.log
-rw-r-----. 1 oracle oinstall     2822 Jul 23 06:31 cloneDBCreation.log
-rw-r-----. 1 oracle oinstall        8 Jul 23 06:31 postScripts.log
-rw-r-----. 1 oracle oinstall        0 Jul 23 06:32 lockAccount.log
-rw-r-----. 1 oracle oinstall     1274 Jul 23 06:33 postDBCreation.log
-rw-r-----. 1 oracle oinstall      334 Jul 23 06:34 customScripts.log
-rw-r-----. 1 oracle oinstall      915 Jul 23 06:34 sh1.log
-rw-r-----. 1 oracle oinstall   706980 Oct 22 20:21 trace.log_2017-07-23_06-28-03-AM
[oracle@ORA-C1 sh1]$

Another key log file is "<shardname>.log", which records the overall status and progress of the shard deployment:

[oracle@ORA-C1 sh1]$ cat sh1.log
[ 2017-07-23 06:28:14.295 NZST ] Copying database files
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 2%
DBCA_PROGRESS : 16%
DBCA_PROGRESS : 30%
[ 2017-07-23 06:30:29.457 NZST ] Creating and starting Oracle instance
DBCA_PROGRESS : 32%
DBCA_PROGRESS : 36%
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 44%
DBCA_PROGRESS : 45%
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 50%
[ 2017-07-23 06:32:00.969 NZST ] Completing Database Creation
DBCA_PROGRESS : 51%
DBCA_PROGRESS : 52%
DBCA_PROGRESS : 53%
DBCA_PROGRESS : 56%
DBCA_PROGRESS : 59%
DBCA_PROGRESS : 60%
[ 2017-07-23 06:33:13.537 NZST ] Executing Post Configuration Actions
DBCA_PROGRESS : 90%
[ 2017-07-23 06:33:13.539 NZST ] Running Custom Scripts
DBCA_PROGRESS : 100%
[ 2017-07-23 06:34:37.047 NZST ] Database creation complete. For details check the logfiles at:
/u01/app/oracle/cfgtoollogs/dbca/sh1.
Database Information:
Global Database Name:sh1.localdomain
System Identifier(SID):sh1
[oracle@ORA-C1 sh1]$

Troubleshooting Using GDSCTL 

Using GDSCTL we can perform some troubleshooting, such as checking the status of the databases and services. Two GDSCTL capabilities are especially useful in a sharding configuration: validation and DDL tracking.

Validate Using GDSCTL

The entire configuration can be validated from GDSCTL with a single command. It can validate regions, databases, shard directors, DDL operations, etc. The example below shows errors reported (in this case because the listeners were down):

GDSCTL>validate
Validation results:

Catalog connection is established
VLD2: Region "europe" does not have buddy region
VLD2: Region "canada" does not have buddy region
VLD2: Region "india" does not have buddy region
VLD9: Region "europe" does not contain any databases
VLD10: Region "canada" does not contain any GSMs
VLD10: Region "india" does not contain any GSMs
VLD49: Database sh2: last applied DDL number is wrong. On Shard: 9, in catalog: 48
VLD49: Database sh1: last applied DDL number is wrong. On Shard: 9, in catalog: 48
VLD49: Database sh4: last applied DDL number is wrong. On Shard: 9, in catalog: 48
VLD49: Database sh3: last applied DDL number is wrong. On Shard: 9, in catalog: 48
VLD24: GSM "SHARDDIR_DG" is not connected to any GSM from GDS region "canada"
VLD24: GSM "SHARDDIR_DG" is not connected to any GSM from GDS region "india"

Total errors: 12.
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 primary_canada_shg  Ok        DDL error   canada    ONLINE
sh2                 primary_canada_shg  Ok        DDL error   canada    ONLINE
sh3                 standby_india_shg   Ok        Deployed    india     READ ONLY
sh4                 standby_india_shg   Ok        Deployed    india     READ ONLY

GDSCTL>
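
When a shard is left in a "DDL error" state like this, the failed DDL can be re-applied to that shard once the underlying problem (here, the listeners being down) has been resolved. An illustrative example using this configuration's shard names:

GDSCTL>recover shard -shard sh1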

DDL Tracking from GSM

Tracking all of the DDL changes issued to the shards is easy using GSM; effectively, every shard DDL operation it performs is recorded. Hence we can review the DDL commands issued and, at the same time, check whether any of them failed on a particular shard.

GDSCTL>show ddl

id      DDL Text                                 Failed shards
--      --------                                 -------------
39      CREATE TABLESPACE SET PSFINDATA using…
40      CREATE TABLESPACE PSFINDUP datafile s…
41      CREATE SHARDED TABLE student_info ( s…
42      CREATE SHARDED TABLE exam_info ( Hall…
43      CREATE SHARDED TABLE schooldata ( Hal…
44      CREATE MATERIALIZED VIEW “SYSADM”.”ST…
45      CREATE OR REPLACE FUNCTION PasswCreat…
46      CREATE OR REPLACE FUNCTION PasswCheck…
47      create tablespace test
48      purge recyclebin

GDSCTL>

Summary

In this last sharding article, we've seen how we can monitor and check the sharding configuration. We also included a few optional steps to consider, such as disabling FSFO if it is not a business requirement. We performed actual testing of sharding by creating sharded tables and confirmed that the chunks were distributed properly across the shards.

Finally, we concluded with another critical topic: troubleshooting the GSM configuration and learning about the various logs we may have to go through when investigating issues.

About the Author

Nassyam Basha is a database administrator with around 10 years of experience as a production Oracle DBA, currently working as a Senior Principal Consultant at Data Intensity. He holds a master's degree in Computer Applications from the University of Madras. He is an Oracle 11g Certified Master and an Oracle ACE Director. He actively participates in Oracle-related forums such as OTN, where he has Super Hero status and acts as an OTN moderator, was awarded "Guru" status by Oracle Support, and has written numerous articles for OTN and Toad World. He maintains an Oracle technology-related blog, www.oracle-ckpt.com, and can be reached at nassyambasha@gmail.com.
