CUBRID Developer's Course
Author: Bomyung Oh, Team / Department: DBMS Development Lab
Author(2): Kyungsik Seo, Team / Department: DBMS Development Lab
Comparison of feature development speed with MySQL (timeline chart, 2001-2010)
CUBRID: releases R1.0 through R3.2 plus CUBRID Cluster, with features including Click Counter, Views, Triggers, Stored Procedure, AUTO_INCREMENT, Query Plan Cache, Query Result Cache, Replication, Partitioning, Hierarchical Query, HA Feature, CUBRID FBO, and SQL Compatibility.
MySQL: versions 3.23 through 5.5 plus MySQL Cluster, with features including Views, Triggers, Stored Procedures, AUTO_INCREMENT, Query Cache, Replication, Full Text Indexing, Partitioning, Event scheduler, and XML Functions.
Who is using CUBRID? Over 100,000 downloads.
1. Introduction to CUBRID: Overview and Architecture of CUBRID, Using CUBRID, Introduction to CUBRID HA
1.1 Overview and Architecture of CUBRID
What is CUBRID?
Introduction: CUBRID is a comprehensive open source relational database management system that is highly optimized for Web applications, particularly those with read-intensive transactions.
Korea: http://dev.naver.com/projects/cubrid/, http://www.cubrid.com/online_manual/cubrid_830/index.htm, http://www.cubrid.com, http://devcafe.nhncorp.com/g_cubrid
Global: http://www.cubrid.org/, http://wiki.cubrid.org/index.php/CUBRID_Manuals/cubrid_2008_R3.0_manual
YouTube: http://www.youtube.com/user/cubrid
CUBRID Architecture (Simplified)
A 3-tier structure that separates DB servers from brokers; Broker : DB Server = 1 : N is possible.
(Diagram: application clients such as Java apps, WAS instances, CUBRID Manager, and the Query Editor connect through DB interfaces (JDBC driver; Manager ports 8001/8002) to the middleware brokers (cub_broker, cub_cas, job queue; broker port 30000), which in turn connect to the DB servers (cub_master, cub_server; server port 1523) and their data volumes and log files.)
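A minimal sketch of what this separation means for an application (the driver class, URL format, and default dba account are taken from later sections of this course; the host, port, and database name are only examples):

import java.sql.Connection;
import java.sql.DriverManager;

public class BrokerConnectExample {
    public static void main(String[] args) throws Exception {
        // The application connects to a broker port (33000 in the JDBC examples later),
        // never directly to the DB server port; the broker relays the work to cub_server.
        Class.forName("cubrid.jdbc.driver.CUBRIDDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:CUBRID:localhost:33000:demodb:::", "dba", "");
        System.out.println("Connected through the broker: " + !con.isClosed());
        con.close();
    }
}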
CUBRID Architecture (Detailed)
(Diagram of the main components: CUBRID Manager GUI and CM Server; interfaces (ODBC, CCI, PHP, OLE DB, Python, Ruby, JDBC); the Broker with job queuing, monitoring, connection pooling, and logging; the client library / native C API with parser, object/schema/transaction/workspace/memory managers, query transform, query optimizer, plan generation, and a communication module; the server with transaction, log, lock, and query managers, access methods (B+Tree module, file manager, system catalog module), buffer and disk managers; admin utilities (create, delete, copy, rename, add volume, load/unload, backup/restore, compact/optimize, check/diag); and the file-based objects: data, index, and temp volumes plus the active and archive logs.)
CUBRID Process (Detailed)
(Diagram of the processes involved: JDBC driver / CCI library / API clients connect over TCP to cub_broker (configured by cubrid_broker.conf), which listens on a port, forks cub_cas processes, and passes connection descriptors through shared memory and a multi-threaded job queue; cub_cas links the dynamic shared library cubridcs.so and talks to cub_master / cub_server (configured by cubrid.conf) over Unix domain sockets; cub_server mounts the volume and log files for read/write and is registered in databases.txt; csql uses cubridcs.so and cub_admin uses cubridsa.so.)
1.2 Using CUBRID
Prerequisites for Installation
Download CUBRID: http://sourceforge.net/projects/cubrid
For Linux: check the supported platforms (uname -r, rpm -qa | grep glibc); install JRE version 1.5 or higher and set up the environment variables (for CUBRID Manager): http://java.sun.com/javase/downloads/index.jsp; create DB users (for multiple instances); install and launch CUBRID.
For Windows: check the supported platforms; install the Visual C++ 2008 redistributable package: http://www.microsoft.com/downloads/details.aspx?displaylang=ko&FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf; install and launch CUBRID.
CUBRID Installation and Starting CUBRID Service
How to install CUBRID and start the CUBRID Service in the Windows environment
Run the .exe file to start the installation wizard. For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/gs/gs_install_windows.htm
Starting the CUBRID Service from the CUBRID tray
How to install CUBRID and start CUBRID Service in the Linux environment
For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/gs/gs_install_linux.htm
Starting the CUBRID Service (the CUBRID-related processes must be started). For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/gs/gs_must_svcstart.htm
Install the package and start the CUBRID service with the following commands:
% sh CUBRID-8.3.0.0312-linux.x86_64.sh
% . /home1/cub_user/.cubrid.sh
% cubrid service start
DB Creation and DB Start
How to create a new DB and start it. For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/admin/admin_db_create_create.htm
Creating testdb and starting it with a command:
% cubrid createdb testdb
% cubrid server start testdb
Starting an existing DB (demodb is included in the CUBRID installation by default). For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/gs/gs_must_svcstart.htm
Starting demodb with a command:
% cubrid server start demodb
CUBRID Manager – Configuration: a Java-based GUI tool; JRE/JDK version 1.6 or higher is required.
CUBRID Manager is a tool used to control the functions of servers and brokers, and to monitor and analyze logs.
CUBRID Manager consists of the search pane on the left, the query edit pane on the right, the top menu, and the toolbar.
CUBRID Manager – start: start the CUBRID Server, start CUBRID Manager, enter the host connection information (default manager account ID: admin / PW: admin), enter the DB connection information (default DB account ID: dba / PW: no password), start the DB Server, and execute queries.
CUBRID Manager – stop: stop the DB Server, disconnect from the host, and stop CUBRID Manager.
1.3 Introduction to CUBRID HA
Introduction to CUBRID HA
Replication: no automatic fail-over, no automatic sync
HA: automatic fail-over, automatic sync
HA Configuration and Usage – DB Server Redundancy
(Diagram: AP/web servers connect through Broker #1 and Broker #2; when the active server node fails, the brokers fail over automatically to the standby server, which is kept up to date by replication; fail-back later restores the original roles.)
HA Configuration and Usage – Broker Redundancy
(Diagram: the JDBC driver / CCI library in the AP/web servers fails over automatically from Broker #1 to Broker #2 when a broker node fails, and fails back later; the brokers connect to the active and standby servers, which are kept in sync by replication.)
Diagram of HA Architecture (Detailed)
(Diagram: updates go to the A-Node (active server node) and selects can be served by the S1-Node (standby server node); copylogdb and applylogdb processes copy and apply the active and archive logs between the nodes; server roles are Active, Standby, or Replica, and replication can run in async, semi-sync, or sync mode; the replication log is included on the active side and not on the standby side.)
Configurations:
A-Node's log path: S1-Node's active & archive logs = $CUBRID_DATABASES/database-name_S1-Node-hostname (e.g. /home1/cubrid1/DB/tdb01_Snode1); copylogdb & applylogdb error logs = $CUBRID/log
S1-Node's log path: A-Node's active & archive logs = $CUBRID_DATABASES/database-name_A-Node-hostname (e.g. /home1/cubrid1/DB/tdb01_Anode1); copylogdb & applylogdb error logs = $CUBRID/log
A-node & S1-node's <cubrid.conf>: ha_mode=yes, ha_node_list=hagrpname@A-node:S1-node
A-node & S1-node's <cubrid-ha>: CUBRID_USER=username, DB_LIST='dbname'
Broker node's <databases.txt>: dbname vol_path A-node:S1-node log_path
2. CUBRID Architecture: CUBRID Volume Structure, CUBRID Parameters, Broker Parameters, Error Log File, System Catalog
2.1 CUBRID Volume Structure
CUBRID Volume Structure
*: A table is mapped to a CUBRID file.
**: A CUBRID file can be spread across multiple CUBRID volumes.
(Diagram: File_1, File_2, File_3, and Free_Pages laid out across the volumes.)
DB Volume Structure
DB Volume – Information Volume
Information Volumes:
Data volume
Saves the data of an application, such as tables or records
A record storage file, called heap, is created in a data volume
Index volume
A volume in which B+Tree indexes are saved for faster data access or queries
Temp volume
A volume in which intermediate results are saved to fetch result sets that exceed the size of the memory buffer, or to execute join queries
A temporary volume with an appropriate size must be created when creating a DB volume.
This is a permanent volume that is used for temporary purposes, and is different from temporary volumes that are used only temporarily.
Generic volume
The initial volume during DB creation, which can be used as the data, index, or temp volume.
If the usage of the volume (data, index, or temporary) is not specified, it can be used for general purposes.
DB Volume – Log Volume
Log Volumes:
The active log volume includes the most recent updates that have been applied to a database.
Records the status of a committed, aborted, or active transaction.
It is used to recover a DB from a storage media failure.
When the space allocated to an active log is completely used up, the content of the active log will be copied to and stored in a new log (archive log).
Example: demodb_lgat (active log), demodb_lgar* (archive log)
DB Volume – Control Volume
Control Information Volumes:
Volume Information
Includes the location information on DB volumes to be created or added
This file cannot be manually modified, deleted, or moved.
The name of the file is in {dbname}_vinf format.
Log Information
Records the information of the current logs and archive logs
Records the information on a new archive log file and unnecessary archive log file.
The name of the file is in {dbname}_lginf format.
Example {dbname}_vinf content:
-5 C:\CUBRID\databases\demodb\demodb_vinf
-4 C:\CUBRID\databases\demodb\demodb_lginf
-3 C:\CUBRID\databases\demodb\demodb_bkvinf
-2 C:\CUBRID\databases\demodb\demodb_lgat
 0 C:\CUBRID\databases\demodb\demodb
 1 C:\CUBRID\DATABA~1\demodb\demodb_x001
Example {dbname}_lginf content:
COMMENT: CUBRID/LogInfo for database /CUBRID/databases/demodb
ACTIVE: /CUBRID/databases/demodb_lgat 5000 pages
ARCHIVE: 0 /CUBRID/databases/demodb_lgar000 0 4997
COMMENT: Log archive /CUBRID/databases/demodb_lgar000 is not needed any longer unless a database media crash occurs.
DB Volume – Backup VolumeBackup Volume Information
Records the location and backup information of a backup volume
Located in the same path in which log files are stored.
The name of the file is in {dbname}_bkvinf format. Example:
0 0 /Backup/demodb_bk000    (level 0 full backup, first file)
0 1 /Backup/demodb_bk001    (level 0 full backup, second file)
1 0 /Backup/demodb_bk100    (level 1 incremental backup, first file)
2 0 /Backup/demodb_bk200    (level 2 incremental backup, first file)
Columns: backup level, sequence number of the backup volume per level, path of the backup file.
DB Volume – $CUBRID/conf/databases.txt
databases.txt:
Contains the name, path, and host name of each DB.
The information for a DB is recorded in databases.txt when the DB is created.
Saved to the path specified by the $CUBRID_DATABASES environment variable.
If it does not exist in the directory specified by the environment variable, the current directory will be used instead.
Caution
If a host name has been changed or a DB deleted by an OS command, this file must be modified as well.
As the user must be able to modify the databases.txt file during DB creation or deletion, the user must have write privilege on this file. If a user without the appropriate privilege attempts to create a DB, the DB creation will fail. For this reason, a DBA should enable the user-write privilege for the directory, or create a databases.txt file in the directory of each user and configure the environment variables. Example entry:
demodb   /CUBRID/databases/demodb   hostname   /CUBRID/databases/demodb
(columns: DB name, DB path, host name, DB log path)
DB Volume Management
An example of volume configuration (diagram): db1, db1_data, db1_index, db1_temp, db1_log, and db_backup spread across disk1, disk2, and disk3.
Distribute volumes according to usage to avoid disk bottlenecks.
Distributes data, index, temp, and log volume so that they are separated from each other
Avoids the disk bottlenecks and improves disk management
Distributes volumes that can be used simultaneously
data & log,  data & index,  data & temp
Configures a volume to an appropriate size to prevent it from adding more volumes while in service
Data, Index, Temp, Active Log: Page size and the number of pages must be considered
Backup: back up with the -r option, and then delete unnecessary archive logs
2.2 CUBRID Parameters ($CUBRID/conf/cubrid.conf)
Environment Configuration File – $CUBRID/conf/cubrid.conf
cubrid.conf:
A file in which the values of the CUBRID system parameters are saved.
The file is located in the $CUBRID/conf directory. Different values can be specified per database by adding a section for that DB, which overrides the common values (see the [demodb] example below).
There are two types of parameters: DB server parameters and DB client parameters. If a parameter has been changed in a process, that process must be restarted.
SQL is used to change a client parameter.
Syntax for configuring parameters
Case-insensitive
The name and value of a parameter must be inserted on the same line.
An equals sign (=) can be used, and blank characters may appear on either side of the sign.
If the value of a parameter is a string, insert the string without quotation marks; if the string contains a blank character, enclose it in quotation marks. Example:
[common]
data_buffer_pages=250000
[demodb]
data_buffer_pages=500000
Configuring with environment variables: higher in priority than the configuration in cubrid.conf
Add CUBRID_ at the beginning of the parameter to configure it as an environment variable
Configuring with an SQL statement
Only client parameters can be configured
Use ";" for multiple configurations. Examples:
set CUBRID_SORT_BUFFER_PAGE=512
SET SYSTEM PARAMETERS 'parameter_name=value [{; name=value}...]'
SET SYSTEM PARAMETERS 'csql_history_num=70'
SET SYSTEM PARAMETERS 'csql_history_num=70; index_scan_in_oid_order=1'
Memory Related Configurations
data_buffer_pages:
The number of data pages cached in memory by a DB server.
Requires memory equal to the number of data buffer pages times the database page size (the page size specified when the DB is initialized; default 4 KB). With the default value of 25,000 pages, about 100 MB of memory is required.
The actual size of a DB, the size of the memory, and the number and size of other processes must be considered when determining the size
The larger the value, the more data needs to be cached to the memory, which means less disk I/O. However, a value that is too large will cause the full swapping of page buffers.
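A minimal cubrid.conf sketch tying this together (the section names and values come from the examples in this section; the comment syntax and the 4 KB page size are assumptions):

[common]
data_buffer_pages=250000    # 250,000 pages x 4 KB = roughly 1 GB of data buffer memory
[demodb]
data_buffer_pages=500000    # per-database override, roughly 2 GB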

  • 122. Configure the number of buffer pages in which the OID list is to be temporarily stored when scanning indexes
  • 123. The default value is 4 (range 0.05~16).
    Memory Related Configurations – sort_buffer_pages:
  • 124. The number of pages used to process queries that require sorting.
  • 125. One sort buffer is allocated to each active client request.
  • 126. The allocated memory is released upon the completion of sorting.
  • 127. A value between 16 and 500 is recommended.
  • 129. Determines the number of buffer pages that cache the temporary results of a query
  • 130. The default value is 4, and the maximum value is 20.
    Log Related Configurations – checkpoint_interval_in_mins, checkpoint_interval_in_npages:
  • 131. Configures the interval of a checkpoint execution in min./page
  • 132. The larger the value, the more time it takes to recover a DB.
  • 134. Configures whether to keep an archive log in the event of a storage media failure
  • 135. If it is configured to the default value (yes), all active logs will be copied to and stored in an archive log when changes are made to a transaction while the active logs are full.
  • 136. Note that archive logs created while the active logs are full will be deleted if this value is set to no.
    On Concurrency Control and Locking – isolation_level:
  • 137. A parameter used to manage transaction concurrency
  • 138. It must be an integer from 1 to 6 or a character string (Default: 3)
  • 139. The larger the value of the parameter, the lower the concurrency
  • 140. SERIALIZABLE: Inaccessible until transaction is complete
  • 141. REPEATABLE: S_LOCK is maintained until the transaction is complete at SELECT
  • 142. READ UNCOMMITTED: Allows incomplete transactions to be read
  • 143. READ COMMITTED: Allows only committed data to be read.
    Configurations Related to Concurrency and Lock – deadlock_detection_interval_in_secs:
  • 144. Configures the interval, in seconds, of deadlock detection for stopped transactions.
  • 145. Resolves deadlock by rolling back one of the deadlocked transactions
  • 146. The default value is 1 second. Be sure not to set the interval to a large number, as doing so will allow deadlocks to remain undetected for that length of time.
  • 148. Converts to table lock if the number of row locks belonging to a table is greater than the specified value.
  • 149. The default value is 100,000.
  • 150. If this value is small, the table management overhead will be reduced, but the concurrency will be decreased.
  • 151. If this value is large, the table management overhead will be increased, but the concurrency will be improved.
  • 153. Specifies the waiting time of a lock
  • 154. If the lock has not been allowed within the specified period of time, the transaction is cancelled, and an error is returned.
  • 155. The default value is -1, in which case the wait time is unlimited. If it is 0, there is no wait time.
    Configurations Related to Query Caches – max_plan_cache_entries:
  • 156. Configures the maximum number of query plans to be cached to the memory (Default: 1,000)
  • 157. If this value is lower than 1, it will not work - it works only when the value is at least 1.
  • 158. Configures the hint so that query execution plans are created without using cache
  • 159. Use /*+ RECOMPILE */ in queries:
    select /*+ RECOMPILE */ * from record where …
  • 160. Configurations Related to Syntax and Type – block_ddl_statement:
  • 161. Limits Data Definition Language (also known as DDL) statements.
  • 162. The default value is no.
  • 164. When enabled, UPDATE/DELETE statements without a WHERE clause are not executed.
  • 165. The default value is no.
  • 167. When comparing strings, this setting makes the comparison byte by byte; when using Unicode (UTF-8), set it to yes.
  • 168. Default: no.
    Other Parameters – parameters related to communication services:
  • 172. If 1523 is already in use, the parameter must be changed to another port number.
  • 175. This number represents the maximum number of DB clients that can be connected to a DB server at the same time, which by extension also limits the total number of concurrent transactions. (Default value: 50)
  • 176. The actual number of concurrent users must be considered
  • 177. DB Server restart configuration
  • 179. Automatically restarts a DB server that has been stopped due to a failure
  • 180. The default value when restarting the DB is yes.
  • 181. In an HA configuration, the default value is no.
    Other Parameters – parameters related to transaction processing:
  • 183. Enables the asynchronous commit function (default value: no).
  • 184. Returns a commit to a client before the commit log is flushed to a disk
  • 185. When a failure occurs in a DB server, all commit transactions that have not been flushed to a disk will not be able to be recovered.
  • 187. Collects the commits that occur within the configured interval into a group and executes them together (no configuration is required by default).
  • 188. Improves performance by collecting commit logs and flushing them to disk together.
    2.3 Broker Parameters ($CUBRID/conf/cubrid_broker.conf)
  • 189. Broker Environment Configuration – $CUBRID/conf/cubrid_broker.conf
    Modifying the environment configuration:
  • 191. The file can be modified in an editor. Any changes made will be applied when the Broker restarts.
  • 192. To modify the configuration without a restart, use the following command:
  • 196. If a configuration name or its value is incorrect, an error will occur during the restart, which will prevent the restart.
    % broker_changer <br-name> <conf-name> <conf-value>
    % broker_changer broker1 sql_log on
    OK
  • 200. 2.4 Error Log File
    Locations: $CUBRID/log/, $CUBRID/log/server/, $CUBRID/log/broker/, $CUBRID/log/broker/sql_log, $CUBRID/log/broker/error_log, CURRENT_DIRECTORY, $HOME
  • 201. Broker Log File – Connection Log ($CUBRID/log/broker/)
    Checking the connection log:
  • 202. The connection log is a record of the time it takes for each CAS to process a request by Broker.
  • 203. This log is named "<broker name>.access" and resides in the directory specified by ACCESS_LOG in cubrid_broker.conf. Example:
    1 192.168.100.201 - - 1158198049.151 1158198049.246 2008/09/14 10:40:49 ~ 2008/09/14 10:40:49 29438 - -1
    2 192.168.100.201 - - 1158198049.401 1158198049.406 2008/09/14 10:40:49 ~ 2008/09/14 10:40:49 29438 - -1
  • 204. Broker Log File – Error Log ($CUBRID/log/broker/error_log)
    Checking the error log:
  • 205. Records information about errors that occur while processing requests from an application client into the broker_name_app_server_num.err file. Example:
    Time: 02/04/09 13:45:17.687 - SYNTAX ERROR *** ERROR CODE = -493, Tran = 1, EID = 38
    Syntax: Unknown class "unknown_tbl". select * from unknown_tbl
  • 206. Broker Log File – SQL Log ($CUBRID/log/broker/sql_log)
    SQL log:
  • 207. The SQL log file records the SQL that an application client requests, and is saved under the name "broker_name_app_server_num.sql.log". Example:
    02/04 13:45:17.687 (38) prepare 0 insert into unique_tbl values (1)
    02/04 13:45:17.687 (38) prepare srv_h_id 1
    02/04 13:45:17.687 (38) execute srv_h_id 1 insert into unique_tbl values (1)
    02/04 13:45:17.687 (38) execute error:-670 tuple 0 time 0.000, EID = 39
    02/04 13:45:17.687 (0) auto_rollback
    02/04 13:45:17.687 (0) auto_rollback 0
    *** 0.000
    02/04 13:45:17.687 (39) prepare 0 select * from unique_tbl
    02/04 13:45:17.687 (39) prepare srv_h_id 1 (PC)
    02/04 13:45:17.687 (39) execute srv_h_id 1 select * from unique_tbl
    02/04 13:45:17.687 (39) execute 0 tuple 1 time 0.000
    02/04 13:45:17.687 (0) auto_commit
    02/04 13:45:17.687 (0) auto_commit 0
    *** 0.000
    The timestamp at the start of each line is the time at which the application sent the request.
  • 208. (39) : The sequence number of the SQL statement group, for prepared statement pooling
  • 209. (PC) : Uses the content stored in the plan cache
  • 210. SELECT ... : the SQL statement to be executed; when statements are pooled, the binding variables of the WHERE clause are displayed as ?.
    execute 0 tuple 1 time 0.000 : one row is returned, taking 0.000 seconds.
    auto_commit / auto_rollback : the transaction is committed or rolled back automatically; the number after the second auto_commit/auto_rollback is an error code, where 0 means the transaction completed without an error.
  • 212. Catalog Information – provides access to schema information through SQL
  • 218. Important fields: class_name, attr_name, and attr_type
  • 219. Other
  • 226. db_auth
    Catalog Information – Checking Table Information
    Searching for table information in the catalog (db_class):
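    A minimal sketch of such catalog queries (the client table name is borrowed from the schema-management exercises later in this course; db_index can be queried the same way):
    select * from db_class;
    select * from db_attribute where class_name = 'client';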
  • 227. Searching for table information in the catalog (db_index)
    3. CUBRID SQL: Types, Operators, and Functions; Comparison of Major SQLs; Query Plans and Hints
  • 228. 3.1 Types, Operators, and Functions
  • 235. 3.2 Comparison of Major SQLs
  • 236. Cautions regarding CUBRID SQL: it does not support implicit type conversion, cannot process quotation marks in numeric data, and does not support character sets.
  • 237. Saves and displays the character set configured in an application as it is.
  • 238. Can specify a character set via the JDBC connection url.
  • 239. Does not support multi-byte characters.
  • 240. Column sizes must be defined to allow sufficient space for multi-byte characters.
  • 241. The length or position value in a string function is processed byte by byte.
  • 242. Functions for joining DBs are not supported.
  • 243. Cannot change the column size by using the ALTER TABLE statement.
  • 244. This will be fixed in a future version.
  • 245. If the prepare statement pooling is used, only one result set can be handled per connection.
  • 246. It is recommended to open multiple connections for use.
    Join Query
    [Inner] Join:
    SELECT select_list FROM TABLE1 T1 INNER JOIN TABLE2 T2 ON T1.COL1 = T2.COL2 WHERE T1.A = 'test' AND T2.B = 1;
    Left [Outer] Join:
    SELECT select_list FROM TABLE1 T1 LEFT OUTER JOIN TABLE2 T2 ON T1.COL1 = T2.COL2 AND T2.B = 1 WHERE T1.A = 'test';
  • 247. Pagination (limiting the result set)
    ROWNUM:
    SELECT select_list FROM TABLE1 T1 WHERE T1.A = 'test' AND ROWNUM <= 100 ORDER BY ORDER_COLUMN;
    ORDERBY_NUM():
    SELECT select_list FROM TABLE1 T1 WHERE T1.A = 'test' ORDER BY ORDER_COLUMN FOR ORDERBY_NUM() <= 100;
    LIMIT (from R3.0):
    SELECT select_list FROM TABLE1 T1 WHERE T1.A = 'test' ORDER BY ORDER_COLUMN LIMIT 1,100;
  • 248. AUTO_INCREMENT and SERIAL
    SERIAL:
    CREATE SERIAL SERIAL_NAME START WITH 1 MAXVALUE 1000 NOCYCLE;
    CREATE TABLE TABLE1 (seqnum INT, name VARCHAR);
    INSERT INTO TABLE1 VALUES (SERIAL_NAME.next_value, 'test');   // seqnum=1
    AUTO_INCREMENT:
    CREATE TABLE TABLE1 (seqnum INT AUTO_INCREMENT(1,1000) NOT NULL, name VARCHAR);
    INSERT INTO TABLE1 (name) VALUES ('test');   // seqnum=1
  • 249. INDEX
    CREATE INDEX ON TABLE1(zipcode, lastname, address);
    SELECT * FROM TABLE1 WHERE zipcode=1000 AND name LIKE '%test%' AND address LIKE '%seoul';
    CUBRID internal process: Step 1: searches for targets where zipcode=1000 at the index level.
  • 250. Step 2: extracts the targets that satisfy the name and address conditions by accessing them at the data level. (In contrast, MySQL accesses all the data where zipcode=1000 at the data level, and then extracts the rows that satisfy the other conditions.)
    INDEX usage tips: the smaller the size of an index key, the better the performance.
  • 251. Configure indexes on columns with a good distribution, on primary keys, and on columns that serve as the connection point for a join.
  • 252. When configuring indexes, use columns that are infrequently updated.
    Index Definition and Using USING INDEX
    CREATE [ UNIQUE ] INDEX [ index_name ] ON table_name ( column_name[(prefix_length)] [ASC | DESC] [ {, column_name[(prefix_length)] [ASC | DESC]} ...] ) [ ; ]
    The UNIQUE index creates an index that is used for uniqueness constraints.
  • 253. If no index name has been specified, it will be automatically created.
  • 254. You can define an index only on the front part of a character string (prefix index).
    SELECT/UPDATE/DELETE ... USING INDEX {NONE | index_name[(+)], …};
    Index names are distinguished by table and are used as table_name.index_name.
  • 255. Scans indexes only when the cost of index scan specified in the USING INDEX clause is lower than the sequential scan.
  • 256. USING INDEX The index scan is executed unconditionally in the case of index_name(+).
  • 257. For USING INDEX NONE, the sequential scan is executed unconditionally.
  • 258. If more than two index names are specified behind the USING INDEX clause, the appropriate index will be selected by the optimizer.
  • 259. If two or more tables are joined, index names must be specified for all tables.
    Index Definition and Using USING INDEX – Tuning
    If an index column (e.g. yymm) is processed by a function in the WHERE clause, there is no index scan.
    When defining an index, configure a covering index while checking the query plan.
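    A hedged sketch of the covering-index idea, assuming a hypothetical table tbl(col1, col2): when the index contains every column the query needs, the plan can be satisfied at the index level.
    create index idx_col1_col2 on tbl(col1, col2);
    select col1, col2 from tbl where col1 = 'test' order by col2;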
  • 260. When comparing the value of an index column to NULL, there will be no index scan.
    Modifying the query:
  • 261. Create an index to be able to cover search conditions
  • 262. Create an index to be able to cover the ORDER BY sorting condition
  • 263. The index scan is not available if you perform a LIKE search by binding a dynamic parameter.
    SELECT * FROM tbl WHERE col1 LIKE ? || '%'    // a sequential scan occurs
    SELECT * FROM tbl WHERE col1 LIKE 'AAA' || '%'    // insert a static value instead
    3.3 Query Plans and Hints
  • 264. Query Plans and Hints: a query plan is created based on the scan methods (sscan and iscan) and the join methods (nl-join, idx-join, and m-join).
    Configuring the display and checking of a query plan (CUBRID Manager): Display Query Plan
  • 265. An example of Display Query Plan (sscan):
    SELECT * FROM athlete WHERE name='Yoo Nam-Kyu';
    sscan: a sequential scan (annotated with card and page#)
  • 266. card: Number of records in an expected result set
  • 267. page#: Expected number of page accesses
  • 268. sel (selectivity): the expected selectivity of rows satisfying the search conditions (shown alongside card and page#).
  • 269. Example of a Display Query Plan (iscan):
    CREATE INDEX ON athlete(name);
    SELECT * FROM athlete WHERE name='Yoo Nam-Kyu';
    iscan: an index scan
    Example of a Display Query Plan (nl-join):
    SELECT * FROM olympic, nation WHERE olympic.host_nation=nation.name;
    outer table: contains a small number of records
  • 270. inner table: contains many records and has indexes
    Example of a Display Query Plan (idx-join):
    SELECT * FROM game, athlete WHERE game.athlete_code=athlete.code;
  • 271. Example of a Display Query Plan (m-join):
    SELECT /*+ USE_MERGE */ * FROM game, athlete WHERE game.athlete_code=athlete.code;
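    The other join methods can be requested in the same style as the USE_MERGE hint above; a hedged sketch, assuming the USE_NL (nested-loop join) and USE_IDX (index join) hint names:
    SELECT /*+ USE_NL */ * FROM game, athlete WHERE game.athlete_code=athlete.code;
    SELECT /*+ USE_IDX */ * FROM game, athlete WHERE game.athlete_code=athlete.code;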
  • 272. 4. JDBC and Other Management: JDBC Programming, Transaction Management
  • 274. The SQL Type and the Java Type
  • 275. JDBC Main Interfaces: supports the JDBC 2.0 standard specification.
    How to use JDBC – connecting to the DB by using JDBC
    1. Loading the driver:
    Class.forName("cubrid.jdbc.driver.CUBRIDDriver");
  • 276. You can connect to the DB once the driver is loaded.
    2. Making the connection:
    Connection con = DriverManager.getConnection(url, "user", "passwd");
  • 277. URL style example: jdbc:CUBRID:localhost:33000:demodb:::
    3. Creating a statement object:
    Statement stmt = con.createStatement();
    4. Executing SQL statements:
    stmt.executeUpdate("…");
  • 278. ResultSet rs = stmt.executeQuery("…");
    Flow: make a connection, build the SQL statement, send the SQL statement, close the SQL statement, close the connection.
    Example of JDBC usage:
    import java.sql.*;

    class SimpleExample {
      public static void main(String args[]) {
        String url = "jdbc:CUBRID:localhost:33000:demodb:::";
        try {
          Class.forName("cubrid.jdbc.driver.CUBRIDDriver");
        } catch (ClassNotFoundException e) {
          System.out.println(e.getMessage());
        }
        try {
          Connection myConnection = DriverManager.getConnection(url, "user", "passwd");
          Statement myStatement = myConnection.createStatement();
          ResultSet rs = myStatement.executeQuery("select sysdate from db_root");
          myStatement.close();
          myConnection.close();
        } catch (java.lang.Exception ex) {
          ex.printStackTrace();
        }
      }
    }
  • 280. Result set processing flow: send the SQL statement, fetch a row, get its columns, and repeat while there are more rows.
    Connection myConnection = DriverManager.getConnection(url, "user", "passwd");
    Statement myStatement = myConnection.createStatement();
    ResultSet rs = myStatement.executeQuery("SELECT name, title, salary FROM employee");
    int i = 0;
    while (rs.next()) {
      i++;
      String empName = rs.getString("name");
      String empTitle = rs.getString("title");
      long empSalary = rs.getLong("salary");
      System.out.println("Employee " + empName + " is " + empTitle + " and earns $" + empSalary);
    }
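    Because the course relies on prepared-statement pooling (bind variables show up as ? in the broker SQL log) and stresses returning ResultSet/Statement/Connection objects, here is a hedged sketch that combines both using only standard JDBC; the employee table and its columns are reused from the example above:

    import java.sql.*;

    class PreparedExample {
      public static void main(String[] args) throws Exception {
        Class.forName("cubrid.jdbc.driver.CUBRIDDriver");
        Connection con = DriverManager.getConnection("jdbc:CUBRID:localhost:33000:demodb:::", "user", "passwd");
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        try {
          // The bind variable below is logged as '?' when the statement is pooled.
          pstmt = con.prepareStatement("SELECT name, title FROM employee WHERE salary > ?");
          pstmt.setLong(1, 1000L);
          rs = pstmt.executeQuery();
          while (rs.next()) {
            System.out.println(rs.getString("name") + " / " + rs.getString("title"));
          }
        } finally {
          // Release the DB objects in reverse order of creation.
          if (rs != null) rs.close();
          if (pstmt != null) pstmt.close();
          con.close();
        }
      }
    }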
  • 282. Make sure to release DB objects such as ResultSet, Statement, and Connection after use.
  • 283. The resources are returned when the close() method is called on the corresponding object.
  • 284. If AutoCommit is false, the resources are returned only after the transaction on the connection is explicitly finished (commit/rollback).
  • 285. If you execute inner (nested) query statements, you must allocate a separate connection object to each of them.
  • 286. This matters when other transactions are executed inside a loop that is still using retrieved data:
  • 287. When a transaction (commit/rollback) completes on a connection object that is in use, the ResultSet being used is closed.
    4.3 Transaction Management
  • 288. Introduction to the CUBRID locking protocol
    Locking:
  • 289. Locks are managed per transaction, for tables and records.
  • 290. For a record, S-lock is acquired for reading, and X-lock is acquired for writing.
  • 291. To get S-lock for a record, you must get IS-lock for the corresponding table.
  • 292. To get X-lock for a record, you must get IX-lock for the corresponding table.
  • 294. Configuring SIX-lock for a table
  • 295. When a transaction that has S-lock for a table requests X-lock
  • 297. X-lock: held until the transaction is finished (i.e., commit or rollback time).
  • 298. S-lock: REPEATABLE READ (held until the transaction is finished), READ COMMITTED (held until the read is finished), READ UNCOMMITTED (no lock is requested).
    Features of the CUBRID locking protocol – configuring S-lock for a table:
  • 299. When reading the schema of a corresponding table
  • 300. When reading the higher-tier or lower-tier table of a corresponding table
  • 301. When the number of records a transaction reads is greater than the lock_escalationvalue
  • 302. Configuring X-lock for a table
  • 303. When modifying a corresponding table
  • 304. When the number of records a transaction writes is greater than the lock_escalation value
    Checking locking information: you can check the current locking status of the DB.
  • 305. Lock objects are managed per lock unit (table, record).
  • 308. Lock related configuration of a DB server
  • 309. Information of DB clients connected to a DB server
  • 310. Lock table information of an object
    Checking locking information – the lockdb utility
    Command: lockdb
  • 311. Shows a current snapshot of the locking status of the DB.
    cubrid lockdb [OPTION] database-name
    Options: -o saves the output to a file
    % cubrid lockdb demodb
    Lock-related configuration of the DB server:
    Lock Escalation at = 100000, Run Deadlock interval = 1
    (the escalation value is the number of row locks that can be held before they are converted to a table lock)
  • 312. Checking locking information – the lockdb utility
    Lock information of an object:
    OID = 0| 1780| 7
    Object type: Instance of class ( 0| 288| 6) = table_a.
    Total mode of holders = X_LOCK, Total mode of waiters = X_LOCK.
    Num holders = 1, Num blocked-holders = 0, Num waiters = 1
    LOCK HOLDERS:
    Tran_index = 2, Granted_mode = X_LOCK, Count = 2
    LOCK WAITERS:
    Tran_index = 1, Blocked_mode = X_LOCK
    Start_waiting_at = Wed Sep 23 12:06:06 2009
    Wait_for_nsecs = -1
    Interpretation: the OID identifies the lock target object; transaction 2 holds X_LOCK on this object, and transaction 1 is waiting to acquire X_LOCK on it.
  • 313. Checking locking information – the lockdb utility
    Transaction information:
    Transaction (index 1, cub_cas, dba@mycom|2908)
    Isolation REPEATABLE CLASSES AND READ UNCOMMITTED INSTANCES
    State TRAN_ACTIVE
    Timeout_period -1
    Transaction (index 2, cub_cas, dba@mycom|2980)
    Isolation REPEATABLE CLASSES AND READ UNCOMMITTED INSTANCES
    State TRAN_ACTIVE
    Timeout_period -1
    Interpretation: transaction 1 is a cub_cas process logged in as dba with process ID 2908; its isolation level guarantees repeatable reads on tables while allowing dirty reads of records. Transaction 2 is a cub_cas process logged in as dba with process ID 2980. Timeout_period is the time to wait when acquiring a lock; -1 means no timeout.
  • 314. Checking locking information – CUBRID Manager
  • 315. Only visible to the dba user.
    Checking locking information – CUBRID Manager: transaction info; checking which application holds a transaction.
  • 316. For CAS, check its information in the CUBRID broker.
  • 317. Check the CAS ID order within a broker by using the process ID.
  • 318. As the process IDs in the above example are 2908 and 2980, they correspond to ID 1 and ID 2 of the query_editor broker.
  • 319. As 2980 is holding the X_LOCK, the corresponding transaction (ID 2) must be forcibly stopped, if necessary.
  • 320. For an application, a change to the application logic may be necessary.
  • 321. For the Query Editor or CSQL, finish the transaction (commit/rollback).
    Transaction Management – stopping a broker transaction:
  • 322. Forcibly stop the corresponding transaction (rollback) by using the killtran command.
    % cubrid killtran [OPTION] database-name
    valid options:
      -i, --kill-transaction-index=INDEX   kill the transaction with transaction INDEX
          --kill-user-name=ID              kill all transactions with user ID
          --kill-host-name=HOST            kill all transactions with client HOST
          --kill-program-name=NAME         kill all transactions with client program NAME
      -p, --dba-password=PASS              password of the DBA user; will prompt if not specified
      -d, --display-information            display information about active transactions
      -f, --force                          kill the transaction without a prompt for verification
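    For example, to roll back the blocking transaction identified with lockdb above (transaction index 2), a hedged example invocation would be:
    % cubrid killtran -d demodb
    % cubrid killtran -i 2 demodb
    (-d first lists the active transactions; -i then kills the one with the given index.)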
  • 324. CUBRID Installation – installing CUBRID (for Windows): downloading and installing CUBRID; creating demodb; checking that the CUBRID service tray has started; checking that the CUBRID service has started (service, process).
  • 325. CUBRID Installation – CUBRID Manager client: checking that the DB is created; starting the DB server; checking whether there is a Java-related error message during start; using the Query Editor; executing a simple query: select * from db_class
  • 326. CUBRID Installation – stopping the DB server; stopping the CUBRID service; checking the processes; starting the CUBRID service.
  • 327. DB creation – creating a DB that satisfies the following conditions (creation location and size of each volume):
    Page size: 4 KB
    First volume: 5,000 pages, C:\CUBRID\databases\<DB name>
    Log volume: 100,000 pages, C:\CUBRID\databases\<DB name>\log
    Data volume: 500,000 pages, C:\CUBRID\databases\<DB name>
    Index volume: 250,000 pages, C:\CUBRID\databases\<DB name>
    Temp volume: 250,000 pages, C:\CUBRID\databases\<DB name>
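    A quick size check at the 4 KB page size (approximate figures): the first volume is 5,000 × 4 KB ≈ 20 MB, the log volume 100,000 × 4 KB ≈ 400 MB, the data volume 500,000 × 4 KB ≈ 2 GB, and the index and temp volumes 250,000 × 4 KB ≈ 1 GB each, so the database needs roughly 4.4 GB of disk in total.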
  • 328. DB creation – checking the created volumes: checking the content of databases.txt; checking the files in each directory by referring to the volume information file (control volumes, information volumes, log volumes, computer name).
  • 329. Schema management – creating tables that satisfy the following conditions:
    Company table (company): company ID (integer, primary key); company name (string)
    Customer table (client): customer ID (integer, not duplicated); customer name, title, email, telephone no., address: character strings
    create table company (
      comp_id int primary key,    // company ID
      comp_name varchar(200)      // company name
    );
    create table client (
      client_id int primary key,  // customer ID
      comp_id int,                // company ID
      client_name varchar(20),    // customer name
      title varchar(10),          // title
      email varchar(100),         // email
      phone varchar(20),          // phone no.
      address varchar(200)        // address
    );
• 330. Schema management
Viewing table information in a CUBRID Manager client
• 331. Schema management
Modifying a table according to the following conditions
Re-creating the primary key after deleting it
Changing a column type (title: char -> varchar, or varchar -> char)
Adding/changing a default value (title: set the default to 'new staff', then remove it)
alter class client drop constraint pk_client_client_id
alter class client add primary key (client_id)
// or, assigning the PK name explicitly:
alter class client add constraint pk_id primary key (client_id)
alter class client rename attribute title as old_title
alter class client add attribute title char(20)
update client set title = cast(old_title as char(20))
alter class client drop attribute old_title
alter class client change title default 'new staff'
alter class client change title default NULL
• 332. Schema management
Indexes on client
A customer name is unique: add a unique index named u_name.
Title is in reverse (descending) order: add an index named idx1 that sorts customer names in forward (ascending) order.
Searching table information by using the catalog
Checking the information of a created table: table name, column information, index information
create unique index u_name on client(client_name)
create index idx1 on client(title desc, client_name)
select * from db_class
select * from db_attribute where class_name = 'client'
select * from db_index where class_name = 'client'
• 333. Data search and manipulation
Inserting data
Insert (10, 'company10') and (20, 'company20') into the company table.
Insert an arbitrary id and name, together with the company ID whose comp_id is 20, into the client table using the insert-select form.
Check the inserted data by selecting rows from the client table.
insert into company values (10, 'company10');
// multi-row and column-list forms are also possible:
insert into company values (10, 'company10'), (20, 'company20');
insert into company (comp_id, comp_name) values (20, 'company20');
insert into client (comp_id, client_id, client_name)
  select comp_id, 20, 'new staff20' from company where comp_id = 20
• 334. Data search and manipulation
Modifying data
Insert an arbitrary id and name into the client table.
Check the inserted data by querying the client table.
Change comp_id to 10 for the row inserted into the client table.
Check the modified data by querying the client table again.
insert into client (client_id, client_name) values (30, 'new staff30')
update client set comp_id = 10 where client_id = 30
• 335. Data search and manipulation
Data search
Retrieve the countries that won medals in the 1988 Olympics from the participant and medal information
Table listing the participants: participant
Medal information table: game
Retrieve the medal information of all participants in the 1988 Olympics (including those without medals)
select (select name from nation where code = a.nation_code), medal
from participant a, game b
where a.host_year = 1988 and a.nation_code = b.nation_code and a.host_year = b.host_year
select (select name from nation where code = a.nation_code), medal
from participant a left outer join game b
  on a.nation_code = b.nation_code and a.host_year = b.host_year
where a.host_year = 1988
• 336. Data search and manipulation
Using indexes
Sorting the cities that have hosted the Olympics in chronological order
Table in which the Olympic host cities are listed: olympic
Sorting the host cities so that the most recent ones appear first
select host_year, host_nation, host_city from olympic
  where host_year > '' using index pk_olympic_host_year(+)
create index r_year on olympic(host_year desc)
select host_year, host_nation, host_city from olympic
  where host_year > '' using index r_year(+) order by host_year desc
• 337. Operators and functions
Arithmetic / concatenation / type conversion operators
Checking how many months and days are left until Christmas
Displaying how many hours, minutes, and seconds are left until a training session ends
Finding out what year it is, in at least two different ways
Checking the date of the last day of this month
select months_between(to_date('12/25/2008'), sysdate), '12/25/2008' - sysdate from db_root
select to_char(t1/3600) + 'hour' + to_char(abs(mod(t1,3600)/60)) + 'minute' + to_char(abs(mod(t1,60))) + 'second'
from (select '17:00' - systime from db_root) as t(t1)
select to_char(sysdate, 'yyyy') from db_root
select extract(year from sysdate) from db_root
select extract(day from last_day(sysdate)) from db_root
• 338. Operators and functions
Functions
Finding an arbitrary number between 1 and 100
Rounding 3.141592653 to six decimal places
Finding the number of bus stops where you can catch the No. 10 bus
For the string 'substring xyzxxy': its length, the position of 'str', 6 characters starting at the 4th character, the string with trailing 'xy' characters removed, and 's' replaced with 'S'
select mod(rand(), 100) + 1 from db_root
select round(3.141592653, 6), trunc(3.141592653, 6) from db_root
select count(station_id) from bus where bus_num = '10'
select length('substring xyzxxy'), instr('substring xyzxxy', 'str'),
       substr('substring xyzxxy', 4, 6), rtrim('substring xyzxxy', 'xy'),
       replace('substring xyzxxy', 's', 'S')
from db_root
• 339. Operators and functions
For the Olympic medals, 'G' stands for a gold medal, 'S' for a silver medal, and 'B' for a bronze medal; display the medal names.
Olympic medal table: game
Using '1900s' for the Olympics held in the 1900s, '2000s' for the 2000s, and 'other years' for the rest, count the number of Olympics held in each period.
Table showing the Olympic years: olympic
select decode(medal, 'G', 'gold medal', 'S', 'silver medal', 'B', 'bronze medal') from game
select case when host_year between 1900 and 1999 then '1900s'
            when host_year between 2000 and 2999 then '2000s'
            else 'other years' end as years,
       count(*)
from olympic
group by case when host_year between 1900 and 1999 then '1900s'
              when host_year between 2000 and 2999 then '2000s'
              else 'other years' end
• 340. Operators and functions
rownum
Selecting the hosting information of the 11th to 20th Olympics
Olympic hosting information table: olympic
Selecting the 11th to 20th rows after sorting the hosting information by year in chronological order
• 341. Modifying the above query using an index hint
• 342. Grouping by the host_nation column
select * from olympic where rownum between 11 and 20
select * from olympic order by host_year for orderby_num() between 11 and 20;
select * from olympic order by host_year limit 10, 10;   // offset 10, return 10 rows (rows 11-20)
select * from olympic where host_year > 0 and rownum between 11 and 20 using index pk_olympic_host_year(+)
select host_nation from olympic where rownum between 11 and 20 group by host_nation
• 343. Operators and functions
Serial
Create an arbitrary serial object, get the next value, and check the current value.
create serial seq_no
select seq_no.next_value from db_root
select seq_no.current_value from db_root
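A serial is typically used to generate key values on insert; a minimal sketch, assuming the company table from the earlier schema exercise:
insert into company values (seq_no.next_value, 'company added via serial')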
• 344. Operators and functions
Auto increment
Create a table having an auto increment column
• 345. Insert data into the auto increment column (specifying a value explicitly)
• 346. Insert a row without specifying a value for the auto increment column
  • 347. Select rows and check the auto increment column values
• 348. Delete rows and re-insert data
create table bbs (
  id int auto_increment,
  title string,
  cnt int default 0
)
insert into bbs(id, title) values(5, 'arbitrary inserting for auto increment')
insert into bbs(title) values('auto inserting for auto increment')
select * from bbs
delete from bbs
insert into bbs(title) values('auto inserting for auto increment')
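The last two statements show that deleting rows does not reset the counter; a sketch of the check (the exact value depends on the earlier inserts):
select id, title from bbs
// the re-inserted row typically continues from the previous maximum id rather than restarting at 1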