Use EXPLAIN to profile the query execution plan
 Use Slow Query Log (always have it on!)
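For example (a minimal sketch; the orders table is hypothetical, and the slow query
log can be toggled at runtime from MySQL 5.1 on):

    -- Profile how MySQL will execute a query:
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- Turn on the slow query log and log anything over 1 second:
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;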
 Don’t use DISTINCT when you have or could use GROUP BY
 Use proper data partitions

   1. For Cluster: start thinking about Cluster *before* you need it

 Insert performance

   1. Batch INSERT and REPLACE
   2. Use LOAD DATA instead of INSERT (see the sketch after this list)
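A sketch of both tips (the table and file path are hypothetical):

    -- One multi-row INSERT is far cheaper than many single-row INSERTs:
    INSERT INTO access_log (user_id, action) VALUES (1,'view'), (2,'edit'), (3,'view');

    -- LOAD DATA INFILE is faster still for bulk loads:
    LOAD DATA INFILE '/tmp/access_log.csv' INTO TABLE access_log
      FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';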

 LIMIT m,n may not be as fast as it sounds -- MySQL still reads and discards the first m rows
 Don’t use ORDER BY RAND() if you have > ~2K records
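A common workaround, sketched here assuming an AUTO_INCREMENT id without large gaps:

    -- ORDER BY RAND() sorts the whole table just to keep one row;
    -- picking a random id and seeking to it touches only a few rows:
    SET @rand_id = FLOOR(1 + RAND() * (SELECT MAX(id) FROM items));
    SELECT * FROM items WHERE id >= @rand_id ORDER BY id LIMIT 1;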
 Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of
data
 avoid wildcards at the start of LIKE queries
 avoid correlated subqueries in SELECT and WHERE clauses (and try to avoid IN with a subquery)
 no calculated comparisons -- isolate indexed columns so the index can be used (see the
sketch below)
 config param: innodb_flush_log_at_trx_commit=0 can help slave lag, at the cost of up to
a second of transactions on a crash
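The calculated-comparison rewrite, sketched on a hypothetical orders table with an
index on order_date:

    -- Bad: the expression wraps the indexed column, so the index is useless:
    SELECT * FROM orders WHERE TO_DAYS(NOW()) - TO_DAYS(order_date) <= 7;
    -- Good: the indexed column stands alone and the index can be used:
    SELECT * FROM orders WHERE order_date >= NOW() - INTERVAL 7 DAY;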
 ORDER BY and LIMIT work best with equalities and covered indexes
 isolate workloads -- don’t let administrative work (e.g. backups) interfere with customer
performance
 use optimistic locking, not pessimistic locking; try to use a shared lock, not an exclusive
lock (LOCK IN SHARE MODE vs. SELECT ... FOR UPDATE, sketched below)
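The two lock flavors, sketched on a hypothetical accounts table:

    -- Shared lock: other transactions may still read (and share-lock) the row:
    SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
    -- Exclusive lock: blocks other locking reads and writes until commit;
    -- take it only when you will actually modify the row:
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;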
 use row-level instead of table-level locking for OLTP workloads
 Know your storage engines and which performs best for your needs; know that different
ones exist.

   1. use MERGE tables and ARCHIVE tables for logs

 Optimize for data types and use consistent data types. Use PROCEDURE ANALYSE() to help
determine whether a smaller type would do
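For example (PROCEDURE ANALYSE existed through MySQL 5.x, the era of these notes; it
was removed in 8.0):

    -- Reports the min/max/average of each column and suggests the smallest type
    -- that fits; the arguments cap ENUM suggestions at 16 values / 256 bytes:
    SELECT * FROM customers PROCEDURE ANALYSE(16, 256);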
 separate text/blobs from metadata, don’t put text/blobs in results if you don’t need them
 if you can, compress text/blobs
 compress static data
 don’t back up static data as often
 derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without
sorting them (a self-join can speed up a query: the first part finds the IDs, then a join
uses them to fetch the rest -- see the sketch below)
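A sketch of the deferred-fetch pattern (names hypothetical):

    -- The derived table sorts and limits using only the narrow id/date index;
    -- the join then fetches the BLOB column for just those ten rows:
    SELECT a.id, a.body
      FROM articles a
      JOIN (SELECT id FROM articles ORDER BY published_at DESC LIMIT 10) t
        ON a.id = t.id;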
 enable and increase the query and buffer caches if appropriate
 ALTER TABLE…ORDER BY can take chronological data and re-order it by a different field
 InnoDB ALWAYS stores the primary key as part of each secondary index, so do not make the
primary key very large and be careful of redundant columns in an index; a compact primary
key keeps every secondary index smaller and queries faster
 Do not duplicate indexes
 Utilize different storage engines on master/slave, e.g. if you need fulltext indexing on a
table.
 BLACKHOLE engine and replication is much faster than FEDERATED tables for things like
logs.
 Design sane query schemas -- don’t be afraid of table joins; they are often faster than
denormalization
 Don’t use boolean flags (a two-value column makes an index with very poor selectivity)
 Use a clever key and ORDER BY instead of MAX
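One reading of this tip, assuming an index on price:

    -- Instead of SELECT MAX(price) FROM products, read one row off the
    -- end of the index:
    SELECT price FROM products ORDER BY price DESC LIMIT 1;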
 Keep the database host as clean as possible. Do you really need a windowing system on
that server?
 Utilize the strengths of the OS
 Hire a MySQL™ Certified DBA
 Know that there are many consulting companies out there that can help, as well as
MySQL’s Professional Services.
 Config variables & tips (a sample my.cnf fragment follows this list):

   1. use one of the supplied config files
   2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb
       memory variables
   3. be aware of global vs. per-connection variables
   4. check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
   5. be aware of swapping, esp. with Linux (“swappiness”); bypass the OS file cache
       for InnoDB data files with innodb_flush_method=O_DIRECT if possible (this is
       also OS specific)
   6. defragment tables, rebuild indexes, do table maintenance
   7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware
       write-cache controller
   8. more RAM is good, and so is faster disk speed
   9. use 64-bit architectures
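A fragment illustrating a few of these knobs (values are placeholders, not
recommendations -- tune against your own workload and RAM):

    [mysqld]
    key_buffer_size                = 256M    # MyISAM index cache (global)
    innodb_buffer_pool_size        = 1G      # main InnoDB cache (global)
    innodb_flush_method            = O_DIRECT
    innodb_flush_log_at_trx_commit = 1       # pair with a battery-backed write cache
    sort_buffer_size               = 2M      # per-connection -- keep modest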

 Know when to split a complex query and join smaller ones
 Debugging sucks, testing rocks!
 Delete small amounts at a time if you can
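For example, purging in batches keeps locks short and replication lag low (a sketch;
repeat until no rows are affected):

    -- Delete at most 1000 expired rows per pass:
    DELETE FROM sessions
     WHERE last_seen < NOW() - INTERVAL 30 DAY
     LIMIT 1000;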
 Archive old data -- don’t be a pack-rat! 2 common engines for this are ARCHIVE tables and
MERGE tables
 use INET_ATON and INET_NTOA for IP addresses, not char or varchar
 make it a habit to REVERSE() email addresses, so you can easily search domains
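Both encodings sketched (tables and columns hypothetical):

    -- Store IPv4 addresses as INT UNSIGNED (4 bytes), not VARCHAR(15):
    INSERT INTO visits (ip) VALUES (INET_ATON('10.0.5.9'));
    SELECT INET_NTOA(ip) FROM visits;

    -- A reversed email column turns a domain search into an index-friendly
    -- prefix match instead of a leading wildcard:
    SELECT email FROM users
     WHERE email_rev LIKE CONCAT(REVERSE('@example.com'), '%');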
 --skip-name-resolve
 increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection
variable)
 look up the memory tuning parameter for on-insert caching (bulk_insert_buffer_size)
 increase tmp_table_size in a data warehousing environment (default is 32MB) so temporary
tables don’t spill to disk (also constrained by max_heap_table_size, default 16MB)
 Normalize first, and denormalize where appropriate.
 Databases are not spreadsheets, even though Access really really looks like one. Then
again, Access isn’t a real database
 In 5.1 a BOOL/BIT NOT NULL column takes 1 bit; in previous versions it takes 1 byte
 A nullable column can take more room to store than a NOT NULL one
 Choose appropriate character sets & collations -- UTF16 stores each character in 2 bytes
whether it needs them or not; latin1 is faster than UTF8
 make similar queries consistent so cache is used
 Don’t use deprecated features
 Use Triggers wisely
 Run with a strict SQL_MODE (e.g. STRICT_ALL_TABLES) to turn silent warnings into errors
 Turning an OR on multiple indexed fields (<5.0) into a UNION may speed things up (with
LIMIT); in 5.0 and later the index_merge optimization should handle it
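The pre-5.0 rewrite, sketched with separate indexes on email and username:

    -- An OR across two different indexed columns could not use both indexes:
    SELECT id FROM users WHERE email = 'a@example.com' OR username = 'abc';
    -- Rewritten so each branch uses its own index:
    (SELECT id FROM users WHERE email = 'a@example.com')
    UNION
    (SELECT id FROM users WHERE username = 'abc');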
 put tmpdir on a battery-backed write cache
 consider battery-backed RAM for the InnoDB log files
 use the MIN_ROWS and MAX_ROWS table options to specify approximate data size so space can
be pre-allocated and reference points can be calculated
 as your data grows, indexing may change (cardinality and selectivity change) and the
structure may need to change with it. Make your schema as modular as your code, make your
code able to scale, and plan for and embrace change -- get developers to do the same.
 pare down cron scripts
 create a test environment
 try out a few schemas and storage engines in your test environment before picking one.
 Use HASH indexing for indexing across columns with similar data prefixes
 Use myisam_pack_keys for int data
 Don’t run COUNT(*) on InnoDB tables for every search; do it occasionally and/or use summary
tables, or if you need the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT
FOUND_ROWS()
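For example (SQL_CALC_FOUND_ROWS was current in the 5.x era of these notes; it was
deprecated in MySQL 8.0.17):

    -- Fetch one page and the total match count in one pass:
    SELECT SQL_CALC_FOUND_ROWS * FROM posts WHERE topic_id = 7 LIMIT 20;
    SELECT FOUND_ROWS();  -- total matching rows, ignoring the LIMIT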
 use --safe-updates for client
 Redundant data is redundant
 Use INSERT … ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first
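Both upsert forms sketched (a hypothetical counters table with a unique key on name):

    -- Insert, or update in place if the key already exists:
    INSERT INTO counters (name, hits) VALUES ('home', 1)
      ON DUPLICATE KEY UPDATE hits = hits + 1;
    -- Or silently skip rows whose key already exists:
    INSERT IGNORE INTO counters (name, hits) VALUES ('home', 1);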
 use a groupwise maximum instead of subqueries
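The classic groupwise-maximum join, sketched as “latest order per customer” (names
hypothetical):

    -- A row with no newer order for the same customer is that customer's latest:
    SELECT o.*
      FROM orders o
      LEFT JOIN orders newer
        ON newer.customer_id = o.customer_id
       AND newer.placed_at > o.placed_at
     WHERE newer.id IS NULL;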
 be able to change your schema without ruining functionality of your code
 source control schema and config files
 for LVM InnoDB backups, restore to a different instance of MySQL so InnoDB can roll
forward
 use multi_query if appropriate to reduce round-trips
 partition appropriately
 partition your database when you have real data
 segregate tables/databases that benefit from different configuration variables
