Planet MySQL
Planet MySQL - https://planet.mysql.com

  • Sysbench 1.0.20 for MySQL 8.0
    As demonstrated again very recently by my colleague and friend Dimitri K. with the TPCC “mystery”, sysbench, written by a former colleague and friend Alexey K., is really the recommended framework for benchmarking your system. I use sysbench, of course, to benchmark MySQL, but also my OS and hardware capabilities. If you want to benchmark MySQL 8.0 with sysbench using all supported authentication methods, including the much more secure default one, caching_sha2_password, you need to compile sysbench against the MySQL 8.0 client libraries. Finally, for those using an RPM-based OS like me, you can download these RPM packages with the latest sysbench, 1.0.20, compiled against the MySQL 8.0 libraries for Fedora 32, OL7/RHEL7/CentOS7, and OL8/RHEL8/CentOS8:

    sysbench-1.0.20-1.fc32.x86_64 (Download)
    sysbench-1.0.20-1.el7.x86_64 (Download)
    sysbench-1.0.20-1.el8.x86_64 (Download)

    Enjoy “benchmarking” MySQL 8.0!
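    For reference, here is a minimal sketch of what such a build and a quick smoke test might look like. It assumes the MySQL 8.0 client development packages are installed in the typical distro locations, and the credentials, schema name, and workload sizes are purely illustrative, not taken from the original post:

    # Build sysbench 1.0.20 against the MySQL 8.0 client libraries
    git clone https://github.com/akopytov/sysbench.git
    cd sysbench
    git checkout 1.0.20
    ./autogen.sh
    ./configure --with-mysql-includes=/usr/include/mysql \
                --with-mysql-libs=/usr/lib64/mysql
    make -j"$(nproc)" && sudo make install

    # Quick OLTP read/write run against an account using caching_sha2_password
    sysbench oltp_read_write \
      --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=secret \
      --mysql-db=sbtest --tables=8 --table-size=100000 prepare
    sysbench oltp_read_write \
      --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=secret \
      --mysql-db=sbtest --tables=8 --table-size=100000 --threads=16 --time=60 run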

  • ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster
    The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL is pretty much the same as in a regular, non-k8s deployment. So why bother writing a blog about it? Because what happens around ProxySQL in the context of the operator is actually interesting. ProxySQL is deployed in its own pod (which can be scaled just like the PXC pods). Each ProxySQL pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which node holds the pod by running:

    kubectl describe pod cluster1-proxysql-0 | grep Node:
    Node:  ip-192-168-37-111.ec2.internal/192.168.37.111

    Log in to that node and ask for the running containers. You will see something like this:

    [root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql
    d63c55d063c5  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_proxysql-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
    d75002a3847e  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_pxc-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
    e34d551594a8  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_proxysql_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0

    Now, what is the purpose of the sidecar container in this case? To find out whether new PXC nodes (pods) have been added or, on the contrary, PXC pods have been removed (due to a scale down), and to configure ProxySQL accordingly.

    Adding and Removing PXC Nodes (Pods)

    Let's see it in action. A regular PXC Kubernetes deployment with 3 PXC pods looks like this:

    kubectl get pod
    NAME                  READY   STATUS    RESTARTS   AGE
    cluster1-proxysql-0   3/3     Running   0          106m
    cluster1-proxysql-1   3/3     Running   0          106m
    cluster1-proxysql-2   3/3     Running   0          106m
    cluster1-pxc-0        1/1     Running   0          131m
    cluster1-pxc-1        1/1     Running   0          128m
    cluster1-pxc-2        1/1     Running   0          129m

    and will have the mysql_servers information as follows:

    mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    5 rows in set (0.00 sec)

    What do we have?

    - 3 PXC pods
    - 3 ProxySQL pods
    - The 3 PXC pods (or nodes) registered inside ProxySQL
    - And several host groups.

    What are those host groups?

    mysql> select * from runtime_mysql_galera_hostgroups\G
    *************************** 1. row ***************************
           writer_hostgroup: 11
    backup_writer_hostgroup: 12
           reader_hostgroup: 10
          offline_hostgroup: 13
                     active: 1
                max_writers: 1
      writer_is_also_reader: 2
    max_transactions_behind: 100
                    comment: NULL
    1 row in set (0.01 sec)

    ProxySQL is using the native Galera support and has defined a writer hostgroup, a backup writer hostgroup, and a reader hostgroup. Looking back at the server configuration, we have 1 writer, 2 readers, and those same 2 readers are also backup writers. And what are the query rules?
    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+--------------+---------------------+--------+-----------------------+
    | rule_id | username     | match_digest        | active | destination_hostgroup |
    +---------+--------------+---------------------+--------+-----------------------+
    | 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 2       | clustercheck | ^SELECT             | 1      | 10                    |
    | 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 4       | monitor      | ^SELECT             | 1      | 10                    |
    | 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 6       | root         | ^SELECT             | 1      | 10                    |
    | 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
    +---------+--------------+---------------------+--------+-----------------------+
    8 rows in set (0.00 sec)

    Now, let's scale up the deployment and add 2 more PXC pods:

    kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

    And let's check the pods:

    kubectl get pods
    NAME                  READY   STATUS    RESTARTS   AGE
    cluster1-proxysql-0   3/3     Running   0          124m
    cluster1-proxysql-1   3/3     Running   0          124m
    cluster1-proxysql-2   3/3     Running   0          124m
    cluster1-pxc-0        1/1     Running   0          149m
    cluster1-pxc-1        1/1     Running   0          146m
    cluster1-pxc-2        1/1     Running   0          147m
    cluster1-pxc-3        1/1     Running   0          2m53s
    cluster1-pxc-4        1/1     Running   0          2m10s

    And now the transition inside ProxySQL:

    mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    5 rows in set (0.00 sec)

    mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------------+--------+
    | hostgroup_id | hostname                                          | status       | weight |
    +--------------+---------------------------------------------------+--------------+--------+
    | 11           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 13           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | OFFLINE_HARD | 1000   |
    +--------------+---------------------------------------------------+--------------+--------+
    9 rows in set (0.00 sec)

    mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    9 rows in set (0.00 sec)

    What happened? The new PXC nodes were added to ProxySQL and are ready to handle traffic. We can also see that the previous writer node, cluster1-pxc-2, is now assigned to the reader hostgroup, and the new writer is cluster1-pxc-4. And what about the query rules?

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+--------------+---------------------+--------+-----------------------+
    | rule_id | username     | match_digest        | active | destination_hostgroup |
    +---------+--------------+---------------------+--------+-----------------------+
    | 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 2       | clustercheck | ^SELECT             | 1      | 10                    |
    | 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 4       | monitor      | ^SELECT             | 1      | 10                    |
    | 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 6       | root         | ^SELECT             | 1      | 10                    |
    | 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
    +---------+--------------+---------------------+--------+-----------------------+
    8 rows in set (0.00 sec)

    Same as before: query rules are not modified when adding/removing PXC pods. But what happens when rules are modified?

    Adding and Removing ProxySQL Query Rules

    In our Operator, we use ProxySQL Native Clustering. What does that mean? It means that if the user makes configuration changes (via the admin port) on one instance, they are automatically distributed to all members. For example, adding a rule. We use pod 0 to add the rule:

    kubectl exec -it cluster1-proxysql-0 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
    Defaulting container name to proxysql.
    Use 'kubectl describe pod/cluster1-proxysql-0 -n pxc' to see all of the containers in this pod.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2270
    Server version: 8.0.18 (ProxySQL Admin Module)

    Copyright (c) 2009-2020 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    Empty set (0.00 sec)

    mysql> INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, cache_ttl, username) VALUES (1, "^SELECT.*table-dani.*", 10, NULL, "root");
    Query OK, 1 row affected (0.00 sec)

    mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+----------+-----------------------+--------+-----------------------+
    | rule_id | username | match_digest          | active | destination_hostgroup |
    +---------+----------+-----------------------+--------+-----------------------+
    | 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
    +---------+----------+-----------------------+--------+-----------------------+
    1 row in set (0.00 sec)

    And the rule is immediately replicated to the other pods, for example, pod 1:

    kubectl exec -it cluster1-proxysql-1 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
    Defaulting container name to proxysql.
    Use 'kubectl describe pod/cluster1-proxysql-1 -n pxc' to see all of the containers in this pod.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 1977
    Server version: 8.0.18 (ProxySQL Admin Module)

    Copyright (c) 2009-2020 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+----------+-----------------------+--------+-----------------------+
    | rule_id | username | match_digest          | active | destination_hostgroup |
    +---------+----------+-----------------------+--------+-----------------------+
    | 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
    +---------+----------+-----------------------+--------+-----------------------+
    1 row in set (0.00 sec)

    In conclusion:

    - The Percona Kubernetes Operator for Percona XtraDB Cluster is aware of changes in the pods.
    - Modifications are made automatically in order to dispatch traffic to the new pods.
    - Query rules are not modified.
    - ProxySQL Cluster is enabled in order to keep all the pods in sync.
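    As a quick way to confirm the native-clustering behavior described above, a loop like the following can check that a rule added on one pod is visible on every ProxySQL pod. This is only a sketch: it reuses the pod names, namespace, and proxyadmin credentials from the examples above, which you should adjust for your own deployment.

    # Check the runtime query rules on every ProxySQL pod
    for pod in cluster1-proxysql-0 cluster1-proxysql-1 cluster1-proxysql-2; do
      echo "== ${pod} =="
      kubectl exec -n pxc "${pod}" -c proxysql -- \
        mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -e \
        "SELECT rule_id, username, match_digest, destination_hostgroup FROM runtime_mysql_query_rules;"
    done

    If ProxySQL Cluster is doing its job, all three pods should print the same rule set.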

  • MySQL Day Virtual Event: 5 Sessions in 1 Day
    MySQL Day Virtual Event
    Join us on July 29th, 2020 (8AM - 1PM PST) for a virtual event about why and how to upgrade to MySQL 8.0. Learn the key reasons you should upgrade to 8.0. Discover the best practices developed by our support team based on their experience working directly with customers. Get tips and techniques from our community evangelists. Find out why the University of California at Irvine chose to upgrade to 8.0, and learn about their process, their experience, and the improvements to their application performance.
    Register for the event and attend the sessions of your choice. Sessions run on the hour so you can easily plan your agenda around your session interests. Each session will last approximately 30-40 minutes with a 15-minute Q&A, followed by a 5-15 minute break between sessions.
    About the sessions:
    Best Practices Tips | Upgrading to MySQL 8.0
    Upgrading to MySQL 8.0 is easy and straightforward. This webinar will go over the upgrade process, the requirements, and how to use the new MySQL Shell to detect potential problems before the upgrade. You will learn the dos and don'ts. After the webinar, you will know all you need to know to upgrade smoothly and effectively to MySQL 8.0.
    MySQL 8.0 Through the Eyes of the MySQL Support Team
    Working directly with customers, the support team answers many questions related to upgrading to MySQL 8.0. This session will cover some of the best practices developed by the support team, as well as dos and don'ts to consider before upgrading.
    Customer Insights from UC Irvine
    In this session, you will get a quick introduction to UCI, why they chose MySQL, the system environment, the number of servers, and some details about the architecture. The session will cover the reasons UCI chose to upgrade to 8.0, the upgrade process, and the upgrade experience.
    MySQL 8.0 Indexes, Histograms, and Other Ways to Speed Up Your Queries
    Indexes have been used for years to speed up database queries, but they are poorly understood and often over-utilized, and the overhead of maintaining indexes can become burdensome. MySQL also has several types of indexes, such as multi-valued and functional indexes, that can add great value if used properly. Histograms were introduced in MySQL 8.0 and are great for data with little churn, but they need to be properly established. This webinar will cover indexes, histograms, and some other 'secret' techniques to make your MySQL instances really perform.
    Transforming Your Application with MySQL 8.0
    This session will focus on the business benefits of upgrading to MySQL 8.0. There are many new features that make MySQL 8.0 a must-have upgrade: ease of use, increased productivity, new SQL features, security and performance improvements, and the ability to use the same database for both SQL and NoSQL.
    https://www.mysql.com/news-and-events/web-seminars/upgrading-to-mysql-8-0-2020/

  • Running MySQL on selected NUMA Node(s)
    “Running MySQL on selected NUMA node(s)” looks pretty straightforward, but unfortunately it isn't. Recently, I was faced with a situation that demanded running MySQL on 2 (out of 4) NUMA nodes. Naturally, the first thing I tried was to restrict the CPU/core set using numactl --physcpubind, selecting only the CPUs/cores from the said NUMA nodes. MySQL was configured with innodb_numa_interleave=1, so I was expecting it to allocate memory from the said NUMA nodes only (as I had restricted the usage of CPUs/cores).

    Surprise 1: MySQL uses numa_all_nodes_ptr->maskp, which means all the nodes are opted in, even though the CPU task-set is limited to 2 NUMA nodes.

    Some searching pointed me to these 2 pull requests from Daniel Black:

    https://github.com/mysql/mysql-server/pull/104 (5.7)
    https://github.com/mysql/mysql-server/pull/138 (8.0)

    They propose switching to the more logical numa_get_mems_allowed(). As per the documentation, it should return a mask of the nodes that are allowed to allocate memory for the said process.

    ref-from-doc: numa_get_mems_allowed() returns the mask of nodes from which the process is allowed to allocate memory in its current cpuset context.

    So I decided to apply the patch and proceed.

    Surprise 2: Just applying the patch and relying on the CPU/core set didn't help. So I thought of trying the membind option.

    Surprise 3: The command now looks like:

    numactl --physcpubind=<cpu-set-from-numa-node-0,1> --membind=0,1

    This time I surely expected that memory would be allocated from the said NUMA nodes only, but it still wasn't. Memory was allocated from all 4 nodes. Some more searching of the documentation suggested that numa_all_nodes_ptr looks at the Mems_allowed field, as mentioned below:

    numa_all_nodes_ptr: The set of nodes to record is derived from /proc/self/status, field "Mems_allowed". The user should not alter this bitmask.

    And as Alexey Kopytov pointed out in PR#138, numa_all_nodes_ptr and numa_get_mems_allowed() read the same mask. This tends to suggest that numa_get_mems_allowed() is broken, or the documentation needs to be updated. Just for completeness, I also tried numactl --interleave, but that didn't help either.

    Fact validation: I decided to verify this with a simple program (outside MySQL):

    #include <iostream>
    #include <numa.h>
    #include <numaif.h>

    using namespace std;

    int main() {
      // Build with: g++ test.cpp -lnuma
      // Print the first word of both node masks so they can be compared.
      cout << *numa_all_nodes_ptr->maskp << endl;
      cout << *numa_get_mems_allowed()->maskp << endl;
    }

    numactl --membind=0-1 ./a.out
    15
    15

    It is pretty clear that both return the same mask value (15, i.e. all 4 nodes), while numa_get_mems_allowed() should return only the memory-allowed nodes (0x3 for nodes 0-1).

    Workaround: I desperately needed a solution, so I tried a simple workaround of manually feeding the mask (I will continue to follow up on the numactl behavior with the OS vendor). This approach finally worked, and now I can allocate memory from the selected NUMA nodes only.
    +const unsigned long numa_mask = 0x3;
     struct set_numa_interleave_t {
       set_numa_interleave_t() {
         if (srv_numa_interleave) {
           ib::info(ER_IB_MSG_47) << "Setting NUMA memory policy to"
                                     " MPOL_INTERLEAVE";
    -      if (set_mempolicy(MPOL_INTERLEAVE, numa_all_nodes_ptr->maskp,
    +      if (set_mempolicy(MPOL_INTERLEAVE, &numa_mask,
                             numa_all_nodes_ptr->size) != 0) {
             ib::warn(ER_IB_MSG_48) << "Failed to set NUMA memory"
                                       " policy to MPOL_INTERLEAVE: "
    @@ -1000,7 +1001,7 @@ static buf_chunk_t *buf_chunk_init(
     #ifdef HAVE_LIBNUMA
       if (srv_numa_interleave) {
         int st = mbind(chunk->mem, chunk->mem_size(), MPOL_INTERLEAVE,
    -                   numa_all_nodes_ptr->maskp, numa_all_nodes_ptr->size,
    +                   &numa_mask, numa_all_nodes_ptr->size,
                        MPOL_MF_MOVE);
         if (st != 0) {
           ib::warn(ER_IB_MSG_54) << "Failed to set NUMA memory policy of"

    (Of course, this needs a rebuild from source, so it is not an option for binary/package users; well, there is one, see the following section.)

    But then why didn't you use ...? Naturally, most of you may suggest that this could be avoided by toggling innodb_numa_interleave back to OFF and using membind. Of course, this approach works, but it is slightly different: all of the allocated memory is then bound by the said restriction, whereas innodb_numa_interleave applies only to the buffer pool allocation. It may serve a specific purpose, but the two are not really comparable. It has been on my to-do list to check the effect of complete interleaving vs innodb_numa_interleave.

    Conclusion: Balanced distribution across NUMA nodes has multiple aspects, including core selection, memory allocation, thread allocation (equally on the selected NUMA nodes), etc. Lots of exciting and surprising things to explore. If you have more questions/queries, do let me know. I will try to answer them.
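    For readers who just want to try the binary-only route mentioned above (innodb_numa_interleave=OFF plus membind), a rough sketch of the invocation and of how to verify where the memory actually landed is shown below. It uses --cpunodebind instead of the explicit --physcpubind CPU list from the post for brevity; the config path is a placeholder, starting mysqld directly is a simplification (you would normally go through your service manager), and numastat is assumed to be installed (it ships with the numactl package on most distributions).

    # Pin mysqld to the CPUs of NUMA nodes 0-1 and bind its allocations to those nodes
    numactl --cpunodebind=0,1 --membind=0,1 \
      mysqld --defaults-file=/etc/my.cnf --innodb-numa-interleave=OFF &

    # Verify per-node memory usage of the running mysqld process
    numastat -p "$(pidof mysqld)"

    With the binding in effect, the numastat columns for nodes 2 and 3 should stay close to zero while the buffer pool is allocated on nodes 0 and 1.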

  • MySQL Terminology Updates
    It’s been 20 years since MySQL Replication was introduced in MySQL 3.23.15 (released in May 2000). Since then, virtually every MySQL database deployment in production has been using replication for high availability, disaster recovery, read scale-out, and various other purposes.…