Building a PXC Cluster on CentOS 6.x and Adding Nodes Online

Categories: MySQL, Databases

[TOC]




By: liups@liups.com

1. Environment Overview

1.1 Operating System Version

CentOS release 6.8 (Final)

1.2 Default Ports

Port  Purpose
3306  MySQL client connections
4444  SST (full State Snapshot Transfer)
4567  Galera group communication
4568  IST (Incremental State Transfer)

1.3 Host Information

IP              hostname  purpose  base              datadir
192.168.56.101  cet701    pxc01    /usr/local/mysql  /data/mysql
192.168.56.102  cet702    pxc02    /usr/local/mysql  /data/mysql
192.168.56.103  cet703    pxc03    /usr/local/mysql  /data/mysql
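
Optionally, if the nodes are to reach each other by hostname rather than IP, an /etc/hosts fragment to match (a sketch assuming the 192.168.56.x addresses used by the wsrep configuration in section 5.1):

```
192.168.56.101  cet701
192.168.56.102  cet702
192.168.56.103  cet703
```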

2. Downloading the Software

2.1 About the Packages

Percona provides generic tarballs with all required files and binaries for manual installation.

You can download the appropriate tarball package from https://www.percona.com/downloads/Percona-XtraDB-Cluster-57

There are multiple tarballs in the Linux – Generic section depending on the OpenSSL library available in your distribution:

  • ssl100: for Debian prior to 9 and Ubuntu prior to 14.04 versions
  • ssl101: for CentOS 6 and CentOS 7
  • ssl102: for Debian 9 and Ubuntu versions starting from 14.04

The environment here is CentOS 6, so the ssl101 tarball is the right one.

2.2 Fetch the Binary Tarballs

cd /usr/local/src
wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-57/Percona-XtraDB-Cluster-5.7.25-31.35/binary/tarball/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101.tar.gz
wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.14/binary/tarball/percona-xtrabackup-2.4.14-Linux-x86_64.libgcrypt153.tar.gz

3. Pre-installation Preparation

3.1 Install Dependencies

yum install perl-IO-Socket-SSL perl-DBD-MySQL perl-Time-HiRes socat scons nc make libaio libaio-devel boost-devel rsync asio-devel readline-devel libev bison gcc gcc-c++  -y

3.2 Stop the Firewall or Open the Ports

service iptables stop

Or open ports 3306, 4444, 4567, and 4568:

vi /etc/sysconfig/iptables

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 4444 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 4567 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 4568 -j ACCEPT
service iptables reload
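
Since the four rules differ only in the port number, they can also be generated instead of typed by hand (a sketch that only prints the rules; append the output to /etc/sysconfig/iptables yourself):

```shell
# Print one ACCEPT rule per PXC port (3306, 4444, 4567, 4568)
pxc_rules() {
  for port in 3306 4444 4567 4568; do
    echo "-A INPUT -m state --state NEW -m tcp -p tcp --dport $port -j ACCEPT"
  done
}
pxc_rules
```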

3.3 Disable SELinux

Temporary:

setenforce 0

Permanent:

vi /etc/sysconfig/selinux
SELINUX=disabled

Confirm it is disabled (the file change takes effect after a reboot):

[root@mydbt001 local]# getenforce
Disabled

3.4 Create the mysql User

groupadd mysql
useradd -r -g mysql -s /bin/false mysql

4. Install the Percona XtraDB Cluster Software

The installation uses the generic binary tarball: copy it to /usr/local, extract it, and create a symlink to serve as the basedir.

Run the following on all three nodes:

cp /usr/local/src/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101.tar.gz /usr/local/

4.1 Extract the Tarball

cd /usr/local/
tar -xzf Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101.tar.gz

4.2 Create the Symlink

cd /usr/local/
ln -s Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101 mysql

4.3 Set Ownership on basedir and datadir

mkdir -p /data/mysql
chown -R mysql:mysql  /data/mysql
chown -R mysql:mysql  /usr/local/mysql

5. Build the PXC Cluster

5.1 Configure the Parameter Files

The three nodes differ only in server-id and wsrep_node_address.

pxc01

server-id = 1003306
log-bin = /data/mysql/mysql-bin
binlog_format=ROW
#pxc
default_storage_engine=Innodb
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_enmo
wsrep_cluster_address=gcomm://192.168.56.103,192.168.56.101,192.168.56.102
wsrep_node_address=192.168.56.101
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=pxcuser:Pxcuser123

pxc02

server-id = 2003306
log-bin = /data/mysql/mysql-bin
binlog_format=ROW
#pxc
default_storage_engine=Innodb
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_enmo
wsrep_cluster_address=gcomm://192.168.56.103,192.168.56.101,192.168.56.102
wsrep_node_address=192.168.56.102
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=pxcuser:Pxcuser123

pxc03

server-id = 3003306
log-bin = /data/mysql/mysql-bin
binlog_format=ROW
#pxc
default_storage_engine=Innodb
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_enmo
wsrep_cluster_address=gcomm://192.168.56.103,192.168.56.101,192.168.56.102
wsrep_node_address=192.168.56.103
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=pxcuser:Pxcuser123
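
The three snippets above show only the replication and wsrep settings. For context, a minimal complete /etc/my.cnf for pxc01 might look like the following sketch (basedir/datadir from section 1.3, socket and port as seen in the process listing in 5.3; wsrep_sst_auth must match the SST user created in 5.5, 'pxcuser' with password Pxcuser123):

```ini
[mysqld]
user                     = mysql
basedir                  = /usr/local/mysql
datadir                  = /data/mysql
socket                   = /tmp/mysql.sock
port                     = 3306
log-error                = /data/mysql/error.log
server-id                = 1003306
log-bin                  = /data/mysql/mysql-bin
binlog_format            = ROW

# pxc
default_storage_engine   = Innodb
innodb_autoinc_lock_mode = 2
wsrep_cluster_name       = pxc_enmo
wsrep_cluster_address    = gcomm://192.168.56.103,192.168.56.101,192.168.56.102
wsrep_node_address       = 192.168.56.101
wsrep_provider           = /usr/local/mysql/lib/libgalera_smm.so
wsrep_sst_method         = xtrabackup-v2
wsrep_sst_auth           = pxcuser:Pxcuser123
```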

5.2 Initialize MySQL on Node 1

cd /usr/local/mysql/bin
./mysqld --defaults-file=/etc/my.cnf --initialize --basedir=/usr/local/mysql --datadir=/data/mysql --user=mysql
cp /usr/local/mysql/support-files/mysql.server  /etc/init.d/mysql

⚠️ For a binary-tarball installation, edit basedir and datadir in /etc/init.d/mysql; otherwise startup fails with the following error:

sh: wsrep_sst_xtrabackup-v2: command not found
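
Concretely, the two variables near the top of the copied /etc/init.d/mysql default to compile-time paths and must be changed to the layout used here:

```
basedir=/usr/local/mysql
datadir=/data/mysql
```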

5.3 Start the First Node

The first node must be started with the bootstrap-pxc option, which starts mysqld with --wsrep-new-cluster to bootstrap a new cluster.

[root@cet701 data]# service mysql bootstrap-pxc  
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster)...[  OK  ]
[root@cet701 data]# ps -ef |grep mysql
root     15705     1  0 16:17 pts/2    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/data/mysql --pid-file=/data/mysql/mysql.pid --wsrep-new-cluster
mysql    16704 15705 17 16:17 pts/2    00:00:01 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/data/mysql --plugin-dir=/usr/local/mysql/lib/mysql/plugin --user=mysql --wsrep-provider=/usr/local/mysql/lib/libgalera_smm.so --wsrep-new-cluster --log-error=/data/mysql/error.log --open-files-limit=65535 --pid-file=/data/mysql/mysql.pid --socket=/tmp/mysql.sock --port=3306 --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
root     16745  8991  0 16:17 pts/2    00:00:00 grep --color=auto mysql

5.4 Change the root Password

[root@cet701 bin]# cat  /data/mysql/error.log|grep password
2019-06-30T08:09:14.665946Z 1 [Note] A temporary password is generated for root@localhost: 1uZPZS1gou&y
[root@cet701 data]# mysqladmin -uroot -p password        
Enter password: 
New password: 
Confirm new password: 

The new password is Password123.

5.5 Create the SST User

mysql> CREATE USER 'pxcuser'@'localhost' IDENTIFIED BY 'Pxcuser123';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'pxcuser'@'localhost';
Query OK, 0 rows affected (0.09 sec)

5.6 Check PXC Status

mysql> show status like 'wsrep%st%';
+---------------------------------+--------------------------------------+
| Variable_name                   | Value                                |
+---------------------------------+--------------------------------------+
| wsrep_local_state_uuid          | 868f27cb-9b0f-11e9-81ec-ebbaefc4a7ba |
| wsrep_last_applied              | 3                                    |
| wsrep_last_committed            | 3                                    |
| wsrep_flow_control_status       | OFF                                  |
| wsrep_cert_deps_distance        | 1.000000                             |
| wsrep_local_state               | 4                                    |
| wsrep_local_state_comment       | Synced                               |
| wsrep_ist_receive_status        |                                      |
| wsrep_ist_receive_seqno_start   | 0                                    |
| wsrep_ist_receive_seqno_current | 0                                    |
| wsrep_ist_receive_seqno_end     | 0                                    |
| wsrep_cluster_weight            | 1                                    |
| wsrep_evs_evict_list            |                                      |
| wsrep_evs_state                 | OPERATIONAL                          |
| wsrep_cluster_conf_id           | 1                                    |
| wsrep_cluster_size              | 1                                    |
| wsrep_cluster_state_uuid        | 868f27cb-9b0f-11e9-81ec-ebbaefc4a7ba |
| wsrep_cluster_status            | Primary                              |
+---------------------------------+--------------------------------------+

5.7 Initialize Node 2

cd /usr/local/mysql/bin
./mysqld --defaults-file=/etc/my.cnf --initialize --basedir=/usr/local/mysql --datadir=/data/mysql --user=mysql
cp /usr/local/mysql/support-files/mysql.server  /etc/init.d/mysql

⚠️ Edit basedir and datadir in /etc/init.d/mysql here as well.

5.8 Start Node 2

The node joins the running cluster automatically and syncs its data via SST from a donor:

/etc/init.d/mysql start

5.9 Initialize Node 3

cd /usr/local/mysql/bin
./mysqld --defaults-file=/etc/my.cnf --initialize --basedir=/usr/local/mysql --datadir=/data/mysql --user=mysql
cp /usr/local/mysql/support-files/mysql.server  /etc/init.d/mysql

⚠️ Edit basedir and datadir in /etc/init.d/mysql here as well.

5.10 Start Node 3

/etc/init.d/mysql start

5.11 Check PXC Status Again

mysql> show status like 'wsrep%'
    -> ;
+----------------------------------+--------------------------------------------------------+
| Variable_name                    | Value                                                  |
+----------------------------------+--------------------------------------------------------+
| wsrep_local_state_uuid           | b25b348e-9c06-11e9-84f5-ce5173cbcbb4                   |
| wsrep_protocol_version           | 8                                                      |
| wsrep_last_applied               | 54                                                     |
| wsrep_last_committed             | 54                                                     |
| wsrep_replicated                 | 0                                                      |
| wsrep_replicated_bytes           | 0                                                      |
| wsrep_repl_keys                  | 0                                                      |
| wsrep_repl_keys_bytes            | 0                                                      |
| wsrep_repl_data_bytes            | 0                                                      |
| wsrep_repl_other_bytes           | 0                                                      |
| wsrep_received                   | 77                                                     |
| wsrep_received_bytes             | 19487                                                  |
| wsrep_local_commits              | 0                                                      |
| wsrep_local_cert_failures        | 0                                                      |
| wsrep_local_replays              | 0                                                      |
| wsrep_local_send_queue           | 0                                                      |
| wsrep_local_send_queue_max       | 1                                                      |
| wsrep_local_send_queue_min       | 0                                                      |
| wsrep_local_send_queue_avg       | 0.000000                                               |
| wsrep_local_recv_queue           | 0                                                      |
| wsrep_local_recv_queue_max       | 2                                                      |
| wsrep_local_recv_queue_min       | 0                                                      |
| wsrep_local_recv_queue_avg       | 0.012987                                               |
| wsrep_local_cached_downto        | 4                                                      |
| wsrep_flow_control_paused_ns     | 0                                                      |
| wsrep_flow_control_paused        | 0.000000                                               |
| wsrep_flow_control_sent          | 0                                                      |
| wsrep_flow_control_recv          | 0                                                      |
| wsrep_flow_control_interval      | [ 173, 173 ]                                           |
| wsrep_flow_control_interval_low  | 173                                                    |
| wsrep_flow_control_interval_high | 173                                                    |
| wsrep_flow_control_status        | OFF                                                    |
| wsrep_cert_deps_distance         | 15.882353                                              |
| wsrep_apply_oooe                 | 0.000000                                               |
| wsrep_apply_oool                 | 0.000000                                               |
| wsrep_apply_window               | 1.000000                                               |
| wsrep_commit_oooe                | 0.000000                                               |
| wsrep_commit_oool                | 0.000000                                               |
| wsrep_commit_window              | 1.000000                                               |
| wsrep_local_state                | 4                                                      |
| wsrep_local_state_comment        | Synced                                                 |
| wsrep_cert_index_size            | 6                                                      |
| wsrep_cert_bucket_count          | 58                                                     |
| wsrep_gcache_pool_size           | 18312                                                  |
| wsrep_causal_reads               | 0                                                      |
| wsrep_cert_interval              | 0.000000                                               |
| wsrep_ist_receive_status         |                                                        |
| wsrep_ist_receive_seqno_start    | 0                                                      |
| wsrep_ist_receive_seqno_current  | 0                                                      |
| wsrep_ist_receive_seqno_end      | 0                                                      |
| wsrep_incoming_addresses         | 192.168.56.101:3306,192.168.56.102:3306,192.168.56.103:3306 |
| wsrep_desync_count               | 0                                                      |
| wsrep_evs_delayed                |                                                        |
| wsrep_evs_evict_list             |                                                        |
| wsrep_evs_repl_latency           | 0/0/0/0/0                                              |
| wsrep_evs_state                  | OPERATIONAL                                            |
| wsrep_gcomm_uuid                 | eaa2ae94-9c61-11e9-8403-1a605ed6ff1d                   |
| wsrep_cluster_conf_id            | 21                                                     |
| wsrep_cluster_size               | 3                                                      |
| wsrep_cluster_state_uuid         | b25b348e-9c06-11e9-84f5-ce5173cbcbb4                   |
| wsrep_cluster_status             | Primary                                                |
| wsrep_connected                  | ON                                                     |
| wsrep_local_bf_aborts            | 0                                                      |
| wsrep_local_index                | 1                                                      |
| wsrep_provider_name              | Galera                                                 |
| wsrep_provider_vendor            | Codership Oy <info@codership.com>                      |
| wsrep_provider_version           | 3.26(r)                                                |
| wsrep_ready                      | ON                                                     |
+----------------------------------+--------------------------------------------------------+
68 rows in set (0.00 sec)

6. Adding a Node Online

6.1 Environment

Existing PXC cluster:

IP             hostname  purpose  base              wsrep_cluster_name
10.30.105.74   mydbt001  pxc01    /usr/local/mysql  pxc_enmo
10.30.104.245  mydbt002  pxc02    /usr/local/mysql  pxc_enmo

New node:

IP             hostname  purpose  base              wsrep_cluster_name
10.31.91.204   fengshuo  pxc03    /usr/local/mysql  pxc_enmo

A node can be added with a full transfer (SST) or incrementally. Here the incremental route is taken: first build an asynchronous replication slave, then convert that slave into a member of the PXC cluster.

6.2 Parameter Changes

On the existing cluster nodes (pxc01 and pxc02), add the new node to wsrep_cluster_address. This variable can be changed online and takes effect immediately, with no database restart.

mysql> set global wsrep_cluster_address='gcomm://10.30.105.74,10.30.104.245,10.31.91.204';
Query OK, 0 rows affected (8.00 sec)

mysql> show variables like '%wsrep_cluster_address%'
    -> ;
+-----------------------+-------------------------------------------------+
| Variable_name         | Value                                           |
+-----------------------+-------------------------------------------------+
| wsrep_cluster_address | gcomm://10.30.105.74,10.30.104.245,10.31.91.204 |
+-----------------------+-------------------------------------------------+
1 row in set (0.00 sec)
Persist the same value in my.cnf on both nodes so it survives a restart:

wsrep_cluster_address=gcomm://10.30.105.74,10.30.104.245,10.31.91.204

6.3 Build the Slave

While the slave is being built, simulate live inserts into the database.

Before adding the node, create some temporary test data:

mysql> show create table e;
+-------+------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                         |
+-------+------------------------------------------------------------------------------------------------------+
| e     | CREATE TABLE `e` (
  `i` datetime NOT NULL,
  PRIMARY KEY (`i`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+-------+------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql>

Back up the database with xtrabackup, inserting rows into the PXC cluster during the backup to simulate live traffic. (The duplicate-entry errors below are expected: the primary key is a DATETIME with one-second resolution.)


    | 2019-07-02 09:42:42 |
    +---------------------+
    23 rows in set (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    ERROR 1062 (23000): Duplicate entry '2019-07-02 09:42:57' for key 'PRIMARY'
    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.02 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    ERROR 1062 (23000): Duplicate entry '2019-07-02 09:43:00' for key 'PRIMARY'
    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.01 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

    mysql> insert into e values(now());
    Query OK, 1 row affected (0.00 sec)

6.3.1 Back Up with xtrabackup

The backup starts at 190702 09:42:52:

[root@mydbt001 pxc]# xtrabackup --defaults-file=/etc/my.cnf -uroot -pPassword123 --backup --target-dir=/pxc/
xtrabackup: recognized server arguments: --datadir=/data/mysql --open_files_limit=65535 --server-id=1003306 --log_bin=/data/mysql/mysql-bin --innodb_buffer_pool_size=100M --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_flush_log_at_trx_commit=2 --innodb_log_buffer_size=64M --innodb_log_file_size=256M --innodb_log_files_in_group=3 --innodb_max_dirty_pages_pct=90 --innodb_file_per_table=1 --innodb_io_capacity=2000 --innodb_flush_method=O_DIRECT
xtrabackup: recognized client arguments: --datadir=/data/mysql --open_files_limit=65535 --server-id=1003306 --log_bin=/data/mysql/mysql-bin --innodb_buffer_pool_size=100M --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_flush_log_at_trx_commit=2 --innodb_log_buffer_size=64M --innodb_log_file_size=256M --innodb_log_files_in_group=3 --innodb_max_dirty_pages_pct=90 --innodb_file_per_table=1 --innodb_io_capacity=2000 --innodb_flush_method=O_DIRECT --port=3306 --socket=/tmp/mysql.sock --user=root --password=* --backup=1 --target-dir=/pxc/
190702 09:42:52  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;port=3306;mysql_socket=/tmp/mysql.sock' as 'root'  (using password: YES).
190702 09:42:52  version_check Connected to MySQL server
190702 09:42:52  version_check Executing a version check against the server...
190702 09:42:52  version_check Done.
190702 09:42:52 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: /tmp/mysql.sock
Using server version 5.7.22-22-29.26-log
xtrabackup version 2.4.14 based on MySQL server 5.7.19 Linux (x86_64) (revision id: ef675d4)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /data/mysql
xtrabackup: open files limit requested 65535, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:1024M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 3
xtrabackup:   innodb_log_file_size = 268435456
xtrabackup: using O_DIRECT
InnoDB: Number of pools: 1
190702 09:42:52 >> log scanned up to (2638564)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
190702 09:42:52 [01] Copying ./ibdata1 to /pxc/ibdata1
190702 09:42:53 >> log scanned up to (2638564)

……

Backup complete:

xtrabackup: The latest check point (for incremental): '2647383'
xtrabackup: Stopping log copying thread.
.190702 09:43:11 >> log scanned up to (2647392)

190702 09:43:11 Executing UNLOCK TABLES
190702 09:43:11 All tables unlocked
190702 09:43:11 [00] Copying ib_buffer_pool to /pxc/ib_buffer_pool
190702 09:43:11 [00]        ...done
190702 09:43:11 Backup created in directory '/pxc/'
190702 09:43:11 [00] Writing /pxc/backup-my.cnf
190702 09:43:11 [00]        ...done
190702 09:43:11 [00] Writing /pxc/xtrabackup_info
190702 09:43:11 [00]        ...done
xtrabackup: Transaction log of lsn (2638555) to (2647392) was copied.
190702 09:43:11 completed OK!
[root@mydbt001 pxc]#

The backup finished at 190702 09:43:11.

6.3.2 Copy the Backup to the Target Host

scp -r pxc 10.31.91.204:/

6.3.3 Prepare the Data on the Target Node

(The first attempt below fails because the leading -- was dropped from --target-dir; the corrected command follows.)

[root@fengshuo local]# xtrabackup  --defaults-file=/etc/my.cnf --prepare target-dir=/pxc/
xtrabackup: recognized server arguments: --datadir=/data/mysql --open_files_limit=65535 --server-id=1003306 --log_bin=/data/mysql/mysql-bin --innodb_buffer_pool_size=100M --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_flush_log_at_trx_commit=2 --innodb_log_buffer_size=64M --innodb_log_file_size=256M --innodb_log_files_in_group=3 --innodb_max_dirty_pages_pct=90 --innodb_file_per_table=1 --innodb_io_capacity=2000 --innodb_flush_method=O_DIRECT
xtrabackup: recognized client arguments: --datadir=/data/mysql --open_files_limit=65535 --server-id=1003306 --log_bin=/data/mysql/mysql-bin --innodb_buffer_pool_size=100M --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_flush_log_at_trx_commit=2 --innodb_log_buffer_size=64M --innodb_log_file_size=256M --innodb_log_files_in_group=3 --innodb_max_dirty_pages_pct=90 --innodb_file_per_table=1 --innodb_io_capacity=2000 --innodb_flush_method=O_DIRECT --port=3306 --socket=/tmp/mysql.sock --prepare=1
xtrabackup: Error: unknown argument: 'target-dir=/pxc/'
[root@fengshuo local]# xtrabackup  --defaults-file=/etc/my.cnf --prepare --target-dir=/pxc/
xtrabackup: recognized server arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_log_files_in_group=3 --innodb_log_file_size=268435456 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1003306 --redo-log-version=1
xtrabackup: recognized client arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:1024M:autoextend --innodb_log_files_in_group=3 --innodb_log_file_size=268435456 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1003306 --redo-log-version=1 --prepare=1 --target-dir=/pxc/
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: cd to /pxc/
xtrabackup: This target seems to be not prepared yet.
InnoDB: Number of pools: 1
xtrabackup: xtrabackup_logfile detected: size=8388608, start_lsn=(2638555)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:1024M:autoextend
xtrabackup:   innodb_log_group_home_dir = .

……

InnoDB: File './ibtmp1' size is now 12 MB.
InnoDB: 96 redo rollback segment(s) found. 1 redo rollback segment(s) are active.
InnoDB: 32 non-redo rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: page_cleaner: 1000ms intended loop took 7793ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
InnoDB: 5.7.19 started; log sequence number 2648597
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 2648616
190702 10:06:08 completed OK!

6.4 Install the PXC Software

Install from the binary tarball, the same way as in section 4: copy the tarball to /usr/local, extract it, and create the symlink.

cp /usr/local/src/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101.tar.gz /usr/local/
cd /usr/local/
tar -xzf Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101.tar.gz
ln -s Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl101 mysql

6.4.1 Set Ownership on basedir and datadir

mkdir -p /data/mysql
chown -R mysql:mysql  /data/mysql
chown -R mysql:mysql  /usr/local/mysql

6.4.2 Move the Prepared Data Files into datadir

mv /pxc/* /data/mysql/
chown -R mysql:mysql /data/mysql

6.4.3 Fix Ownership and Start the Database

[root@fengshuo data]# chown  -R mysql:mysql mysql
[root@fengshuo data]# /etc/init.d/mysql  start
Starting MySQL (Percona XtraDB Cluster).Logging to '/data/mysql/error.log'.
. SUCCESS!

Log in and verify that the rows inserted during the backup window (09:42 to 09:43) are present:

| 2019-07-02 09:42:59 |
| 2019-07-02 09:43:00 |
| 2019-07-02 09:43:01 |
| 2019-07-02 09:43:05 |
| 2019-07-02 09:43:06 |
| 2019-07-02 09:43:07 |
+---------------------+
33 rows in set (0.01 sec)

6.5 Configure Replication

6.5.1 Find the Binlog Position

[root@fengshuo mysql]# cat xtrabackup_binlog_info
mysql-bin.000002    10747
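
xtrabackup_binlog_info is simply the binlog file name and position separated by whitespace; these are exactly the MASTER_LOG_FILE and MASTER_LOG_POS values used in 6.5.3. A throwaway parser for scripting this step (a hypothetical helper, not part of xtrabackup):

```python
def parse_binlog_info(text: str):
    """Split xtrabackup_binlog_info into (binlog_file, position)."""
    name, pos = text.split()[:2]
    return name, int(pos)

# The file captured above
print(parse_binlog_info("mysql-bin.000002    10747"))  # → ('mysql-bin.000002', 10747)
```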

6.5.2 Create the Replication User

mysql> create user repl@'%' identified by 'Repl1234';
Query OK, 0 rows affected (0.04 sec)

mysql> grant replication slave, replication client on *.* to repl@'%';

6.5.3 change master

mysql>  CHANGE MASTER TO MASTER_HOST='10.30.105.74',MASTER_USER='repl',MASTER_PASSWORD='Repl1234',MASTER_LOG_FILE='mysql-bin.000002',MASTER_LOG_POS=10747;
Query OK, 0 rows affected, 2 warnings (0.04 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G

6.5.4 Check Replication Status

               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.30.105.74
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 11208
               Relay_Log_File: fengshuo-relay-bin.000003
                Relay_Log_Pos: 781
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:

Insert some more rows and confirm they replicate:

| 2019-07-02 09:43:07 |
| 2019-07-02 10:21:25 |
| 2019-07-02 10:21:27 |
| 2019-07-02 10:21:28 |
| 2019-07-02 10:21:29 |
| 2019-07-02 10:21:31 |
| 2019-07-02 10:21:32 |
+---------------------+
39 rows in set (0.00 sec)

6.6 Convert the Slave into a PXC Node

On the slave, find Exec_Master_Log_Pos via show slave status, then run stop slave.

          Master_Log_File: mysql-bin.000002
          Exec_Master_Log_Pos: 12798

6.6.1 On pxc01, Find the Xid Matching Exec_Master_Log_Pos

[root@mydbt001 pxc]# mysqlbinlog -vv /data/mysql/mysql-bin.000002 |grep 12798
#190702 10:21:32 server id 1003306  end_log_pos 12798 CRC32 0x5be4942f  Xid = 50
# at 12798
[root@mydbt001 pxc]#

6.6.2 Add the wsrep Parameters on the New Node (pxc03)

#pxc
default_storage_engine=Innodb
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_enmo
wsrep_cluster_address=gcomm://10.30.105.74,10.30.104.245,10.31.91.204
wsrep_node_address=10.31.91.204
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="pxcuser:Pxcuser123"

6.6.3 Create the grastate.dat File

Copy grastate.dat from pxc01 to the new node, and set seqno to the Xid found above, i.e. 50:

# GALERA saved state
version: 2.1
uuid:    b25b348e-9c06-11e9-84f5-ce5173cbcbb4
seqno:   50
safe_to_bootstrap: 0

6.6.4 Start the New Node

/etc/init.d/mysql start

The error log shows the node catching up via IST rather than a full SST:

2019-07-03T12:27:34.666497Z 0 [Warning] WSREP: Fail to access the file (/data/mysql//gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
2019-07-03T12:27:34.669345Z 0 [Warning] WSREP: (ef903445, 'tcp://0.0.0.0:4567') address 'tcp://10.31.91.204:4567' points to own listening address, blacklisting
2019-07-03T12:27:35.669742Z 1 [Warning] WSREP: Gap in state sequence. Need state transfer.
        2019-07-03T12:27:36.832833Z WSREP_SST: [INFO] xtrabackup_ist received from donor: Running IST
        2019-07-03T12:27:36.834756Z WSREP_SST: [INFO] Galera co-ords from recovery: b25b348e-9c06-11e9-84f5-ce5173cbcbb4:50
2019-07-03T12:27:37.405886Z 0 [Warning] CA certificate ca.pem is self signed.
2019-07-03T12:27:37.458437Z 0 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=fengshuo-relay-bin' to avoid this problem.
2019-07-03T12:27:37.470574Z 0 [Warning] Recovery from master pos 12798 and file mysql-bin.000002 for channel ''. Previous relay log pos and relay log file had been set to 320, ./fengshuo-relay-bin.000005 respectively.

The node joined the cluster successfully.

6.6.5 Check Cluster Status

mysql> show status like '%wsrep_cluster%'
    -> ;
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 21                                   |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | b25b348e-9c06-11e9-84f5-ce5173cbcbb4 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
4 rows in set (0.00 sec)

mysql>

Its status is Primary, and wsrep_cluster_size is 3.

6.6.6 Check grastate.dat

[root@fengshuo mysql]# cat grastate.dat
# GALERA saved state
version: 2.1
uuid:    b25b348e-9c06-11e9-84f5-ce5173cbcbb4
seqno:   -1
safe_to_bootstrap: 0

The seqno value has changed to -1, which is normal for a running node: a valid seqno is only written back on a clean shutdown.

6.6.7 Clean Up the Slave Metadata

Checking the slave status, the old replication metadata is still present:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 10.30.105.74
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 12798
               Relay_Log_File: fengshuo-relay-bin.000006
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 12798
              Relay_Log_Space: 1767
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 0
                  Master_UUID: 8a79a560-9c06-11e9-b48f-00163e064c24
             Master_Info_File: /u01/data/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State:
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)

Manually testing start slave here reports a primary-key conflict, because the incremental data has already been synchronized over to this node.

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.30.105.74
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 13858
               Relay_Log_File: fengshuo-relay-bin.000007
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1062
                   Last_Error: Could not execute Write_rows event on table pxc.e; Duplicate entry '2019-07-02 10:38:45' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000002, end_log_pos 13032
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 12798
              Relay_Log_Space: 1590
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1062
               Last_SQL_Error: Could not execute Write_rows event on table pxc.e; Duplicate entry '2019-07-02 10:38:45' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000002, end_log_pos 13032
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1003306
                  Master_UUID: 8a79a560-9c06-11e9-b48f-00163e064c24
             Master_Info_File: /u01/data/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State:
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp: 190704 05:17:32
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)

You can see that Exec_Master_Log_Pos: 12798 is still at position 12798, so the slave replication metadata is no longer of any use; clear it:

mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

mysql> reset slave all;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
Empty set (0.00 sec)

Bug 14588746 ORA-600 [kjbmprlst:shadow] in LMS in RAC – crashes the instance

Categories: ORACLE, Database

Bug 14588746 – ORA-600 [kjbmprlst:shadow] in LMS in RAC – crashes the instance (Doc ID 14588746.8)

This note gives a brief overview of bug 14588746.
The content was last updated on: 11-SEP-2016
Affects:

Product (Component): Oracle Server (Rdbms)
Range of versions believed to be affected: versions below 12.1
Versions confirmed as being affected: 11.2.0.1, 11.2.0.2, 11.2.0.3
Platforms affected: Generic (all / most platforms affected)
Fixed:

The fix for 14588746 is first included in
12.1.0.1 (Base Release)
11.2.0.4 (Server Patch Set)
11.2.0.3.11 Database Patch Set Update
11.2.0.3 Bundle Patch 23 for Exadata Database
11.2.0.3 Patch 29 on Windows Platforms

Interim patches may be available for earlier versions.
Symptoms:

Related To:

Instance May Crash
Internal Error May Occur (ORA-600)
ORA-600 [kjbmprlst:shadow]
RAC (Real Application Clusters) / OPS
_gc_policy_time
_gc_undo_affinity
Description

This bug is only relevant when using Real Application Clusters (RAC).
A RAC database instance may crash with the following error in an
LMS process:
ORA-600 [kjbmprlst:shadow]

and an alert message like:
LMS1 (ospid: 21692502): terminating the instance due to error 484
..
Instance terminated by LMS1, pid = 21692502

The call stack is likely to look like this:
.. kjbmprlst

Workaround
Disable DRM by setting:
_gc_policy_time=0
_gc_undo_affinity=false
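As a sketch only (hidden "_" parameters should be changed under Oracle Support guidance; the SCOPE/SID syntax below is an assumption, not taken from the note), the workaround can be made persistent via the spfile:

```sql
-- Hedged sketch: disable DRM by setting the hidden parameters in the spfile.
-- Hidden parameters should only be changed on the advice of Oracle Support.
ALTER SYSTEM SET "_gc_policy_time"=0 SCOPE=SPFILE SID='*';
ALTER SYSTEM SET "_gc_undo_affinity"=FALSE SCOPE=SPFILE SID='*';
-- An instance restart is then required for the change to take effect.
```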

Note:
It was previously mentioned that setting _gcs_disable_skip_close_remastering=true
provided a workaround for this issue but that was incorrect. The bug can occur
regardless of the value of that parameter.

Please note: The above is a summary description only. Actual symptoms can vary. Matching to any symptoms here does not confirm that you are encountering this problem. For questions about this bug please consult Oracle Support.
References

Bug:14588746 (This link will only work for PUBLISHED bugs)
Note:245840.1 Information on the sections in this article

ORACLE 12c 12.1.0.2.0 ORA-12850: Could not allocate slaves on all specified instances: 2 needed, 0 allocated

目录 ORACLE12C

DB version: Oracle 12c 12.1.0.2.0

Errors in file /u01/app/oracle/diag/rdbms/tos/LOCDB1/trace/LOCDB1_m001_28575.trc:
ORA-12850: Could not allocate slaves on all specified instances: 2 needed, 0 allocated
Tue Jan 16 06:12:07 2018

High CPU Usage and/or Frequent Occurrences of ORA-12850 For Monitor Queries by MMON From 12.1 (Doc ID 2102131.1)
APPLIES TO:
Oracle Database – Enterprise Edition – Version 12.1.0.1 and later
Information in this document applies to any platform.
SYMPTOMS
MMON consumes more CPU due to the monitoring activity in 12.1.
High CPU consumption and time is spent executing monitor-related queries from MMON slaves, such as the following.

Frequent occurrences of ORA-12850 may appear in the alert.log in both RAC and non-RAC environments. The failing queries are consistently executed against GV$SQL_MONITOR:
CHANGES
Upgrading the database to 12.1.
CAUSE

WITH MONITOR_DATA AS (SELECT INST_ID, KEY, NVL2(PX_QCSID, NULL, STATUS)STATUS, FIRST_REFRESH_TIME, LAST_REFRESH_TIME, REFRESH_COUNT, PROCESS_NAME, SID, SQL_ID,
SQL_EXEC_START, SQL_EXEC_ID, DBOP_NAME, DBOP_EXEC_ID, SQL_PLAN_HASH_VALUE, SESSION_SERIAL#, SQL_TEXT, IS_FULL_SQLTEXT, PX_SERVER#, PX_SERVER_GROUP, PX_SERVER_SET,
PX_QCINST_ID, PX_QCSID, CASE WHEN ELAPSED_TIME < (CPU_TIME+ APPLICATION_WAIT_TIME+ CONCURRENCY_WAIT_TIME+ CL ...; 

Thu Sep 08 04:00:41 2016
Errors in file /app/oracle/diag/rdbms/dbname/dbinstance/trace/dbinstance_m002_14490.trc:
ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 2 allocated

There is a new feature in 12c called the “Automatic Report Capturing Feature”. As part of this feature, MMON slaves execute monitoring SQL to identify resource-intensive statements and automatically generate SQL Monitoring reports for them. Those statements consume somewhat more CPU, which is expected behavior for the new feature, and such monitoring queries can be identified from (G)V$SQLSTATS. If the CPU consumption is significantly high, however, it is not expected behavior and may be caused by the optimizer choosing a suboptimal plan for these statements, which can happen due to Adaptive Optimization, another new 12c feature.

SOLUTION
1. Disable the new feature to reduce CPU consumption (see below); or
2. Kill the MMON slave at the OS level. The SID and serial number can be obtained from an ASH report.
Document 2031605.1 Adaptive Query Optimization

SQL> alter system set "_report_capture_cycle_time"=0; /* Default is 60 seconds */

NOTE: _report_capture_cycle_time=0 /* This is system modifiable with immediate */

TIP: There are some known bugs for the problem which are fixed in latest PSU of 12.1.0.2. Hence, it is strongly recommended to apply the latest PSU for 12.1.0.2.
Document 1924126.1 12.1.0.2 Patch Set Updates – List of Fixes in each PSU
REFERENCES

Download PDF: Document 2102131.1

mysqldump 单表备份引发的性能问题分析

Categories: MySQL, Database

When taking a logical backup of a MySQL database (InnoDB engine) with mysqldump, the parameter combination --single-transaction --master-data=2 is very commonly used. The case below analyzes a performance problem triggered by using --single-transaction --master-data=2 for single-table backups.

  • Problem description:
  • Around 10:00 a.m., users of a business system (backed by a MySQL database) reported that parts of the system were slow and that logins were timing out.

  • Problem analysis and handling
  • Logged on to the database server and checked OS load with top, iostat -x 1, and so on.
    top showed a load average above 10, driven by the mysqld process, so the waiting was inside MySQL.
    show processlist showed a large number of sessions in Waiting for global read lock. (Output has been sanitized.)

    (screenshot: sessions in Waiting for global read lock)
    The first suspicion was a running mysqldump backup, and show processlist indeed showed the backup user executing FLUSH TABLES WITH READ LOCK.

    (screenshot: backup user executing FLUSH TABLES WITH READ LOCK)
    Checking the OS with ps -ef | grep mysqldump confirmed a running mysqldump process.

    (screenshot: mysqldump process)

    The mysqldump processes were started by root; their PIDs differed, as did the dbname and tablename arguments, so this was clearly a for loop running mysqldump against one table at a time.
    Switching to root, crontab -l showed an mbak.sh script scheduled daily at 04:30. The script backs up with mysqldump in three passes: first the whole instance with --all-databases, then each database into its own file, and finally each table into its own file.

    Excerpts from the script follow.
    Whole instance into a single backup file:

    cd ${BACKDIR}/alldb_${TIME}
    $MYSQLDUMP --default-character-set=utf8 -R --hex-blob --single-transaction --master-data=2  --all-databases --log-error=alldb_${TIME}.err >alldb.${TIME}.sql
    echo "`date +"%F %T"` BACKUP all  DATABASE  one file FINISH" |tee -a $logF
    

    One backup file per database:

    echo "DB HOST  $IP IS BACKUP DATABASE per database one file " |tee -a $logF
    for dbname in `$MYSQL -B -N -e "show databases"|grep -vE "information_schema|performance_schema"`
    do
    mkdir -p ${BACKDIR}/${dbname}_${TIME}
    cd ${BACKDIR}/${dbname}_${TIME}
    $MYSQLDUMP --default-character-set=utf8 -R --hex-blob --single-transaction --master-data=2  $dbname --log-error=${dbname}_${TIME}.err >${dbname}.${TIME}.sql
    echo "`date +"%F %T"` BACKUP $dbname  one fie finish" |tee -a $logF
    done
    

    One backup file per table:

    ### per table per database
    echo "DB host $IP is backing" |tee -a $logF
    for dbname in `$MYSQL -B -N -e "show databases"|grep -vE "information_schema|performance_schema"`
    do
    mkdir -p ${BACKDIR}/${dbname}_per_${TIME}
    cd ${BACKDIR}/${dbname}_per_${TIME}
    for table in `$MYSQL $dbname -e "show tables"|sed '1d'|grep -vE "yxxxxxxx"`
    do
    echo "`date +"%F %T"` begin backup  ${dbname}.${table}  TABLE" |tee -a $logF
    $MYSQLDUMP --default-character-set=utf8 -R --hex-blob --single-transaction  --master-data=2  $dbname $table --log-error=${dbname}_${TIME}.err >${dbname}_${table}.sql
    echo "`date +"%F %T"` BACKUP ${dbname}.$table FINISH" |tee -a $logF
    done
    echo "`date +"%F %T"` BACKUP $dbname DATABASE FINISH" |tee -a $logF
    done
    

    Clearly the problem occurred during the per-table pass: per the logic of mbak.sh, the full backup runs first, then the per-database pass, then the per-table pass, and the per-table pass was stuck on FLUSH TABLES WITH READ LOCK, which is an instance-wide lock. Why does backing up a single table lock the whole instance? Looking at the mysqldump commands above, every invocation adds --single-transaction --master-data=2, and that is the key to the problem. Below, the general log is used to analyze the behavior.
    Enable general_log:

    root@hostname 13:14:  [test]> show variables like '%gen%';
    +------------------+------------------------------------------------+
    | Variable_name    | Value                                          |
    +------------------+------------------------------------------------+
    | general_log      | ON                                             |
    | general_log_file | /usr/local/mysql/data/BertLiudeMacBook-Pro.log |
    +------------------+------------------------------------------------+
    

    Back up table t in the test database with the --single-transaction --master-data=2 parameters:

    mysqldump  -uroot -ppassword  --default-character-set=utf8 --single-transaction --master-data=2   test t    >test_t.sql
    

    General log:

    2017-08-28T15:14:15.960100Z     106 Connect    root@localhost on  using Socket
    2017-08-28T15:14:15.960860Z     106 Query        /*!40100 SET @@SQL_MODE='' */
    2017-08-28T15:14:15.962606Z     106 Query        /*!40103 SET TIME_ZONE='+00:00' */
    2017-08-28T15:14:15.962793Z     106 Query        FLUSH /*!40101 LOCAL */ TABLES
    2017-08-28T15:14:15.962887Z     106 Query        FLUSH TABLES WITH READ LOCK
    2017-08-28T15:14:15.962943Z     106 Query        SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
    2017-08-28T15:14:15.962997Z     106 Query        START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
    2017-08-28T15:14:15.963094Z     106 Query        SHOW VARIABLES LIKE 'gtid\_mode'
    2017-08-28T15:14:15.965326Z     106 Query        SELECT @@GLOBAL.GTID_EXECUTED
    2017-08-28T15:14:15.965458Z     106 Query        SHOW MASTER STATUS
    2017-08-28T15:14:15.965554Z     106 Query        UNLOCK TABLES
    …… …...
    2017-08-28T15:14:15.970349Z     106 Init DB       test
    2017-08-28T15:14:15.970438Z     106 Query        SHOW TABLES LIKE 't'
    2017-08-28T15:14:15.971500Z     106 Query        SAVEPOINT sp
    2017-08-28T15:14:15.971790Z     106 Query        show table status like 't'
    2017-08-28T15:14:15.973180Z     106 Query        SET SQL_QUOTE_SHOW_CREATE=1
    2017-08-28T15:14:15.973483Z     106 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:14:15.973668Z     106 Query        show create table `t`
    2017-08-28T15:14:15.973989Z     106 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:14:15.974788Z     106 Query        show fields from `t`
    2017-08-28T15:14:15.975367Z     106 Query        show fields from `t`
    2017-08-28T15:14:15.975627Z     106 Query        SELECT /*!40001 SQL_NO_CACHE */ * FROM `t`
    2017-08-28T15:14:15.975779Z     106 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:14:15.975881Z     106 Query        use `test`
    2017-08-28T15:14:15.975994Z     106 Query        select @@collation_database
    2017-08-28T15:14:15.976158Z     106 Query        SHOW TRIGGERS LIKE 't'
    2017-08-28T15:14:15.976556Z     106 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:14:15.976611Z     106 Query        ROLLBACK TO SAVEPOINT sp
    2017-08-28T15:14:15.976652Z     106 Query        RELEASE SAVEPOINT sp
    2017-08-28T15:14:15.994479Z     106 Quit
    

    Note that it executed:

    FLUSH /*!40101 LOCAL */ TABLES
    FLUSH TABLES WITH READ LOCK
    

    Back up table t in the test database without the --single-transaction --master-data=2 parameters:

    mysqldump -uroot -ppassword --default-character-set=utf8 test t  >test_t.sql
    

    General log:

    2017-08-28T15:18:00.613324Z     109 Connect    root@localhost on  using Socket
    2017-08-28T15:18:00.614229Z     109 Query        /*!40100 SET @@SQL_MODE='' */
    2017-08-28T15:18:00.615737Z     109 Query        /*!40103 SET TIME_ZONE='+00:00' */
    2017-08-28T15:18:00.615939Z     109 Query        SHOW VARIABLES LIKE 'gtid\_mode'
    2017-08-28T15:18:00.618627Z     109 Query        SELECT @@GLOBAL.GTID_EXECUTED
    …… (omitted)
    2017-08-28T15:18:00.624351Z     109 Init DB       test
    2017-08-28T15:18:00.624422Z     109 Query        SHOW TABLES LIKE 't'
    2017-08-28T15:18:00.624843Z     109 Query        LOCK TABLES `t` READ /*!32311 LOCAL */  <-- only table t is locked here
    2017-08-28T15:18:00.624967Z     109 Query        show table status like 't'
    2017-08-28T15:18:00.625452Z     109 Query        SET SQL_QUOTE_SHOW_CREATE=1
    2017-08-28T15:18:00.625514Z     109 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:18:00.625566Z     109 Query        show create table `t`
    2017-08-28T15:18:00.625624Z     109 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:18:00.626177Z     109 Query        show fields from `t`
    2017-08-28T15:18:00.626474Z     109 Query        show fields from `t`
    2017-08-28T15:18:00.626725Z     109 Query        SELECT /*!40001 SQL_NO_CACHE */ * FROM `t`
    2017-08-28T15:18:00.626896Z     109 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:18:00.626978Z     109 Query        use `test`
    2017-08-28T15:18:00.627034Z     109 Query        select @@collation_database
    2017-08-28T15:18:00.627106Z     109 Query        SHOW TRIGGERS LIKE 't'
    2017-08-28T15:18:00.627399Z     109 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:18:00.627451Z     109 Query        UNLOCK TABLES
    2017-08-28T15:18:00.645805Z     109 Quit
    

    Note: this time neither FLUSH /*!40101 LOCAL */ TABLES nor FLUSH TABLES WITH READ LOCK is executed; instead LOCK TABLES `t` READ is executed, locking only the t table being backed up.
    Back up the whole instance with the --all-databases and --single-transaction --master-data=2 parameters:

    mysqldump -uroot -ppassword --default-character-set=utf8  --single-transaction --master-data=2 --all-databases   >all.singel-2.sql
    General log:
    
    2017-08-28T15:22:22.002783Z     110 Connect    root@localhost on  using Socket
    2017-08-28T15:22:22.003739Z     110 Query        /*!40100 SET @@SQL_MODE='' */
    2017-08-28T15:22:22.005306Z     110 Query        /*!40103 SET TIME_ZONE='+00:00' */
    2017-08-28T15:22:22.005487Z     110 Query        FLUSH /*!40101 LOCAL */ TABLES
    2017-08-28T15:22:22.005606Z     110 Query        FLUSH TABLES WITH READ LOCK
    2017-08-28T15:22:22.005684Z     110 Query        SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
    2017-08-28T15:22:22.005721Z     110 Query        START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
    2017-08-28T15:22:22.005803Z     110 Query        SHOW VARIABLES LIKE 'gtid\_mode'
    2017-08-28T15:22:22.008043Z     110 Query        SELECT @@GLOBAL.GTID_EXECUTED
    2017-08-28T15:22:22.008193Z     110 Query        SHOW MASTER STATUS
    2017-08-28T15:22:22.008256Z     110 Query        UNLOCK TABLES
    …… (omitted)
    2017-08-28T15:22:22.022739Z     110 Init DB       b
    2017-08-28T15:22:22.022825Z     110 Query        SHOW CREATE DATABASE IF NOT EXISTS `b`
    2017-08-28T15:22:22.022890Z     110 Query        SAVEPOINT sp
    2017-08-28T15:22:22.022956Z     110 Query        show tables
    2017-08-28T15:22:22.023106Z     110 Query        RELEASE SAVEPOINT sp
    2017-08-28T15:22:22.023157Z     110 Init DB       e
    2017-08-28T15:22:22.023200Z     110 Query        SHOW CREATE DATABASE IF NOT EXISTS `e`
    2017-08-28T15:22:22.023277Z     110 Query        SAVEPOINT sp
    2017-08-28T15:22:22.023353Z     110 Query        show tables
    

    Conclusion: whether for a full backup or a single-table backup, the --single-transaction --master-data=2 combination executes FLUSH /*!40101 LOCAL */ TABLES and FLUSH TABLES WITH READ LOCK in order to capture a consistent show master status position.
    Further testing shows that mysqldump with only --single-transaction and without --master-data=2 takes no table locks at all, while --master-data=2 alone still executes the global FLUSH TABLES WITH READ LOCK in order to capture show master status.
    General log of a single-table backup with only --single-transaction:

    2017-08-28T15:24:59.149626Z     111 Connect    root@localhost on  using Socket
    2017-08-28T15:24:59.150404Z     111 Query        /*!40100 SET @@SQL_MODE='' */
    2017-08-28T15:24:59.151672Z     111 Query        /*!40103 SET TIME_ZONE='+00:00' */
    2017-08-28T15:24:59.151944Z     111 Query        SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
    2017-08-28T15:24:59.152007Z     111 Query        START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
    2017-08-28T15:24:59.152178Z     111 Query        SHOW VARIABLES LIKE 'gtid\_mode'
    2017-08-28T15:24:59.154385Z     111 Query        SELECT @@GLOBAL.GTID_EXECUTED
    2017-08-28T15:24:59.154538Z     111 Query        UNLOCK TABLES
    …… (omitted)
    2017-08-28T15:24:59.160621Z     111 Init DB       test
    2017-08-28T15:24:59.160790Z     111 Query        SHOW TABLES LIKE 't'
    2017-08-28T15:24:59.161636Z     111 Query        SAVEPOINT sp
    2017-08-28T15:24:59.161832Z     111 Query        show table status like 't'
    2017-08-28T15:24:59.162886Z     111 Query        SET SQL_QUOTE_SHOW_CREATE=1
    2017-08-28T15:24:59.163046Z     111 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:24:59.163226Z     111 Query        show create table `t`
    2017-08-28T15:24:59.163401Z     111 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:24:59.163994Z     111 Query        show fields from `t`
    2017-08-28T15:24:59.164420Z     111 Query        show fields from `t`
    2017-08-28T15:24:59.164831Z     111 Query        SELECT /*!40001 SQL_NO_CACHE */ * FROM `t`
    2017-08-28T15:24:59.165083Z     111 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:24:59.165263Z     111 Query        use `test`
    2017-08-28T15:24:59.165378Z     111 Query        select @@collation_database
    2017-08-28T15:24:59.165540Z     111 Query        SHOW TRIGGERS LIKE 't'
    2017-08-28T15:24:59.167415Z     111 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:24:59.167585Z     111 Query        ROLLBACK TO SAVEPOINT sp
    2017-08-28T15:24:59.167707Z     111 Query        RELEASE SAVEPOINT sp
    

    Conclusion: with only --single-transaction, no lock statement is executed at all; in other words, this allows a true hot backup.

    General log of a single-table backup with only the --master-data=2 parameter:

    2017-08-28T15:27:20.891774Z     112 Connect    root@localhost on  using Socket
    2017-08-28T15:27:20.891932Z     112 Query        /*!40100 SET @@SQL_MODE='' */
    2017-08-28T15:27:20.892941Z     112 Query        /*!40103 SET TIME_ZONE='+00:00' */
    2017-08-28T15:27:20.893122Z     112 Query        FLUSH /*!40101 LOCAL */ TABLES
    2017-08-28T15:27:20.893404Z     112 Query        FLUSH TABLES WITH READ LOCK
    2017-08-28T15:27:20.893508Z     112 Query        SHOW VARIABLES LIKE 'gtid\_mode'
    2017-08-28T15:27:20.897696Z     112 Query        SELECT @@GLOBAL.GTID_EXECUTED
    2017-08-28T15:27:20.897990Z     112 Query        SHOW MASTER STATUS
    2017-08-28T15:27:20.906499Z     112 Init DB       test
    …… (omitted)
    2017-08-28T15:27:20.906595Z     112 Query        SHOW TABLES LIKE 't'
    2017-08-28T15:27:20.907204Z     112 Query        show table status like 't'
    2017-08-28T15:27:20.907865Z     112 Query        SET SQL_QUOTE_SHOW_CREATE=1
    2017-08-28T15:27:20.907973Z     112 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:27:20.908068Z     112 Query        show create table `t`
    2017-08-28T15:27:20.908182Z     112 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:27:20.908294Z     112 Query        show fields from `t`
    2017-08-28T15:27:20.908705Z     112 Query        show fields from `t`
    2017-08-28T15:27:20.909113Z     112 Query        SELECT /*!40001 SQL_NO_CACHE */ * FROM `t`
    2017-08-28T15:27:20.910751Z     112 Query        SET SESSION character_set_results = 'binary'
    2017-08-28T15:27:20.910859Z     112 Query        use `test`
    2017-08-28T15:27:20.910947Z     112 Query        select @@collation_database
    2017-08-28T15:27:20.911098Z     112 Query        SHOW TRIGGERS LIKE 't'
    2017-08-28T15:27:20.911609Z     112 Query        SET SESSION character_set_results = 'utf8'
    2017-08-28T15:27:20.928296Z     112 Quit
    
  • Conclusion:
  • These experiments show that whenever --master-data=2 is added, mysqldump executes the global FLUSH TABLES WITH READ LOCK: even a single-table backup locks the whole instance in order to capture show master status, so the whole instance ends up waiting. During off-peak hours the global lock is acquired quickly, but this instance had more than 700 tables, and some of them were still being backed up at 10 a.m., the business peak. With heavy DML at peak time, acquiring the global lock becomes much harder, Waiting for global read lock lasts longer, and normal business is disrupted.
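The locking behavior above can be spot-checked directly from the general log. A minimal sketch (the log excerpt and path below are illustrative, not taken from the case):

```shell
# Hedged sketch: count the global read locks recorded in a general-log excerpt.
# A small sample excerpt stands in for the real general_log_file here.
cat > /tmp/general_excerpt.log <<'EOF'
2017-08-28T15:14:15.962887Z 106 Query FLUSH TABLES WITH READ LOCK
2017-08-28T15:14:15.965554Z 106 Query UNLOCK TABLES
EOF
# One hit means the dump session took the global lock.
grep -c 'FLUSH TABLES WITH READ LOCK' /tmp/general_excerpt.log
```

Running the same grep against the real general log during a backup window shows at a glance whether a dump invocation took the global lock.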

  • Problem resolution and improvement
  • Resolution:
    Found the PID of the mbak.sh script and killed it; the database returned to normal.
    Improvement:
    Adjust the backup strategy:
    1. Drop the one-file-per-table backup to reduce global locking (tested in production, a full mysqldump of the 17 GB instance takes less than five minutes);
    2. If single-table backups are really needed, drop the --master-data=2 parameter and use only --single-transaction;
    3. Replace mysqldump logical backups with xtrabackup physical backups for online hot backup of the (InnoDB) database.

  • Summary:
  • Single-table backup: do not use the --master-data=2 parameter.
    Full backup: combine --master-data=2 with --single-transaction for an online InnoDB hot backup.
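The two recommended invocations from the summary can be sketched as follows (user, password, database, and table names are placeholders):

```shell
# Single-table hot backup: --single-transaction only, so no global read lock is taken.
mysqldump -uroot -p'***' --default-character-set=utf8 --single-transaction dbname tbl > tbl.sql

# Full backup: --single-transaction plus --master-data=2 records the binlog position
# as a comment; the global read lock is held only briefly at the start of the dump.
mysqldump -uroot -p'***' --default-character-set=utf8 --single-transaction --master-data=2 --all-databases > all.sql
```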

    Appendix:
    From the official documentation on --single-transaction:
    https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_single-transaction
    --single-transaction
    This option sets the transaction isolation mode to REPEATABLE READ and sends a START TRANSACTION SQL statement to the server before dumping data. It is useful only with transactional tables such as InnoDB, because then it dumps the consistent state of the database at the time when START TRANSACTION was issued without blocking any applications.
    When using this option, you should keep in mind that only InnoDB tables are dumped in a consistent state. For example, any MyISAM or MEMORY tables dumped while using this option may still change state.
    While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log coordinates), no other connection should use the following statements: ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent read is not isolated from those statements, so use of them on a table to be dumped can cause the SELECT that is performed by mysqlpump to retrieve the table contents to obtain incorrect contents or fail.
    --add-locks and --single-transaction are mutually exclusive.
    From the official documentation on --master-data:
    https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_master-data
    --master-data[=value]
    Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.
    If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1.
    This option requires the RELOAD privilege and the binary log must be enabled.
    The --master-data option automatically turns off --lock-tables. It also turns on --lock-all-tables, unless --single-transaction also is specified, in which case, a global read lock is acquired only for a short time at the beginning of the dump (see the description for --single-transaction). In all cases, any action on logs happens at the exact moment of the dump.
    It is also possible to set up a slave by dumping an existing slave of the master, using the --dump-slave option, which overrides --master-data and causes it to be ignored if both options are used.
    From the official documentation on FLUSH TABLES WITH READ LOCK:
    https://dev.mysql.com/doc/refman/5.7/en/flush.html#flush-tables-with-read-lock

    Closes all open tables and locks all tables for all databases with a global read lock. This is a very convenient way to get backups if you have a file system such as Veritas or ZFS that can take snapshots in time. Use UNLOCK TABLES to release the lock.
    FLUSH TABLES WITH READ LOCK acquires a global read lock rather than table locks, so it is not subject to the same behavior as LOCK TABLES and UNLOCK TABLES with respect to table locking and implicit commits:

    FLUSH TABLES WITH READ LOCK is not compatible with XA transactions.
    FLUSH TABLES WITH READ LOCK does not prevent the server from inserting rows into the log tables (see Section 5.4.1, “Selecting General Query and Slow Query Log Output Destinations”).

    Checklist for Performance Degradation of the Data Pump Export (expdp) and Import (impdp) Utilities (Doc ID 1549185.1)

    Categories: ORACLE, Database

    Document contents

    Applies to:

    Oracle Database – Enterprise Edition – versions 10.1.0.2 to 12.1.0.2 [releases 10.1 to 12.1]
    The information in this document applies to any platform.

    Purpose

    This document describes possible causes of the performance-related problems encountered when transferring data with the Data Pump export and import utilities.

    Scope

    The intended audience is users of Oracle10g and Oracle11g databases who use the Export Data Pump utility to export data from a source Oracle database and the Import Data Pump utility to import that data into a target Oracle database. This document applies only to the newer Export Data Pump (expdp) and Import Data Pump (impdp) clients, not to the original export (exp) and import (imp) clients. For Oracle10g and later, Data Pump is the recommended way to transfer data between Oracle databases.

    Details

    Introduction

    Starting with release 10g (10.1.0), Oracle introduced the Oracle Data Pump technology, which lets users move data and metadata from one database to another at very high speed. This technology is the foundation of Oracle's data movement utilities, Export Data Pump and Import Data Pump.

    In some cases, performance problems may be encountered when unloading or loading data with the Data Pump clients. This document details installation and configuration settings that can affect Data Pump client performance, explains how to check what a Data Pump job is doing at a particular moment, and discusses some known defects that affect performance.

    Parameters

    This section lists the Data Pump parameters that can affect the performance of a Data Pump export or import job, together with some general database parameters (init.ora/spfile) that are known to affect Data Pump jobs.
    If you hit a Data Pump performance problem that needs to be resolved, and the job uses one or more of the parameters below, first review the notes below and check whether the problem still reproduces without the parameter or with the parameter used differently.

    1. Data Pump parameter: PARALLEL

      For more details, see also:
      Note:365459.1 “Parallel Capabilities of Oracle Data Pump”
      .
    2. Data Pump parameter: DUMPFILE

      .
    3. Export Data Pump parameter: ESTIMATE

      For details on the Export Data Pump ESTIMATE parameter, see also:
      Note.786165.1 “Understanding the ESTIMATE and ESTIMATE_ONLY parameter in Export DataPump”
      .
    4. Export Data Pump parameters: FLASHBACK_SCN and FLASHBACK_TIME

      .
    5. Import Data Pump parameter: TABLE_EXISTS_ACTION

      .
    6. Import Data Pump parameters: REMAP_SCHEMA or REMAP_TABLESPACE

      For details on this issue, see also the “Defect details” section below, and:
      Note:429846.1 “Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters”
      .
    7. Database parameter: CURSOR_SHARING

      For details on this issue, see also the “Defect details” section below, and:
      Note:94036.1 “Init.ora Parameter “CURSOR_SHARING” Reference Note”
      Note:421441.1 “Datapump Import With dblink Going Slow With cursor_sharing Set to ‘force'”
      .
    8. Export/Import Data Pump parameter: STATUS. Monitors a running Data Pump job. The status information is written only to your standard output device, not to the log file (even if an active log file exists).
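As a hypothetical invocation (the user, directory object, and dump file name are placeholders, not taken from this note), STATUS can be added to any Data Pump job:

```shell
# Hedged sketch: refresh the job status on standard output every 60 seconds.
# Connection details, directory object, and dump file name are placeholders.
expdp system/password DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=y STATUS=60
```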

    Checking Data Pump activity

    Overview of known defects

    Below is an overview of the known performance-related defects in the various Oracle10g and Oracle11g releases. See the section following the overview for details on these defects and possible workarounds.

    Note 1: besides Data Pump-specific defects, defects in other components, for example optimizer-related ones, can also affect performance during a Data Pump job. Only some of the highest-impact defects are listed below.

    Note 2: defects that affect Export Data Pump also affect Import Data Pump when the import runs with the NETWORK_LINK parameter specified. These defects are listed only once, in the Export Data Pump section.

    Export DataPump (expdp):

    10.1.0.1.0 to 10.1.0.3.0

    – Bug 3447032 – Import Data Pump is slow when importing statistics
    Bug:4513695 – Poor performance for SELECT with ROWNUM=1 with literal replacement
    – Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schema’s
    – Bug:5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
    Bug:5590185 – Consistent Export Data Pump is slow when exporting row data
    Bug:5928639 – Export Data Pump of table can be very slow if CURSOR_SHARING is not EXACT
    – Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables

    10.1.0.4.0 to 10.1.0.5.0, and 10.2.0.1.0 to 10.2.0.3.0
    Bug:4513695 – Poor performance for SELECT with ROWNUM=1 with literal replacement
    – Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schema’s
    Bug:5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
    Bug:5590185 – Consistent Export Data Pump is slow when exporting row data
    Bug:5928639 – Export Data Pump of table can be very slow if CURSOR_SHARING is not EXACT
    – Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables
    Bug 5573425 – Slow Datapump with wrong results due to subquery unnesting and complex view

    10.2.0.4.0
    Bug 7413726 – Poor EXPDP performance when db COMPATIBLE=10.2.0.3 or 10.2.0.4 (duplicate of Bug 7710931)
    Bug 7710931 – DataPump export is extremely slow when extracting schema
    Bug 6460304 – (affects earlier versions as well) Expdp domain index dump using RULE Optimizer and slow
    Bug 7722575 -DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp

    11.1.0.6.0
    Bug 7585314 – OCSSD.BIN consumes much too much CPU while running Datapump
    Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp

    11.1.0.7.0
    Bug 8363441 – Very Expensive Sql Statement During Datapump Import With Many Subpartitions
    Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp
    Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS

    11.2.0.1
    Bug 10178675 – expdp slow with processing functional_and_bitmap/index
    Bug 10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7

    11.2.0.3
    – <Unpublished Bug 12780993> DATA PUMP PERFORMANCE FOR ESTIMATE=STATISTICS IN EXPORT IS BAD
    Bug 13573203 SLOW INDEX EXPORT DUE TO PERFORMANCE ISSUE WITH METADATA KU$_INDEX_COL_VIEW
    Bug 13914808 QUERY AGAINST KU$_INDEX_VIEW KU$ SLOW EVEN AFTER USING METADATA FROM 13844935
    Bug 14192178 – EXPDP of partitioned table can be slow
    Bug 14794472 – EXPDP TOO SLOW HAVING TOO MANY TABLESPACES
    Bug 16138607 – SLOW EXPDP AFTER 11.2.0.3 UPGRADE
    Bug 16298117 – TTS EXPDP TAKING 26 HOURS TO COMPLETE, MOST OF TIME PROCESSING INDEX INFO
    Bug 16856028 – EXPORT DATAPUMP SLOW ON DATAGUARD STANDBY INSTANCE
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 20446613 – EXPORTING NON-STREAMS TABLE FROM STRADMIN SCHEMA OVER NETWORK LINK IS SLOW
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY

    Note:
    1)
    For 11.2.0.3, MLR Patch 16038089 includes the following fixes:
    Bug 12325243 – SLOW PERFORMANCE ON EXPDP FUNCTIONAL AND BITMAP INDEXES
    – Unpublished Bug 12780993 – DATA PUMP PERFORMANCE FOR ESTIMATE=STATISTICS IN EXPORT IS BAD
    Bug 13573203 – SLOW INDEX EXPORT DUE TO PERFORMANCE ISSUE WITH METADATA KU$_INDEX_COL_VIEW
    Bug 13844935 – QUERY AGAINST KU$_INDEX_VIEW SLOW IN 11.2.0.3
    Bug 14192178 – BUG 14006804 FIX DOES NOT RESOLVE THE PERFORMANCE ISSUE

    2)
    Compared to Patch 16038089, the following two patches are the better choice:
    11.2.0.3 – Patch 15893700
    11.2.0.3.3 or higher – MLR Patch 14742362
    This is because these two patches contain all the fixes of Patch 16038089, and also fix some additional performance issues that the earlier patch does not.

    3)
    All 8 bugs fixed by Patch 14742362 are also fixed in the 11.2.0.4 patch set; for details see:
    Note 1562142.1 – 11.2.0.4 Patch Set – List of Bug Fixes by Problem Type

    11.2.0.4
    Bug 14794472 – EXPDP TOO SLOW HAVING TOO MANY TABLESPACES
    Bug 16856028 – EXPORT DATAPUMP SLOW ON DATAGUARD STANDBY INSTANCE
    Bug 18469379 – Data pump export estimate phase takes a long time to determine if table is empty
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 19674521 – EXPDP takes a long time when exporting a small table
    Bug 20111004 – “COMMENT ON COLUMN” statement waits 1 second on “Wait for Table Lock”
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY
    Bug 20548904 – EXPDP HANG IN METADA_ONLY ON A PARTITION TABLE WITH AROUND 40000 SUBPARTITIONS
    Bug 20446613 – EXPORTING NON-STREAMS TABLE FROM STRADMIN SCHEMA OVER NETWORK LINK IS SLOW
    Bug 21286665 – EXPORT IS SLOW WAITING FOR “STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY”

    Note:
    MLR Patch 20883577, released on top of 11.2.0.4, contains the fixes for bugs 18469379, 18793246, 19674521, 20236523 and 20548904
    MLR Patch 21443197, released on top of 11.2.0.4, contains the fixes for bugs 18082965, 18469379, 18793246, 20236523, 19674521, 20532904 and 20548904

    12.1.0.1
    Bug 18469379 – Data pump export estimate phase takes a long time to determine if table is empty
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    – Unpublished Bug 18720801 – DATAPUMP EXPORT IS SLOW DUE TO EXPORT OF SYNOPSES
    Bug 20111004 – “COMMENT ON COLUMN” statement waits 1 second on “Wait for Table Lock”

    12.1.0.2
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY
    – Unpublished Bug 17662403 – DATA PUMP EXPORT: SLOW I/O PERFORMANCE WRITING TO NFS DISKS
    Bug 20548904 – EXPDP HANG IN METADA_ONLY ON A PARTITION TABLE WITH AROUND 40000 SUBPARTITIONS
    Bug 21128593 – UPDATING THE MASTER TABLE AT THE END OF DP JOB IS SLOW STARTING WITH 12.1.0.2

    Note:
    MLR Patch 20687195, released on top of 12.1.0.2, contains the fixes for bugs 18793246, 20236523 and 20548904
    MLR Patch 21554480, released on top of 12.1.0.2, contains the fixes for bugs 18793246, 20236523, 20548904 and 21128593.

    Import DataPump (impdp):

    10.1.0.1.0    10.1.0.3.0
    – Bug 3447032 – Import Data Pump is slow when importing statistics
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.1.0.4.0
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.1.0.5.0
    – Bug 3508675 – Import Data Pump is slow when importing TABLE_DATA
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.2.0.1.0    10.2.0.3.0
    Bug:5071931 – Import Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE is slow
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    Bug 6989875 – Transportable Tablespace Import Spins Using CPU
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.2.0.4.0
    Bug 7439689 – (affects earlier versions as well) Impdp worker process spinning on MERGE statement

    11.1.0.6.0
    Bug 7585314 – OCSSD.BIN consumes much too much CPU while running Datapump

    11.1.0.7.0
    Bug 8363441 – Very Expensive Sql Statement During Datapump Import With Many Subpartitions

    11.2.0.2
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU

    11.2.0.3
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 14834638 – Import slow on create partitioned index
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU
    Bug 19520061 – IMPDP: EXTREMELY SLOW IMPORT FOR A PARTITIONED TABLE
    Bug 20532904 – DATAPUMP SLOW FOR PARTITIONED TABLE
    Bug 14192178 – EXPDP of partitioned table can be slow
    Note: the fix for expdp Bug 14192178 also helps some impdp/import operations and some DBMS_METADATA queries

    11.2.0.4
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 19520061 – IMPDP: EXTREMELY SLOW IMPORT FOR A PARTITIONED TABLE

    12.1.0.1
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU

    Defect Details

    1. Bug 3447032 – Import Data Pump is slow when importing statistics
      – Defect: Bug 3447032 "DBMS_STATS.SET_COLUMN_STATS can be slow (affects IMPORT)" (not a public bug)
      – Symptoms: an Import (original client) or Import Data Pump job may show long delays when importing INDEX_STATISTICS or TABLE_STATISTICS
      – Releases: 10.1.0.3.0 and below
      – Fixed in: 10.1.0.4.0 and above; for some platforms, Patch:3447032 is available for 10.1.0.3.0
      – Patched files: exuazo.o  kustat.xsl
      – Workaround: exclude statistics from the import (EXCLUDE=statistics) and re-create the statistics manually after the import completes
      – Cause: a problem with setting column statistics on tables with (many) subpartitions
      – Trace: SQL trace shows references to the DBMS_STATS package
      – Remarks: the fix for this bug must be installed on both sites (source and target database), and the complete Export or Export Data Pump dump file must be re-created in order to benefit from the performance improvement at import time.
      .
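    A minimal sketch of this workaround, assuming a dump file dump.dmp in a directory object DP_DIR and a schema SCOTT (all placeholder names):

    ```shell
    # Import while skipping statistics (workaround for Bug 3447032);
    # DP_DIR, dump.dmp and SCOTT are placeholder names.
    impdp system DIRECTORY=DP_DIR DUMPFILE=dump.dmp EXCLUDE=STATISTICS

    # Re-create optimizer statistics manually after the import completes:
    sqlplus / as sysdba <<'EOF'
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
    EOF
    ```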
    2. Bug 3508675 – Import Data Pump is slow when importing TABLE_DATA
      – Defect: Bug 3508675 "APPSST10G: BAD PLAN WHEN QUERYING ALL_POLICIES WHEN IMPORTING TABLE_DATA" (not a public bug)
      – Symptoms: an impdp job may show high CPU usage and slow progress during the TABLE_DATA import phase
      – Releases: 10.1.0.5.0
      – Fixed in: 10.2.0.1.0 and above; Patch:3508675 is available as a generic fix for 10.1.0.5.0
      – Patched files: prvtbpdi.plb
      – Workaround: none
      – Cause: introduced by the fix for Bug 3369744, ALL_SYNONYMS view does not show synonyms for synonyms (not a public bug)
      – Trace: SQL trace and AWR report show long execution times and high CPU usage for the query:
      SELECT count(*) FROM ALL_POLICIES WHERE enable = :y and ins = :y2 and object_name = :tname and object_owner = :sname
      – Remarks: this bug may show up during impdp jobs into an Oracle Applications database (apps) or any other target database into which many tables are imported.
      .
    3. Bug 4513695 – Export Data Pump of large table can be very slow when CURSOR_SHARING=SIMILAR
      – Defect: Bug:4513695 "Poor performance for SELECT with ROWNUM=1 with literal replacement"
      – Symptoms: an Export Data Pump job of a large table (100+ GB) may be much slower than an export with the original exp client (e.g. the export takes more than 24 hours)
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above; for some platforms, Patch:5481520 is available for 10.2.0.3.0
      – Patched files: apa.o kko.o kkofkr.o qerco.o
      – Workaround: if possible, set CURSOR_SHARING=EXACT before starting the Export Data Pump job
      – Cause: query optimization problem in the Cost Based Optimizer (CBO) when cursor_sharing is set to similar
      – Trace: Data Pump Worker trace shows very high elapsed fetch times for "SELECT NVL((SELECT /*+ NESTED_TABLE_GET_REFS */ :"SYS_B_0" FROM ... WHERE ROWNUM = :"SYS_B_1"), :"SYS_B_2") FROM DUAL"
      – Remarks: the fix for this defect is only available as part of the fix for Bug:5481520 "Wrong results with ROWNUM and bind peeking".
      .
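    The workaround can be applied as follows; a hedged sketch, assuming a DBA account and placeholder object names:

    ```shell
    # Set CURSOR_SHARING back to EXACT before starting the export
    # (workaround for Bug 4513695).
    sqlplus / as sysdba <<'EOF'
    ALTER SYSTEM SET cursor_sharing = 'EXACT' SCOPE=BOTH;
    EOF

    # Then run the export of the large table (placeholder names):
    expdp system DIRECTORY=DP_DIR DUMPFILE=big_table.dmp TABLES=scott.big_table
    ```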
    4. Bug 5071931 – Import Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE is slow
      – Defect: Bug:5071931 "DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW"
      – Symptoms: an impdp job with REMAP_SCHEMA and REMAP_TABLESPACE runs slowly during the import of DDL, e.g.: TABLE, INDEX, OBJECT_GRANT
      – Releases: 10.2.0.1.0 to 10.2.0.3.0
      – Fixed in: 10.2.0.4.0 and above; Patch:5071931 is available as a generic fix for 10.2.0.3.0, and for some platforms also for lower versions
      – Patched files: prvtmeti.plb
      – Workaround: do not use the REMAP_% parameters if they are not required
      – Cause: a problem with chaining multiple transformations together
      – Trace: Data Pump Worker trace shows high elapsed times between "DBMS_METADATA.CONVERT called" and "DBMS_METADATA.CONVERT returned"
      – Remarks: this defect does not reproduce in Oracle10g Release 1; for details, see also
      Note:429846.1 "Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters".
      .
    5. Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schemas
      – Defect: Bug 5095025 "ORA-4030 (KXS-HEAP-C,TEMPORARY MEMORY) USING EXPDP" (not a public bug)
      – Symptoms: a schema-level expdp job for many schemas (e.g. 50+) may fail with exhausted PGA memory (memory leak) while exporting procedural objects such as schema jobs
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above
      – Patched files: (in patch set)
      – Workaround: if possible, run multiple Export Data Pump jobs, each exporting fewer schemas
      – Cause: query optimization problem (Rule Based Optimizer (RBO) instead of Cost Based Optimizer (CBO))
      – Trace: ORA-4030 and Data Pump Worker trace may show references to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('PROCDEPOBJ_T', ..."
      – Remarks: related to this defect: Bug:5464834, Bug:5928639 and Bug 5929373 (not a public bug)
      .
    6. Bug 5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
      – Defect: Bug:5292551 "IMPDP VERY SLOW WHEN IMPORTING A TABLE WITH INITIALIZED COLUMN OF TYPE VARRAY"
      – Symptoms: an impdp job may be very slow for specific tables (e.g. tables with Spatial data MDSYS.SDO_GEOMETRY), and the Data Pump Worker process shows ever-increasing memory usage while loading those tables
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above; for some platforms, Patch:5292551 is available for 10.2.0.3.0
      – Patched files: kpudp.o
      – Workaround: if possible, exclude those tables with EXCLUDE=TABLE:"in('TAB_NAME', ...)" and import them separately in a second, table-level Import Data Pump job with TABLES=owner.tab_name
      – Cause: memory is not freed, which results in a large amount of allocated memory
      – Trace: heapdump shows many freeable chunks "freeable assoc with marc" or "klcalh:ld_hds"
      – Remarks: after running for several days, the impdp job may fail with errors such as ORA-4030 (out of process memory when trying to allocate xxx bytes), ORA-31626 (job does not exist), or internal errors ORA-00600 [729], [12432], [space leak].
      .
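    The two-step workaround above can be sketched like this (DP_DIR, full.dmp, SCOTT and GEO_TAB are placeholder names); a parameter file avoids shell-quoting problems with the EXCLUDE clause:

    ```shell
    # Step 1: import everything except the problematic VARRAY/Spatial tables.
    cat > imp_skip.par <<'EOF'
    DIRECTORY=DP_DIR
    DUMPFILE=full.dmp
    EXCLUDE=TABLE:"IN ('GEO_TAB')"
    EOF
    impdp system PARFILE=imp_skip.par

    # Step 2: import the excluded tables in a separate table-level job.
    impdp system DIRECTORY=DP_DIR DUMPFILE=full.dmp TABLES=scott.geo_tab
    ```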
    7. Bug 5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
      – Defect: Bug:5464834 "ORA-4030 (KXS-HEAP-C,TEMPORARY MEMORY) USING EXPDP"
      – Symptoms: a table-level expdp job for many tables (e.g. 250+) may fail with exhausted PGA memory (memory leak) while exporting table data
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above; Patch:5464834 is available as a generic fix for 10.1.0.4.0 and 10.2.0.3.0
      – Patched files: catmeta.sql  prvtmeti.plb
      – Workaround: if possible, run multiple Export Data Pump jobs, each exporting a smaller number of tables
      – Cause: query optimization problem (Rule Based Optimizer (RBO) instead of Cost Based Optimizer (CBO))
      – Trace: ORA-4030 and Data Pump Worker trace may show references to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', ..."
      – Remarks: related to this defect: Bug 5095025 (not a public bug), Bug:5928639 and Bug 5929373 (not a public bug).
    8. Bug 5555463 – Import Data Pump can be slow when importing small LOBs (under 256K)
      – Defect: Bug 5555463 "PERFORMANCE ISSUES FOR DATAPUMP IMPORT/EXTERNAL_TABLE MODE OF TABLES WITH LOBS" (not a public bug)
      – Symptoms: degraded performance, high CPU usage and high LOB redo generation when importing tables with small LOBs (LOBs under 256 KB)
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above
      – Patched files: (in patch set)
      – Workaround: none (if possible, run the load in "Direct Path" mode: ACCESS_METHOD=DIRECT_PATH)
      – Cause: use of temporary LOBs when loading data in "External Table" mode
      – Trace: (no details)
      – Remarks: an impdp job for the same table data in "Direct Path" mode shows much faster performance
      .
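    The Direct Path alternative mentioned above can be requested explicitly; a sketch with placeholder names:

    ```shell
    # Ask Data Pump to load the table data in Direct Path mode instead of
    # External Table mode (avoids the temporary-LOB overhead of Bug 5555463).
    impdp system DIRECTORY=DP_DIR DUMPFILE=lobs.dmp \
          TABLES=scott.lob_tab ACCESS_METHOD=DIRECT_PATH
    ```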
    9. Bug 5590185 – Consistent Export Data Pump is slow when exporting row data
      – Defect: Bug:5590185 "CONSISTENT EXPORT DATA PUMP JOB (FLASHBACK_TIME) HAS SLOWER PERFORMANCE"
      – Symptoms: an expdp job involving a large number of tables runs slowly when FLASHBACK_TIME or FLASHBACK_SCN is used, or when a logical standby or Streams is in use
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above; for some platforms, Patch:5590185 is available for 10.2.0.2.0
      – Patched files: prvtbpm.plb
      – Workaround: do not run a consistent Export Data Pump job if it is not required
      – Cause: full table scans against the Data Pump Master table
      – Trace: SQL trace shows long execution times for:
      UPDATE "SYSTEM"."SYS_EXPORT_SCHEMA_01" SET scn = :1, flags = :2 WHERE (object_path_seqno = :3) AND (base_process_order = :4) AND (process_order > 0)
      – Remarks: if a normal expdp job takes 1 hour, the same consistent job may now take more than 8 hours.
      .
    10. Bug 5928639 – Export Data Pump can be very slow if CURSOR_SHARING is not EXACT
      – Defect: Bug:5928639 "DATAPUMP EXPORT SLOW WHEN CURSOR_SHARING is not EXACT"
      – Symptoms: an Export Data Pump job may run slowly when many tables are involved and the init.ora or spfile parameter CURSOR_SHARING is not set to EXACT
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above, as part of the fix for Bug:5464834 (see above)
      – Patched files: catmeta.sql prvtmeti.plb
      – Workaround: set the spfile parameter CURSOR_SHARING=EXACT
      – Cause: query optimization problem (Rule Based Optimizer (RBO) instead of Cost Based Optimizer (CBO))
      – Trace: Data Pump Worker trace shows long waits on calls to DBMS_METADATA.FETCH_XML_CLOB, and SQL trace shows references to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', ..."
      – Remarks: related to this defect: Bug 5095025 (not a public bug), Bug:5464834 and Bug 5929373 (not a public bug).
      .
    11. Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables
      – Defect: Bug 5929373 "APPS ST GSI – DATA PUMP TAKES LONG TIME TO EXPORT DATA" (not a public bug)
      – Symptoms: an Export Data Pump job of a small table may run slowly if the database has many user tables
      – Releases: 10.1.0.x, and 10.2.0.3.0 and below
      – Fixed in: 10.2.0.4.0 and above, as part of the fix for Bug:5464834 (see above)
      – Patched files: catmeta.sql prvtmeti.plb
      – Workaround: none
      – Cause: query optimization problem (Rule Based Optimizer (RBO) instead of Cost Based Optimizer (CBO))
      – Trace: Data Pump Worker trace shows long waits on calls to DBMS_METADATA.FETCH_XML_CLOB, and SQL trace shows references to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', ..."
      – Remarks: Data Pump may need more than an hour to process the table, whereas the original export client needs only a few minutes; related to this defect: Bug 5095025 (not a public bug), Bug:5464834 and Bug:5928639
    12. Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp
      – Defect: Bug 7722575 "DATAPUMP VIEW KU$_NTABLE_DATA_VIEW CAUSES POOR PLAN / SLOW EXPDP"
      – Symptoms: the definitions of the Data Pump views KU$_NTABLE_DATA_VIEW and KU$_NTABLE_BYTES_ALLOC_VIEW may lead to suboptimal execution plans and poor query performance during Data Pump export
      – Releases: 10.2.0.x and 11.1.0.x
      – Fixed in: 10.2.0.5.0 and 11.2
      – Patched files: catmeta.sql
      – Workaround: none
      – Cause: incorrect definition of the ku$_ntable_data_view Data Pump view
      – Trace: SQL trace shows a very expensive execution plan for:
      SELECT /*+all_rows*/ SYS_XMLGEN(VALUE(KU$),  XMLFORMAT.createFormat2('TABLE_DATA_T', '7')), 0 ,KU$.BASE_OBJ.NAME , ...
      FROM SYS.KU$_TABLE_DATA_VIEW KU$ WHERE ...
    13. Bug 10178675 – expdp slow with processing functional_and_bitmap/index
      – Defect: Bug:10178675 "expdp slow with processing functional_and_bitmap/index"
      – Symptoms: expdp spends a long time on the step:
      Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
      – Releases: 10.2.0.4, 11.1.0.7, 11.2.0.1, 11.2.0.2
      – Fixed in: 11.2.0.3, 12.1
      – Patched files: prvtmeta.plb, prvtmeti.plb
      – Workaround: none
      – Cause: exporting domain indexes internally uses the view ku$_2ndtab_info_view; with the RBO, the select on this view produces a bad plan and takes much longer.
      – Trace: the expdp Worker (DW) spends a long time executing SQL of the form:
      SELECT INDEX_NAME, INDEX_SCHEMA, TYPE_NAME, TYPE_SCHEMA, FLAGS FROM SYS.KU$_2NDTAB_INFO_VIEW WHERE OBJ_NUM=:B1
    14. Bug 10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
      – Defect: Bug:10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
      – Symptoms: the export of tables with XMLTYPE columns can be very slow before failing with an ORA-4030 error; the problem occurs both when exporting a whole user schema and when exporting individual tables.
      – Releases: 11.2.0.1, 11.2.0.2
      – Fixed in: 11.2.0.3, 12.1
      – Workaround: none
      – Cause: memory leak when running expdp against tables containing xmltype data
    15. Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS
      – Defect: Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS
      – Symptoms: the export spends a long time on object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM.
      – Releases: 11.1.0.7, 11.2.0.1
      – Fixed in: 11.2.0.2, 12.1
      – Workaround: remove the Workspace Manager option
      – Cause: the function "setCallStackAsValid" introduced in 11.1.0.7


    References

    NOTE:1290574.1 – Datapump Performance Issue With Content=Metadata_only
    BUG:5071931 – DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW

    BUG:5292551 – IMPDP VERY SLOW WHEN IMPORTING A TABLE WITH INITIALIZED COLUMN OF TYPE VARRAY

    NOTE:331221.1 – 10g Export/Import Process for Oracle Applications Release 11i
    NOTE:362205.1 – 10g Release 2 Export/Import Process for Oracle Applications Release 11i
    NOTE:365459.1 – Parallel Capabilities of Oracle Data Pump
    BUG:7439689 – IMPDP HANGS ON IDLE EVENT ‘WAIT FOR UNREAD MESSAGE ON BROADCAST CHANNEL’
    NOTE:421441.1 – DataPump Import Via NETWORK_LINK Is Slow With CURSOR_SHARING=FORCE
    NOTE:762160.1 – DataPump Import (IMPDP) Hangs When Using Parameters TRANSPORT_DATAFILES and REMAP_DATAFILE
    NOTE:786165.1 – Understanding the ESTIMATE and ESTIMATE_ONLY Parameters in Export DataPump
    BUG:6807289 – IMPDP WITH REMAP_SCHEMA AND REMAP_TABLESPACE HANGS AT TABLE STATISTICS
    BUG:6989875 – TRANSPORTABLE TABLESPACE IMPORT SPINS USING CPU
    BUG:5464834 – ORA-4030 USING EXPDP
    BUG:5928639 – DATAPUMP EXPORT SLOW WHEN CURSOR_SHARING != EXACT

    BUG:7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW CAUSES POOR PLAN / SLOW EXPDP
    BUG:5481520 – WRONG RESULTS WITH ROWNUM AND BIND PEEKING
    NOTE:429846.1 – Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters
    NOTE:94036.1 – Init.ora Parameter “CURSOR_SHARING” Reference Note
    NOTE:155477.1 – Parameter DIRECT: Conventional Path Export Versus Direct Path Export

    BUG:8363441 – VERY EXPENSIVE SQL STATEMENT DURING DATAPUMP IMPORT WITH MANY SUBPARTITIONS
    BUG:5996665 – EXPDP HANGING MORE THAN 5 HOURS
    NOTE:277905.1 – Export/Import DataPump Parameter TABLES – How to Export and Import Tables Residing in Different Schemas
    BUG:10178675 – EXPDP SLOW WITH PROCESSING FUNCTIONAL_AND_BITMAP/INDEX
    BUG:7585314 – OCSSD.BIN CONSUMING 6 TIMES MORE CPU IF EXCESSIVE DATAPUMP IS RUNNING ON NODE
    BUG:7710931 – DATAPUMP EXPORT IS EXTREMELY SLOW WHEN EXTRACTING SCHEMA
    NOTE:14834638.8 – Bug 14834638 – IMPDP import slow on create partitioned index
    NOTE:1673445.1 – EXPDP Estimate Phase Takes a Long Time With 12.1.0.1
    NOTE:885388.1 – DataPump Export Is Slow After Upgrade To 11g When Workspace Manager Is Installed
    BUG:7413726 – POOR EXPDP PERFORMANCE WHEN DB COMPATIBLE=10.2.0.3 OR 10.2.0.4

    NOTE:223730.1 – Automatic PGA Memory Management
    BUG:10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
    BUG:10416375 – DATA PUMP EXPDP JUST HANG ON KU$_TEMP_SUBPARTDATA_VIEW

    BUG:7710931 – DATAPUMP EXPORT IS EXTREMELY SLOW WHEN EXTRACTING SCHEMA
    BUG:4438573 – DATAPUMP RUNS VERY SLOW OVER NETWORK FOR TABLES WITH CLOBS
    BUG:4513695 – SELECT WITH ROWNUM=1 PERFORMANCE IS TOO LATE USING CURSOR_SHARING=SIMILAR
    BUG:5573425 – NON-CORRELATED SUBQUERY RETURNS WRONG RESULTS, LIKE A CARTESIAN JOIN
    BUG:5590185 – CONSISTENT EXPORT DATA PUMP JOB HAS SLOWER PERFORMANCE
    BUG:8225599 – ER: CTAS WITH LOB ACCESS ACROSS DATABASE LINK IS SLOW
    NOTE:286496.1 – Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump
    BUG:6460304 – EXPDP TAKES MORE TIME

    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) (Doc ID 453895.1)

     

    APPLIES TO:

    Oracle Database – Enterprise Edition – Version 10.1.0.2 to 12.1.0.2 [Release 10.1 to 12.1]
    Information in this document applies to any platform.

    PURPOSE

    This document provides information about possible causes of performance related problems when using export DataPump and Import DataPump to transfer data from an Oracle database.

    SCOPE

    The article is intended for users of the Oracle10g and Oracle11g databases who use the Export Data Pump utility to export data from an Oracle source database and the Import Data Pump utility to import into an Oracle target database. This document is only applicable to the new clients Export Data Pump (expdp) and Import Data Pump (impdp) and does not apply to the original export (exp) and import (imp) clients. For Oracle10g and higher, we recommend the use of Data Pump to transfer data between Oracle databases.

    DETAILS

    INTRODUCTION

    Starting with release 10g (10.1.0), Oracle introduced the new Oracle Data Pump technology, which enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle’s new data movement utilities, Data Pump Export and Data Pump Import.

    Under certain circumstances, a performance problem may be seen when unloading or loading data with the Data Pump clients. This document will provide details about setup and configuration settings that may have an impact on the performance of the Data Pump clients; will provide details how to check what Data Pump is doing at a specific moment; and will discuss some known defects that have an impact on the performance.

    PARAMETERS

    In this section, the Data Pump parameters are listed that may have an impact on the performance of an Export DataPump or import DataPump job. There are also some generic database parameters (init.ora / spfile) listed that are known to have a possible impact of the Data Pump jobs.
    If you experience and need to resolve a Data Pump performance issue, and one or more of following parameters are used for the job, then first check the remarks below and see whether this performance problem reproduces if the parameter is not used, or used differently.

    1. Data Pump parameter: PARALLEL

      For details, see also:
      Note:365459.1 “Parallel Capabilities of Oracle Data Pump”
      .
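    As a sketch of how PARALLEL is typically combined with multiple dump files (all object names are placeholders):

    ```shell
    # Each worker needs its own dump file, so use the %U substitution
    # variable to let Data Pump create as many files as it needs.
    expdp system DIRECTORY=DP_DIR DUMPFILE=exp_%U.dmp LOGFILE=exp.log \
          SCHEMAS=scott PARALLEL=4
    ```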
    2. Data Pump parameter: DUMPFILE

      .
    3. Export Data Pump parameter: ESTIMATE

      For details about the Export Data Pump parameter ESTIMATE, see also:
      Note.786165.1 “Understanding the ESTIMATE and ESTIMATE_ONLY parameter in Export DataPump”
      .
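    A short example of the ESTIMATE parameter (placeholder names); BLOCKS is the default method, while STATISTICS relies on up-to-date optimizer statistics and is affected by some of the defects listed further down:

    ```shell
    # Size the job from block counts without actually exporting anything:
    expdp system DIRECTORY=DP_DIR SCHEMAS=scott ESTIMATE_ONLY=YES ESTIMATE=BLOCKS
    ```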
    4. Export Data Pump parameters: FLASHBACK_SCN and FLASHBACK_TIME

      .
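    A consistent export can be requested as sketched below (11.2-style syntax, placeholder names):

    ```shell
    # Export the schema as of a single point in time; the job keeps undo
    # pinned for its whole duration, so only request consistency across
    # tables when it is actually required (see Bug 5590185).
    expdp system DIRECTORY=DP_DIR DUMPFILE=cons.dmp SCHEMAS=scott \
          FLASHBACK_TIME=SYSTIMESTAMP
    ```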
    5. Import Data Pump parameter: TABLE_EXISTS_ACTION

      .
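    For reference, this parameter accepts SKIP (the default when CONTENT is not DATA_ONLY), APPEND, TRUNCATE and REPLACE; a sketch with placeholder names:

    ```shell
    # Reuse the existing table definition and only reload its rows:
    impdp system DIRECTORY=DP_DIR DUMPFILE=dump.dmp TABLES=scott.emp \
          TABLE_EXISTS_ACTION=TRUNCATE
    ```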
    6. Import Data Pump parameters: REMAP_SCHEMA or REMAP_TABLESPACE

      For details related to this issue, see also the section “Defects Details” below, and:
      Note:429846.1 “Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters”
      .
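    A minimal remap example (placeholder names); every remapped object passes through the DBMS_METADATA transform chain, which is where the slowdown of Bug 5071931 occurs on 10.2.0.1 to 10.2.0.3:

    ```shell
    # Remap both the schema and the tablespace during import; drop these
    # parameters when they are not needed (see Bug 5071931).
    impdp system DIRECTORY=DP_DIR DUMPFILE=dump.dmp \
          REMAP_SCHEMA=scott:scott2 REMAP_TABLESPACE=users:users2
    ```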
    7. Database parameter: CURSOR_SHARING

      For details related to this issue, see also the section “Defects Details” below, and:
      Note:94036.1 “Init.ora Parameter “CURSOR_SHARING” Reference Note”
      Note:421441.1 “Datapump Import With dblink Going Slow With cursor_sharing Set to ‘force'”
      .
    8. Export/Import Data Pump parameter: STATUS

      Displays the status of an in-progress Data Pump job. This status information is written only to your standard output device, not to the log file (if one is in effect).
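    Two common ways to get that status output (job and object names are placeholders):

    ```shell
    # Print a status line to stdout every 60 seconds while the job runs:
    expdp system DIRECTORY=DP_DIR DUMPFILE=exp.dmp SCHEMAS=scott STATUS=60

    # Or attach to a running job and request status interactively:
    expdp system ATTACH=SYS_EXPORT_SCHEMA_01
    # Export> STATUS
    ```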

    CHECK ACTIVITY OF DATA PUMP
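    The standard dictionary views for checking what a running job is doing are sketched below (run from a DBA account); the final filter on V$SESSION_LONGOPS is an approximation, not an official recipe:

    ```shell
    sqlplus / as sysdba <<'EOF'
    -- Active Data Pump jobs and their current state
    SELECT owner_name, job_name, operation, job_mode, state
    FROM   dba_datapump_jobs;

    -- Sessions attached to those jobs
    SELECT s.sid, s.serial#, d.job_name
    FROM   v$session s
    JOIN   dba_datapump_sessions d ON s.saddr = d.saddr;

    -- Progress of long-running row operations
    SELECT sid, opname, sofar, totalwork, units
    FROM   v$session_longops
    WHERE  totalwork > 0 AND sofar <> totalwork;
    EOF
    ```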

    KNOWN DEFECTS OVERVIEW

    Below is an overview of known performance related defects in the various Oracle10g and Oracle11g releases. See the section following the overview for details about these defects and possible workarounds.

    Note 1: besides a Data Pump specific defect, there may also be a defect in a different area such as an optimizer related defect, which also has an impact on the performance during a Data Pump job. Only defects with highest impact have been listed below.

    Note 2: Defects that have an impact on the performance of Export Data Pump, will also have an impact on import Data Pump when import is done with the NETWORK_LINK parameter specified. Those defects are listed only once at the Export Data Pump section.

    Export DataPump (expdp):

    10.1.0.1.0  to  10.1.0.3.0

    – Bug 3447032 – Import Data Pump is slow when importing statistics
    Bug:4513695 – Poor performance for SELECT with ROWNUM=1 with literal replacement
    – Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schema’s
    – Bug:5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
    Bug:5590185 – Consistent Export Data Pump is slow when exporting row data
    Bug:5928639 – Export Data Pump of table can be very slow if CURSOR_SHARING is not EXACT
    – Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables

    10.1.0.4.0 to 10.1.0.5.0 and 10.2.0.1.0 to 10.2.0.3.0
    Bug:4513695 – Poor performance for SELECT with ROWNUM=1 with literal replacement
    – Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schema’s
    Bug:5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
    Bug:5590185 – Consistent Export Data Pump is slow when exporting row data
    Bug:5928639 – Export Data Pump of table can be very slow if CURSOR_SHARING is not EXACT
    – Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables
    Bug 5573425 – Slow Datapump with wrong results due to subquery unnesting and complex view

    10.2.0.4.0
    Bug 7413726 – Poor EXPDP performance when db COMPATIBLE=10.2.0.3 or 10.2.0.4 (duplicate of Bug 7710931)
    Bug 7710931 – DataPump export is extremely slow when extracting schema
    Bug 6460304 – (affects earlier versions as well) Expdp domain index dump using RULE Optimizer and slow
    Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp

    11.1.0.6.0
    Bug 7585314 – OCSSD.BIN consumes much too much CPU while running Datapump
    Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp

    11.1.0.7.0
    Bug 8363441 – Very Expensive Sql Statement During Datapump Import With Many Subpartitions
    Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp
    Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS

    11.2.0.1
    Bug 10178675 – expdp slow with processing functional_and_bitmap/index
    Bug 10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
    Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS

    11.2.0.3
    – Unpublished Bug 12780993 – DATA PUMP PERFORMANCE FOR ESTIMATE=STATISTICS IN EXPORT IS BAD
    Bug 13573203 – SLOW INDEX EXPORT DUE TO PERFORMANCE ISSUE WITH METADATA KU$_INDEX_COL_VIEW
    Bug 13914808 – QUERY AGAINST KU$_INDEX_VIEW KU$ SLOW EVEN AFTER USING METADATA FROM 13844935
    Bug 14192178 – EXPDP of partitioned table can be slow
    Bug 14794472 – EXPDP TOO SLOW HAVING TOO MANY TABLESPACES
    Bug 16138607 – SLOW EXPDP AFTER 11.2.0.3 UPGRADE
    Bug 16298117 – TTS EXPDP TAKING 26 HOURS TO COMPLETE, MOST OF TIME PROCESSING INDEX INFO
    Bug 16856028 – EXPORT DATAPUMP SLOW ON DATAGUARD STANDBY INSTANCE
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 20446613 – EXPORTING NON-STREAMS TABLE FROM STRADMIN SCHEMA OVER NETWORK LINK IS SLOW
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY

    Note:
    1)

    For 11.2.0.3, also MLR Patch 16038089 is available which includes next fixes:
    Bug 12325243 – SLOW PERFORMANCE ON EXPDP FUNCTIONAL AND BITMAP INDEXES
    – Unpublished Bug 12780993 – DATA PUMP PERFORMANCE FOR ESTIMATE=STATISTICS IN EXPORT IS BAD
    Bug 13573203 – SLOW INDEX EXPORT DUE TO PERFORMANCE ISSUE WITH METADATA KU$_INDEX_COL_VIEW
    Bug 13844935 – QUERY AGAINST KU$_INDEX_VIEW SLOW IN 11.2.0.3
    Bug 14192178 – BUG 14006804 FIX DOES NOT RESOLVE THE PERFORMANCE ISSUE

    2)
    There is now a better fix available. MLR Patch 15893700 is available for 11.2.0.3 and MLR Patch 14742362 is available for versions 11.2.0.3.3 or higher. These are better options than Patch 16038089 because they contain the same fixes as 16038089 and some additional ones and they address the performance issues which are present with Patch 16038089.

    3)
    All 8 bugs which are fixed with Patch 14742362 are also fixed in patch set 11.2.0.4.
    Please refer to
    Note 1562142.1 – 11.2.0.4 Patch Set – List of Bug Fixes by Problem Type

    11.2.0.4
    Bug 14794472 – EXPDP TOO SLOW HAVING TOO MANY TABLESPACES
    Bug 16856028 – EXPORT DATAPUMP SLOW ON DATAGUARD STANDBY INSTANCE
    Bug 18469379 – Data pump export estimate phase takes a long time to determine if table is empty
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 19674521 – EXPDP takes a long time when exporting a small table
    Bug 20111004 – “COMMENT ON COLUMN” statement waits 1 second on “Wait for Table Lock”
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY
    Bug 20548904 – EXPDP HANG IN METADA_ONLY ON A PARTITION TABLE WITH AROUND 40000 SUBPARTITIONS
    Bug 20446613 – EXPORTING NON-STREAMS TABLE FROM STRADMIN SCHEMA OVER NETWORK LINK IS SLOW
    Bug 24560906 – HIGH CPU USAGE FOR PROCESS ORA_Q001_DBT11 AND ORA_Q007_DBT11

    Note:
    MLR Patch 20883577 released on top of 11.2.0.4 contains the fixes for the bugs: 18469379, 18793246, 19674521, 20236523 and 20548904
    or next merge patch including the above:
    MLR Patch 21443197 released on top of 11.2.0.4 contains the fixes for the bugs: 18082965 18469379 18793246 20236523 19674521 20532904 20548904

    12.1.0.1
    Bug 18469379 – Data pump export estimate phase takes a long time to determine if table is empty
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    – Unpublished Bug 18720801 – DATAPUMP EXPORT IS SLOW DUE TO EXPORT OF SYNOPSES
    Bug 20111004 – “COMMENT ON COLUMN” statement waits 1 second on “Wait for Table Lock”

    12.1.0.2
    Bug 18793246 – EXPDP slow showing base object lookup during datapump export causes full table scan per object
    Bug 20236523 – DATAPUMP EXPORT SLOW USING CONTENT=METADATA_ONLY
    Bug 20548904 – EXPDP HANG IN METADA_ONLY ON A PARTITION TABLE WITH AROUND 40000 SUBPARTITIONS
    Bug 21128593 – UPDATING THE MASTER TABLE AT THE END OF DP JOB IS SLOW STARTING WITH 12.1.0.2
    Bug 24560906 – HIGH CPU USAGE FOR PROCESS ORA_Q001_DBT11 AND ORA_Q007_DBT11
    Bug 20636003 – Slow Parsing caused by Dynamic Sampling (DS_SVC) queries (side effects possible ORA-12751/ ORA-29771)

    Note:
    MLR Patch 20687195 released on top of 12.1.0.2 contains the fixes for the bugs: 18793246, 20236523 and 20548904
    MLR Patch 21554480 released on top of 12.1.0.2 contains the fixes for the bugs: 18793246, 20236523, 20548904 and 21128593.

    Import DataPump (impdp):

    10.1.0.1.0  to  10.1.0.3.0
    – Bug 3447032 – Import Data Pump is slow when importing statistics
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.1.0.4.0
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.1.0.5.0
    – Bug 3508675 – Import Data Pump is slow when importing TABLE_DATA
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.2.0.1.0  to  10.2.0.3.0
    Bug:5071931 – Import Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE is slow
    Bug:5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
    Bug 6989875 -Transportable Tablespace Import Spins Using CPU
    – Bug 5555463 – Import Data Pump can be slow when importing small LOBs in External Table mode

    10.2.0.4.0
    Bug 7439689 – (affects earlier versions as well) Impdp worker process spinning on MERGE statement

    11.1.0.6.0
    Bug 7585314 – OCSSD.BIN consumes much too much CPU while running Datapump

    11.1.0.7.0
    Bug 8363441 – Very Expensive Sql Statement During Datapump Import With Many Subpartitions

    11.2.0.2
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU

    11.2.0.3
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 14834638 – Import slow on create partitioned index
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU
    Bug 19520061 – IMPDP: EXTREMELY SLOW IMPORT FOR A PARTITIONED TABLE
    Bug 20532904 – DATAPUMP SLOW FOR PARTITIONED TABLE
    Bug 14192178 – EXPDP of partitioned table can be slow
    Note: The fix for expdp Bug 14192178 helps for some IMPDP / import operations and  some DBMS_METADATA queries.

    11.2.0.4
    Bug 13609098 – IMPORTING SMALL SECUREFILE LOBS USING DATA PUMP IS SLOW
    Bug 19520061 – IMPDP: EXTREMELY SLOW IMPORT FOR A PARTITIONED TABLE

    12.1.0.1
    Bug 16396856 – TTS IMPDP SEEMS TO HANG AND CONSUME 100% CPU

    12.1.0.2
    Bug 24423416 – IMPDP FOR SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY TAKES HOURS

    NOTE:
    =====
    When running the post install step of Generic DataPump patch in 12c Multitenant environment, you may be affected by Bug 23321125 – "DPLOAD DOESN'T CREATE THE SHARED OBJECTS ACROSS ALL PDBS".
    For details and solution, please review:
    Note 2175021.1 – Alert – Multitenant Customers: The objects created by the post-install steps of 12.1.0.2 Generic DataPump Patches Are not Shared Across All PDBS.

     

    DEFECT DETAILS

    1. Bug 3447032 – Import Data Pump is slow when importing statistics
      Defect: Bug 3447032 “DBMS_STATS.SET_COLUMN_STATS can be slow (affects IMPORT)” (not a public bug)
      Symptoms: an Import (original client) or Import Data Pump job may show long wait times when importing INDEX_STATISTICS or TABLE_STATISTICS
      Releases: 10.1.0.3.0 and lower
      Fixed in: 10.1.0.4.0 and higher; for some platforms a fix on top of 10.1.0.3.0 is available with Patch:3447032
      Patched files: exuazo.o  kustat.xsl
      Workaround: exclude import of statistics (EXCLUDE=statistics) and manually create the statistics after the import completes
      Cause: issue how column statistics are set on tables with (many) sub-partitions
      Trace: SQL trace shows references to DBMS_STATS package
      Remarks: the fix for this bug has to be applied at both sites (source and target database) and any Export or Export Data Pump dumpfile has to be regenerated to get improved performance upon import.
      .
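The statistics workaround above can be scripted. A minimal sketch, assuming placeholder credentials, directory and dump file names (none of them come from the note itself):

```shell
# Hypothetical impdp call that skips statistics import (EXCLUDE=STATISTICS),
# echoed rather than executed; regather statistics afterwards with
# DBMS_STATS.GATHER_SCHEMA_STATS inside the database.
args="DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp EXCLUDE=STATISTICS"
echo "impdp system/*** $args"
```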
    2. Bug 3508675 – Import Data Pump is slow when importing TABLE_DATA
      Defect:  Bug 3508675 “APPSST10G: BAD PLAN WHEN QUERYING ALL_POLICIES WHEN IMPORTING TABLE_DATA” (not a public bug)
      Symptoms:  an impdp job may show high CPU usage and a slow down during the import phase of: TABLE_DATA
      Releases:  10.1.0.5.0
      Fixed in:  10.2.0.1.0 and higher; generic fix available for 10.1.0.5.0 with Patch:3508675
      Patched files:  prvtbpdi.plb
      Workaround:  none
      Cause:  introduced with fix for Bug 3369744 ALL_SYNONYMS view does not show synonym for a synonym (not a public bug)
      Trace:  SQL trace and AWR trace show high CPU usage and execution time for query:
      SELECT count(*) FROM ALL_POLICIES WHERE enable = :y and ins = :y2 and object_name = :tname and object_owner = :sname
      Remarks:  may show up during impdp job of Oracle Applications database (apps) or any other target database where many tables are imported.
      .
    3. Bug 4513695 – Export Data Pump of large table can be very slow when CURSOR_SHARING=SIMILAR
      Defect:  Bug:4513695 “Poor performance for SELECT with ROWNUM=1 with literal replacement”
      Symptoms:  an export Data Pump job of a large table (100+ Gb) can be much slower (e.g. 24+ hours) than an export with the original exp client
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher; for some platforms a fix on top of 10.2.0.3.0 is available with Patch:5481520
      Patched files:  apa.o kko.o kkofkr.o qerco.o
      Workaround:  if possible, set CURSOR_SHARING=EXACT before starting the export Data Pump job
      Cause:  query optimization issue in Cost Base Optimizer (CBO) when cursor_sharing is set to similar
      Trace:  Data Pump Worker trace shows very high elapsed fetch time for: "SELECT NVL((SELECT /*+ NESTED_TABLE_GET_REFS */ :"SYS_B_0" FROM … WHERE ROWNUM = :"SYS_B_1"), :"SYS_B_2") FROM DUAL"
      Remarks:  a fix for this defect can only be provided as a fix for Bug:5481520 “Wrong results with ROWNUM and bind peeking”.
      .
    4. Bug 5071931 – Import Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE is slow
      Defect:  Bug:5071931 "DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW"
      Symptoms:  an impdp job with REMAP_SCHEMA and REMAP_TABLESPACE slows down during the import phase of DDL such as: TABLE, INDEX, OBJECT_GRANT
      Releases:  10.2.0.1.0 to 10.2.0.3.0
      Fixed in:  10.2.0.4.0 and higher; a generic fix available for 10.2.0.3.0 with Patch:5071931 and for some platforms a fix on top of lower releases is also available with the same number
      Patched files:  prvtmeti.plb
      Workaround:  if not required, do not use the REMAP_% parameters
      Cause:  problem when multiple transforms are chained together
      Trace:  Data Pump Worker trace shows high elapsed times between “DBMS_METADATA.CONVERT called” and “DBMS_METADATA.CONVERT returned”
      Remarks:  this defect does not reproduce in Oracle10g Release 1; for details, see also:
      Note:429846.1 “Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters”.
      .
    5. Bug 5095025 – Export Data Pump runs out of memory (ORA-4030) when exporting many schema’s
      Defect:  Bug 5095025 "ORA-4030 (KXS-HEAP-C,TEMPORARY MEMORY) USING EXPDP" (not a public bug)
      Symptoms: a schema-level expdp job of many schemas (like 50+) where procedural objects are involved (like schema jobs) may fail due to running out of PGA (leaking memory) when exporting procedural objects
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher
      Patched files:  (in patchset)
      Workaround:  if possible, run multiple export Data Pump jobs so every job has fewer schemas to export
      Cause:  query optimization (Rule Based Optimizer (RBO) instead of Cost Base Optimizer (CBO))
      Trace:  ORA-4030 and Data Pump Worker trace may show reference to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('PROCDEPOBJ_T', …"
      Remarks:  also related to this defect are: Bug:5464834 and Bug:5928639 and Bug 5929373 (not a public bug).
      .
    6. Bug 5292551 – Import Data Pump runs out of memory (ORA-04030) and can be very slow on certain tables
      Defect:  Bug:5292551 "IMPDP VERY SLOW WHEN IMPORTING A TABLE WITH INITIALIZED COLUMN OF TYPE VARRAY"
      Symptoms:  an impdp job of specific tables (like tables with Spatial data MDSYS.SDO_GEOMETRY) can be very slow when importing table data and the Data Pump worker process shows a continuous increase of memory when loading those tables
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher; for some platforms a fix on top of 10.2.0.3.0 is available with Patch:5292551
      Patched files:  kpudp.o
      Workaround:  if possible, exclude those tables: EXCLUDE=TABLE:"IN ('TAB_NAME', …)" and import those tables separately in a second table-level import Data Pump job: TABLES=owner.tab_name
      Cause:  memory was not released, resulting in high amount of allocated memory
      Trace:  Heapdumps show many freeable chunks 'freeable assoc with marc' or 'klcalh:ld_hds'
      Remarks:  the impdp job may fail after running for days with errors such as ORA-4030 (out of process memory when trying to allocate xxx bytes) or ORA-31626 (job does not exist) or internal error ORA-00600 [729], [12432], [space leak].
      .
    7. Bug 5464834 – Export Data Pump runs out of memory (ORA-4030) when many tables are involved
      Defect:  Bug:5464834 “ORA-4030 (KXS-HEAP-C,TEMPORARY MEMORY) USING EXPDP”
      Symptoms:  a table level expdp job of many tables (like 250+) may fail due to running out of PGA (leaking memory) when exporting table data
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher; generic fix available for 10.1.0.4.0 and 10.2.0.3.0 with Patch:5464834
      Patched files:  catmeta.sql  prvtmeti.plb
      Workaround:  if possible, run multiple export Data Pump jobs so every job has fewer tables to export
      Cause:  query optimization (Rule Based Optimizer (RBO) instead of Cost Base Optimizer (CBO))
      Trace:  ORA-4030 and Data Pump Worker trace may show reference to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', …"
      Remarks:  also related to this defect are: Bug 5095025 (not a public bug) and Bug:5928639 and Bug 5929373 (not a public bug).
      .
    8. Bug 5555463 – Import Data Pump can be slow when importing small LOBs (under 256K)
      Defect:  Bug 5555463 “PERFORMANCE ISSUES FOR DATAPUMP IMPORT/EXTERNAL_TABLE MODE OF TABLES WITH LOBS” (not a public bug)
      Symptoms:  slow performance, high CPU usage, and LOB redo generation when importing table with small LOBs (LOBs smaller than 256 kb)
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher
      Patched files:  (in patchset)
      Workaround:  none (if possible, run load in Direct Path mode: ACCESS_METHOD=DIRECT_PATH)
      Cause:  using temporary LOBs when loading data in External Table mode
      Trace:  (details not available)
      Remarks:  an impdp job of the same table data in Direct Path mode shows much faster performance.
      .
    9. Bug 5590185 – Consistent Export Data Pump is slow when exporting row data
      Defect:  Bug:5590185 “CONSISTENT EXPORT DATA PUMP JOB (FLASHBACK_TIME) HAS SLOWER PERFORMANCE”
      Symptoms:  an expdp job of large amount of tables is slow when using FLASHBACK_TIME or FLASHBACK_SCN or when logical standby or Streams are used
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher; for some platforms a fix on top of 10.2.0.2.0 is available with Patch:5590185
      Patched files:  prvtbpm.plb
      Workaround:  if not required, do not run a consistent Export Data Pump job
      Cause:  full table scans on DataPump’s Master table
      Trace:  SQL trace shows execution time for statement:
      UPDATE "SYSTEM"."SYS_EXPORT_SCHEMA_01" SET scn = :1, flags = :2 WHERE (object_path_seqno = :3) AND (base_process_order = :4) AND (process_order > 0)
      Remarks:  If a normal expdp job takes 1 hour, then the same job but now consistent, may take more than 8 hours.
      .
    10. Bug 5928639 – Export Data Pump can be very slow if CURSOR_SHARING is not EXACT
      Defect:  Bug:5928639 “DATAPUMP EXPORT SLOW WHEN CURSOR_SHARING is not EXACT”
      Symptoms:  an export Data Pump job can be slow if many tables are involved and init.ora or spfile parameter CURSOR_SHARING is not set to EXACT
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher with fix for Bug:5464834 (see above)
      Patched files:  catmeta.sql prvtmeti.plb
      Workaround:  set spfile parameter CURSOR_SHARING=EXACT
      Cause:  query optimization (Rule Based Optimizer (RBO) instead of Cost Base Optimizer (CBO))
      Trace:  Worker trace file shows high waits for DBMS_METADATA.FETCH_XML_CLOB called, and SQL trace file shows reference to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', …"
      Remarks:  also related to this defect are: Bug 5095025 (not a public bug) and Bug:5464834 and Bug 5929373 (not a public bug).
      .
    11. Bug 5929373 – Export Data Pump of a table can be very slow if database has many user tables
      Defect:  Bug 5929373 “APPS ST GSI – DATA PUMP TAKES LONG TIME TO EXPORT DATA” (not a public bug)
      Symptoms:  an export Data Pump job of a small table can be slow if database has many user tables
      Releases:  10.1.0.x and 10.2.0.3.0 and lower
      Fixed in:  10.2.0.4.0 and higher with fix for Bug:5464834 (see above)
      Patched files:  catmeta.sql prvtmeti.plb
      Workaround:  none
      Cause:  query optimization (Rule Based Optimizer (RBO) instead of Cost Base Optimizer (CBO))
      Trace:  Worker trace file shows high waits for DBMS_METADATA.FETCH_XML_CLOB called, and SQL trace file shows reference to: "SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', …"
      Remarks:  Data Pump may need more than an hour for the table, while the original export client finishes in a couple of minutes; also related to this defect are: Bug 5095025 (not a public bug) and Bug:5464834 and Bug:5928639.
    12. Bug 7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp
      Defect: Bug 7722575 "DATAPUMP VIEW KU$_NTABLE_DATA_VIEW CAUSES POOR PLAN / SLOW EXPDP"
      Symptoms: The definition of datapump views KU$_NTABLE_DATA_VIEW and
      KU$_NTABLE_BYTES_ALLOC_VIEW can lead to a suboptimal execution plans and poor performance of queries against the view from Datapump export
      Releases: 10.2.0.x and 11.1.0.X
      Fixed in: 10.2.0.5.0 and 11.2
      Patched files: catmeta.sql
      Workaround: none
      Cause: incorrect definition of  ku$_ntable_data_view Datapump view
      Trace: The SQL trace file shows an expensive execution plan for :
      SELECT /*+all_rows*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_DATA_T', '7')), 0, KU$.BASE_OBJ.NAME, …
      FROM SYS.KU$_TABLE_DATA_VIEW KU$ WHERE …
    13. Bug 10178675 – expdp slow with processing functional_and_bitmap/index
      Defect: Bug 10178675 “expdp slow with processing functional_and_bitmap/index”
      Symptoms: EXPDP shows a long time on the message:
      Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
      – Releases: 10.2.0.4, 11.1.0.7, 11.2.0.1, 11.2.0.2
      Fixed in: 11.2.0.3, 12.1
      Patched files: prvtmeta.plb, prvtmeti.plb
      Workaround: none
      Cause: While exporting domain index, the view ku$_2ndtab_info_view is internally used. With RBO, select on this view is generating bad plan and consuming more time.
      Trace: Expdp worker (DW) shows a lot of time spent executing a SQL of the form:
      SELECT INDEX_NAME, INDEX_SCHEMA, TYPE_NAME, TYPE_SCHEMA, FLAGS FROM SYS.KU$_2NDTAB_INFO_VIEW WHERE OBJ_NUM=:B1
    14. Bug 10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
      – Defect: Bug 10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
      – Symptoms: Exporting tables that contain XMLTYPE columns runs very slowly before raising an ORA-4030 error. This happens when trying to export an entire user or an individual table.
      – Releases: 11.2.0.1, 11.2.0.2
      – Fixed in: 11.2.0.3, 12.1
      Workaround: none
      – Cause: Memory leak running expdp on tables containing xmltype data
    15. Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS
      – Defect: Bug 8904037 – LT_CTX_PKG.SCHEMA_INFO_EXP IS TAKING MORE TIME WHILE EXPORTING PROCOBJ OBJECTS
      – Symptoms: The export may appear to take a long time while processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
      – Releases: 11.1.0.7, 11.2.0.1
      – Fixed in: 11.2.0.2, 12.1
      Workaround: Remove the Workspace Manager option
      – Cause: new function "setCallStackAsValid" introduced in 11.1.0.7

    Additional Resources

    Community: Database Utilities

    Still have questions? Use the above community to search for similar discussions or start a new discussion on this subject.

    REFERENCES

    BUG:7585314 – OCSSD.BIN CONSUMING 6 TIMES MORE CPU IF EXCESSIVE DATAPUMP IS RUNNING ON NODE
    NOTE:1673445.1 – EXPDP Estimate Phase Takes a Long Time With 12.1.0.1
    NOTE:1290574.1 – Datapump Performance Issue With Content=Metadata_only

    BUG:7710931 – DATAPUMP EXPORT IS EXTREMELY SLOW WHEN EXTRACTING SCHEMA
    BUG:5928639 – DATAPUMP EXPORT SLOW WHEN CURSOR_SHARING != EXACT
    BUG:4513695 – SELECT WITH ROWNUM=1 PERFORMANCE IS TOO LATE USING CURSOR_SHARING=SIMILAR
    NOTE:885388.1 – DataPump Export Is Slow After Upgrade To 11g When Workspace Manager Is Installed
    NOTE:223730.1 – Automatic PGA Memory Management
    BUG:7439689 – IMPDP HANGS ON IDLE EVENT ‘WAIT FOR UNREAD MESSAGE ON BROADCAST CHANNEL’
    NOTE:277905.1 – Export/Import DataPump Parameter TABLES – How to Export and Import Tables Residing in Different Schemas

    NOTE:429846.1 – Slow Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE parameters

    BUG:7413726 – POOR EXPDP PERFORMANCE WHEN DB COMPATIBLE=10.2.0.3 OR 10.2.0.4

    BUG:5996665 – EXPDP HANGING MORE THAN 5 HOURS
    BUG:5071931 – DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW
    BUG:6460304 – EXPDP TAKES MORE TIME
    BUG:20446613 – EXPORTING NON-STREAMS TABLE FROM STRADMIN SCHEMA OVER NETWORK LINK IS SLOW

    NOTE:286496.1 – Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump
    NOTE:362205.1 – 10g Release 2 Export/Import Process for Oracle Applications Release 11i
    NOTE:365459.1 – Parallel Capabilities of Oracle Data Pump
    BUG:5590185 – CONSISTENT EXPORT DATA PUMP JOB HAS SLOWER PERFORMANCE
    BUG:10178675 – EXPDP SLOW WITH PROCESSING FUNCTIONAL_AND_BITMAP/INDEX
    NOTE:155477.1 – Parameter DIRECT: Conventional Path Export Versus Direct Path Export

    NOTE:2175021.1 – Alert – Multitenant Customers: The objects created by the post-install steps of 12.1.0.2 Generic DataPump Patches Are not Shared Across All PDBS
    BUG:8363441 – VERY EXPENSIVE SQL STATEMENT DURING DATAPUMP IMPORT WITH MANY SUBPARTITIONS
    BUG:5573425 – NON-CORRELATED SUBQUERY RETURNS WRONG RESULTS, LIKE A CARTESIAN JOIN

    NOTE:14834638.8 – Bug 14834638 – IMPDP import slow on create partitioned index
    BUG:5464834 – ORA-4030 USING EXPDP

    BUG:5481520 – WRONG RESULTS WITH ROWNUM AND BIND PEEKING
    NOTE:94036.1 – Init.ora Parameter “CURSOR_SHARING” Reference Note
    BUG:6807289 – IMPDP WITH REMAP_SCHEMA AND REMAP_TABLESPACE HANGS AT TABLE STATISTICS
    BUG:6989875 – TRANSPORTABLE TABLESPACE IMPORT SPINS USING CPU
    BUG:7722575 – DATAPUMP VIEW KU$_NTABLE_DATA_VIEW CAUSES POOR PLAN / SLOW EXPDP
    BUG:8225599 – ER: CTAS WITH LOB ACCESS ACROSS DATABASE LINK IS SLOW
    NOTE:421441.1 – DataPump Import Via NETWORK_LINK Is Slow With CURSOR_SHARING=FORCE
    NOTE:762160.1 – DataPump Import (IMPDP) Hangs When Using Parameters TRANSPORT_DATAFILES and REMAP_DATAFILE

    BUG:5292551 – IMPDP VERY SLOW WHEN IMPORTING A TABLE WITH INITIALIZED COLUMN OF TYPE VARRAY
    BUG:10194031 – EXPDP OF OR XML LEAKS MEMORY / RUNS SLOW 11.2.0.1 WORKS 11.1.0.7
    BUG:10416375 – DATA PUMP EXPDP JUST HANG ON KU$_TEMP_SUBPARTDATA_VIEW
    NOTE:331221.1 – 10g Export/Import Process for Oracle Applications Release 11i
    NOTE:786165.1 – Understanding the ESTIMATE and ESTIMATE_ONLY Parameters in Export DataPump
    BUG:4438573 – DATAPUMP RUNS VERY SLOW OVER NETWORK FOR TABLES WITH CLOBS
    BUG:24423416 – IMPDP FOR SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY TAKES HOURS

    Install Oracle 12cR2 on RHEL6


    [TOC]

    Install Oracle 12cR2 on RHEL6

    Operating system

    Red Hat Enterprise Linux Server release 6.5

    IP HOSTNAME
    192.168.56.10 oel12c

    The /etc/hosts file

     vi /etc/hosts
    192.168.56.10   oel12c

    /etc/sysctl.conf

    fs.file-max = 6815744
    kernel.sem = 250 32000 100 128
    kernel.shmmni = 4096
    kernel.shmall = 1073741824
    kernel.shmmax = 4398046511104
    kernel.panic_on_oops = 1
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    net.ipv4.conf.all.rp_filter = 2
    net.ipv4.conf.default.rp_filter = 2
    fs.aio-max-nr = 1048576
    net.ipv4.ip_local_port_range = 9000 65500

    Apply the settings:

    /sbin/sysctl -p
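As a sanity check on the two shared-memory values above: kernel.shmall is counted in pages (4 KiB here) while kernel.shmmax is in bytes, and both describe the same 4 TiB ceiling:

```shell
# shmall (in pages) times the page size should equal shmmax (in bytes)
page_size=4096
shmall_pages=1073741824
echo $((shmall_pages * page_size))   # 4398046511104, i.e. 4 TiB
```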

    /etc/security/limits.conf

    oracle   soft   nofile    1024
    oracle   hard   nofile    65536
    oracle   soft   nproc    16384
    oracle   hard   nproc    16384
    oracle   soft   stack    10240
    oracle   hard   stack    32768
    oracle   hard   memlock    134217728
    oracle   soft   memlock    134217728
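The memlock values above are expressed in KiB; converting shows the limit corresponds to 128 GiB of lockable memory:

```shell
# memlock is given in KiB; divide twice by 1024 to get GiB
memlock_kib=134217728
echo "$((memlock_kib / 1024 / 1024)) GiB"   # prints: 128 GiB
```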

    Install the required packages

    yum install binutils -y
    yum install compat-libcap1 -y
    yum install compat-libstdc++-33 -y
    yum install compat-libstdc++-33.i686 -y
    yum install glibc -y
    yum install glibc.i686 -y
    yum install glibc-devel -y
    yum install glibc-devel.i686 -y
    yum install ksh -y
    yum install libaio -y
    yum install libaio.i686 -y
    yum install libaio-devel -y
    yum install libaio-devel.i686 -y
    yum install libX11 -y
    yum install libX11.i686 -y
    yum install libXau -y
    yum install libXau.i686 -y
    yum install libXi -y
    yum install libXi.i686 -y
    yum install libXtst -y
    yum install libXtst.i686 -y
    yum install libgcc -y
    yum install libgcc.i686 -y
    yum install libstdc++ -y
    yum install libstdc++.i686 -y
    yum install libstdc++-devel -y
    yum install libstdc++-devel.i686 -y
    yum install libxcb -y
    yum install libxcb.i686 -y
    yum install make -y
    yum install nfs-utils -y
    yum install net-tools -y
    yum install smartmontools -y
    yum install sysstat -y
    yum install unixODBC -y
    yum install unixODBC-devel -y
    yum install e2fsprogs -y
    yum install e2fsprogs-libs -y

    Disable the firewall

    chkconfig iptables off
    service iptables stop

    Disable SELinux

    vi /etc/selinux/config
    SELINUX=disabled
    setenforce 0 
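The edit can also be done non-interactively; a sketch that works on a scratch copy of the file (point it at /etc/selinux/config on the real host). Note that the file change only applies after a reboot, while `setenforce 0` switches the running system to permissive immediately:

```shell
# demo on a scratch copy; use /etc/selinux/config on the real host
CFG=selinux.config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
# rewrite the SELINUX= line in place
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"   # SELINUX=disabled
```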

    Create the groups and the oracle user

    groupadd -g 54321 oinstall
    groupadd -g 54322 dba
    groupadd -g 54323 oper
    useradd -u 54321 -g oinstall -G dba,oper oracle
    passwd oracle

    Create the directories

    mkdir -p /u01/app/oracle/product/12.2.0.1/db_1
    chown -R oracle:oinstall /u01
    chmod -R 775 /u01

    Configure the oracle user's environment

    vi ~/.bash_profile
    
    # Set Oracle Environment
    ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
    ORACLE_HOME=$ORACLE_BASE/product/12.2.0.1/db_1; export ORACLE_HOME
    ORACLE_SID=enmo;export ORACLE_SID
    ORACLE_OWNER=oracle;export ORACLE_OWNER
    ORACLE_TERM=vt100;export ORACLE_TERM
    LD_LIBRARY_PATH=$ORACLE_HOME/lib;export LD_LIBRARY_PATH
    PS1=`hostname`'<$PWD>$';export PS1
    EDITOR=/bin/vi; export EDITOR
    JAVA_HOME=/usr/local/java;export JAVA_HOME
    ORA_NLS33=/u01/product/oracle/ocommon/nls/admin/data;export ORA_NLS33
    CLASSPATH=/u01/product/oracle/jdbc/lib/classesl11.zip:/usr/local/java;
    PATH=$ORACLE_HOME/bin:$JAVA_HOME/bin:$ORACLE_HOME/OPatch:$PATH:.;export PATH

    Prepare the software package

    Download Software

    OTN: Oracle Database 12c Release 2 (12.2.0.1) Software (64-bit).

    linuxx64_12201_database.zip

    Unzip the file.

    unzip linuxx64_12201_database.zip

    Install the Oracle 12cR2 database software via the GUI

    ./runInstaller   # start the graphical installer

    Configure security updates

    Install database software only

    Install a single-instance database

    Run the scripts as the root user

    Create the listener with netca

    Create the database with dbca

    dbca database creation complete

     

     

    Limiting scp transfer speed


    FTP clients and similar tools have a UI option for limiting transfer speed; it turns out scp can limit bandwidth too. Check the help:

    $scp --help
    usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
    [-l limit] [-o ssh_option] [-P port] [-S program]
    [[user@]host1:]file1 ... [[user@]host2:]file2

     

    [root@liups.com~]# scp  -l2000 testcp1214.zip  localhost:/tmp/
    root@localhost's password:
    testcp1214.zip                                                                                          0%  768KB 267.4KB/s   05:54 ETA^CKilled by signal 2.
    [www@liups.com~]# scp  -l20000 testcp1214.zip  localhost:/tmp/
    www@localhost's password:
    testcp1214.zip                                                                                         18%   17MB   2.4MB/s   00:31 ETA^CKilled by signal 2.
    [www@liups.com~]# scp  testcp1214.zip  localhost:/tmp/
    www@localhost's password:
    testcp1214.zip                                                                                        100%   93MB  31.1MB/s   00:03
    [www@liups.com~]#
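The number passed to -l is in Kbit/s, not KB/s, which matches the rates in the session above: -l2000 caps the transfer at about 250 KB/s (267.4 KB/s observed) and -l20000 at about 2500 KB/s (2.4 MB/s observed):

```shell
# convert the -l limit (Kbit/s) into KB/s, the unit scp's progress meter shows
for limit in 2000 20000; do
    echo "-l$limit => $((limit / 8)) KB/s"
done
```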

    Summary of common Oracle expdp/impdp usage


    Summary of common Oracle expdp/impdp usage

    1. Export everything under a schema

    SCHEMAS=<schema name>; here the schema is liups:

    expdp system/liups.com schemas=liups DIRECTORY=DATA_PUMP_DIR dumpfile=liups.com.dmp logfile=liups.com.log

    2. Export only the table structure, or only the data

    CONTENT
    Specifies what to unload.
    Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
    DATA_ONLY: export only the data
    METADATA_ONLY: export only the table structure (metadata)
    Export only the table structure, no data:
    expdp system/liups.com schemas=liups DIRECTORY=DATA_PUMP_DIR CONTENT=METADATA_ONLY dumpfile=liups.com.dmp logfile=liups.com.log

    Export only the data, no table structure:
    expdp system/liups.com schemas=liups DIRECTORY=DATA_PUMP_DIR CONTENT=DATA_ONLY dumpfile=liups.com.dmp logfile=liups.com.log
    3. Import from one schema into another
    REMAP_SCHEMA=source_schema:target_schema (the target schema need not exist)

    Export at the schema level, then import with REMAP_SCHEMA=source_schema:target_schema:
    Export: expdp system/liups.com DIRECTORY=DATA_PUMP_DIR schemas=liups.com dumpfile=liups.com.dmp logfile=liups.com.log
    Import: impdp system/liups.com DIRECTORY=DATA_PUMP_DIR dumpfile=liups.com.dmp REMAP_SCHEMA=liups.com:liups.org logfile=liups.com.log

    1) REMAP_SCHEMA: redefines the schema that objects belong to.
    This parameter works like the FROMUSER+TOUSER combination in imp and supports converting multiple schemas. Syntax:
    REMAP_SCHEMA=Source_schema:Target_schema[,Source_schema:Target_schema]
    For example, to move A's objects to user B and C's objects to user D: REMAP_SCHEMA=a:b,c:d
    Warning: you cannot specify REMAP_SCHEMA=a:b,a:c in the same impdp command.

    4. Import from one tablespace into another

    2) REMAP_TABLESPACE: redefines the tablespace that objects are stored in.
    This parameter remaps the tablespace in which imported objects are stored and supports converting several tablespaces at once, separated by commas. Syntax:
    REMAP_TABLESPACE=Source_tablespace:Target_tablespace[,Source_tablespace:Target_tablespace]
    Warning: when using REMAP_TABLESPACE, make sure the importing user has read/write privileges on the target tablespace.

    5. Filtering data
    Data filtering relies on the QUERY and SAMPLE parameters; SAMPLE applies mainly to expdp.
    1. QUERY
    Similar to the QUERY option of the exp command, but enhanced in expdp with finer-grained control. The QUERY parameter specifies a WHERE-like clause to restrict the exported rows. Syntax:
    QUERY=[Schema.][Table_name:]Query_clause
    By default, if no Schema.table_name is given, the Query_clause applies to all exported tables; alternatively you can give each table its own Query_clause. For example, to export rows with id<5 from table A and rows with name='a' from table B:
    QUERY=A:"WHERE id<5",B:"WHERE name='a'"
    If no schema or table name precedes the WHERE condition, it applies to all tables being exported, e.g.: QUERY="WHERE id<5"
    Warning: put the QUERY parameter in a parameter file to avoid trouble with shell escape characters.
    2. SAMPLE
    Specifies the percentage of data to export; valid values range from 0.000001 to 99.999999. Syntax:
    SAMPLE=[[Schema_name.]Table_name:]sample_percent
    With this parameter, expdp automatically limits the amount of data exported. For example, to export 50% of the rows of table A:
    SAMPLE=A:50
    Warning: sample_percent is only a guideline; expdp computes an approximation based on the data volume.
    Filtering objects
    Object filtering relies on the INCLUDE and EXCLUDE parameters, which work in opposite directions. Either can name any object type you know of (Package, Procedure, Table, and so on) or object names (wildcards supported).
    1. EXCLUDE (negative filter)
    Specifies object types or names NOT to include. All objects of the specified type are skipped during import or export, and so are their dependent objects: excluding tables, for example, also excludes the indexes, check constraints and so on associated with those tables. Warning: put the EXCLUDE parameter in a parameter file to avoid trouble with escape characters.
    2. INCLUDE (positive filter)
    The opposite of EXCLUDE: specifies the object types or names to include.
    Warning: because the two parameters do exactly opposite things, they cannot be used together in one export or import command; otherwise Oracle cannot tell what you want.
    EXCLUDE/INCLUDE syntax:
    EXCLUDE=[object_type]:[name_clause],[object_type]:[name_clause]  --> exclude specific objects
    INCLUDE=[object_type]:[name_clause],[object_type]:[name_clause]  --> include specific objects
    The object_type clause specifies the object type, such as TABLE, SEQUENCE, VIEW, PROCEDURE, PACKAGE and so on.
    The name_clause is a SQL expression that filters specific object names, using SQL operators and object names (wildcards allowed) to select particular objects of the given type.
    If only object_type is given without a name_clause, all objects of that type are filtered. Multiple [object_type]:[name_clause] pairs are separated by commas.
    Examples:
    expdp <other_parameters> SCHEMAS=scott EXCLUDE=SEQUENCE,TABLE:"IN ('EMP','DEPT')"
    impdp <other_parameters> SCHEMAS=scott INCLUDE=PACKAGE,FUNCTION,PROCEDURE,TABLE:"='EMP'"
    Common filter expressions:
    EXCLUDE=SEQUENCE,VIEW                           -- exclude all sequences and views
    EXCLUDE=TABLE:"IN ('EMP','DEPT')"               -- exclude the tables EMP and DEPT
    EXCLUDE=SEQUENCE,VIEW,TABLE:"IN ('EMP','DEPT')" -- exclude all sequences and views plus the tables EMP and DEPT
    EXCLUDE=INDEX:"= 'INDX_NAME'"                   -- exclude the specific index INDX_NAME
    INCLUDE=PROCEDURE:"LIKE 'PROC_U%'"              -- include all procedures whose names start with PROC_U
    INCLUDE=TABLE:"> 'E'"                           -- include all tables whose names sort after 'E'
    Other usable operators: NOT IN, NOT LIKE, <, != and so on.
    The filter operators can be wrapped directly in a parameter file, as in this example:
    Parameter file: exp_scott.par
    DIRECTORY = dump_scott
    DUMPFILE = exp_scott_%U.dmp
    LOGFILE = exp_scott.log
    SCHEMAS = scott
    PARALLEL = 2
    EXCLUDE = TABLE:"IN ('EMP', 'DEPT')"
    expdp system/manager parfile=exp_scott.par
    Handling escape characters on the command line
    Windows:
    D:\> expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott
    INCLUDE=TABLE:\"IN ('EMP', 'DEPT')\"
    On Windows the double quotes must be escaped with a backslash.
    Unix:
    Without a parameter file, every special symbol must be escaped, including parentheses, double quotes and single quotes:
    % expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott
    INCLUDE=TABLE:\"IN \(\'EMP\', \'DEPT\'\)\"
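A parameter file sidesteps all of this escaping, which is why the warnings above recommend one; a minimal sketch reusing the exp_scott.par example:

```shell
# inside a parfile no shell escaping is needed around the double quotes
cat > exp_scott.par <<'EOF'
DIRECTORY = dump_scott
DUMPFILE  = exp_scott_%U.dmp
LOGFILE   = exp_scott.log
SCHEMAS   = scott
EXCLUDE   = TABLE:"IN ('EMP', 'DEPT')"
EOF
# then run: expdp system/manager parfile=exp_scott.par
```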
    Common EXCLUDE/INCLUDE errors
    Any character that needs escaping but is unescaped or mis-escaped raises an ORA error. Some typical ones:
    ORA-39001: invalid argument value
    ORA-39071: Value for INCLUDE is badly formed.
    ORA-00936: missing expression
    ORA-39001: invalid argument value
    ORA-39071: Value for EXCLUDE is badly formed.
    ORA-00904: "DEPT": invalid identifier
    ORA-39001: invalid argument value
    ORA-39041: Filter "INCLUDE" either identifies all object types or no object types.
    ORA-39001: invalid argument value
    ORA-39041: Filter "EXCLUDE" either identifies all object types or no object types.
    ORA-39001: invalid argument value
    ORA-39038: Object path "USER" is not supported for TABLE jobs.
    Advanced filtering
    When exporting or importing we often want only the object definitions, or only the data. Fortunately Data Pump provides this through the CONTENT parameter, which has three values:
    1) ALL: export/import both object definitions and data (the default).
    2) DATA_ONLY: export/import data only.
    3) METADATA_ONLY: export/import object definitions only.
    Warning: note that if advanced filtering was used at export time, e.g. data only, then at import time the object definitions must already exist, otherwise the data has nowhere to go. If the definitions do exist, it is best to specify DATA_ONLY on import as well; otherwise an ORA-39151 error is raised because the objects already exist.
    Filtering data that already exists
    If an imported table already exists in the target database and no referential-integrity (RI) constraints are there to validate the rows, data can end up imported twice. Data Pump provides a parameter, TABLE_EXISTS_ACTION, that reduces the chance of duplicated data by controlling what happens when a table to be imported already exists. Its values are:
    1) SKIP: skip the table and continue with the next object (the default). Note that if CONTENT=DATA_ONLY is also specified, SKIP is ignored and APPEND is used instead.
    2) APPEND: add the rows to the existing table.
    3) TRUNCATE: truncate the table first, then load the rows. Use this with care, unless you are certain the existing rows are truly disposable; otherwise data may be lost.
    4) REPLACE: drop and re-create the table, then load the rows. Note that if CONTENT=DATA_ONLY is specified, REPLACE is ignored.

    [Repost] SQL Server series: deleting large volumes of data

    Source: http://www.cnblogs.com/panchunting/archive/2013/04/27/SQL_Tech_001.htm
    1. Preface: easier said than done

    To upgrade the database to SQL Server 2008 R2, I took an existing PC as a test box and restored the databases from production (three databases together an outrageous 100 GB+). The machine had a pitiful 4 GB of RAM and had to act as both DB server and web server, so its fate was predictably grim: the moment SQL Server started, memory usage shot up to 99%. The only option was more RAM: two 8 GB sticks went in for 16 GB total, with the same result, memory instantly devoured (while CPU utilization hovered around 0%). Being a PC, the box has only two memory slots, and the largest single stick on the market is 16 GB (1000+ RMB); even with one of those, memory would probably still fall short. No way around it: delete data! Deleting data sounds easy; isn't it just DELETE? If I actually did that I would get to see what Shanghai looks like at 4 a.m., and it would probably blow up the database anyway, since the disk lacks the space for the enormous log files that would be generated.

    2. Surveying the battlefield: searching far and wide

    To explain the difficulties and problems I ran into, some tests and commentary are needed; they are also an exploration of the solution. At bottom the question is how to manipulate data better and faster, that is, the optimal combination of DELETE, UPDATE, INSERT, TRUNCATE, DROP and similar operations; the goal is to find the best and fastest method. For the tests I prepared a test table, Employee.

    1. Insert shoot-out

    1.1. Row-by-row loop insert: 38026 ms

    1.2. Loop insert inside a transaction: 6640 ms

    1.3. Set-based batch insert: 220 ms

    1.4. CTE insert: also 220 ms

    Summary:

    • Ranked by elapsed time: the CTE and batch inserts are about equally fast and the fastest, the transactional loop comes next, and the plain loop insert is slowest;
    • The plain loop is slowest because every INSERT is logged individually; the transactional loop greatly reduces the number of log writes, while the batch insert is logged only once.
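The measurements above came from scripts not reproduced in this repost; a hypothetical T-SQL sketch of the slowest and fastest variants (the Employee table definition and the 100 000-row count are assumptions):

```sql
-- row-by-row loop: every INSERT writes its own log record (slowest variant)
DECLARE @i INT = 1;
WHILE @i <= 100000
BEGIN
    INSERT INTO Employee (Id, Name) VALUES (@i, 'emp' + CAST(@i AS VARCHAR(10)));
    SET @i += 1;
END

-- set-based batch insert: a single logged statement (fastest variant)
INSERT INTO Employee (Id, Name)
SELECT n, 'emp' + CAST(n AS VARCHAR(10))
FROM (SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects a CROSS JOIN sys.all_objects b) AS t;
```

Wrapping the loop in BEGIN TRAN ... COMMIT gives the middle result, because the log is flushed once per transaction instead of once per statement.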

     

    2. Delete shoot-out

    2.1. Row-by-row loop delete: 1240 ms

    2.2. Batch delete: 106 ms

    2.3. TRUNCATE: 0 ms

    Summary:

    • TRUNCATE is extremely fast, clearing 100K rows with no effort at all; batch delete comes next, and plain DELETE is far too slow;
    • TRUNCATE is fast because it is a DDL statement that generates very little log, while an ordinary DELETE not only writes log records but also locks the rows.

     

    3. Sharpening the knives: the approach

    From point 2 above we know that the fastest ways to insert and delete are batch insert and TRUNCATE, so to delete the bulk of the data we combine the two. The core idea: first copy the rows that must be kept into a new table, then TRUNCATE the data in the original table, and finally batch-insert the kept rows back; the implementation can of course be varied.

    1. Save the rows to keep into a new table -> TRUNCATE the original table -> restore the saved rows into the original table

    The script looks something like this:

    SELECT * INTO #keep FROM Original WHERE CreateDate > '2011-12-31'
    TRUNCATE TABLE Original
    INSERT Original SELECT * FROM #keep

    The first statement stores all rows to be kept in table #keep (#keep need not be created by hand; SELECT INTO creates it and copies the structure of the original table Original). PS: if you only want the structure without copying any data, the corresponding script is:

    SELECT * INTO #keep FROM Original WHERE 1 = 2

    The second statement empties the whole table while generating a negligible amount of log; the third statement restores the kept rows.

    Notes:

    • You can skip SELECT INTO and create #keep yourself with a script (or by copying an existing table), but then there is no way to obtain a generation script that matches the original table exactly (columns, properties, indexes, constraints and so on), and when many tables are involved you will surely go mad;
    • Since the first idea is flawed, how about creating an identical second database instead? You could reuse the existing scripts and the resulting database would be essentially identical, but I advise against it: first, it is cross-database; second, you would have to set aside enough disk space for it.

     

    2. Create the new table structure -> batch-insert the rows to keep -> DROP the original table -> rename the new table to the original name

    CREATE TABLE #keep (xxx)  -- built with the method mentioned above (the creation script of the existing table), though an exact match cannot be guaranteed

    INSERT #keep SELECT * FROM Original WHERE <clause>

    DROP TABLE Original

    EXEC sp_rename '#keep', 'Original'

    This way is slightly faster than the first method because it skips the final restore step, but it is a bit more troublesome: you must create a table structure exactly like the original one, including columns, properties, constraints, indexes and so on.

    4. Shrinking the database: reclaiming the space

    After the data is deleted, you will find that the space occupied by the database has not changed; this is where the powerful shrink feature comes in. The script is below; the run time is unpredictable and depends on the database size, from an instant to tens of minutes.

    DBCC SHRINKDATABASE(DB_NAME)
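DBCC SHRINKDATABASE also accepts a target free-space percentage, and shrinking a single file is often the gentler option; both names below are placeholders:

```sql
-- leave 10 percent free space in the files instead of shrinking to the minimum
DBCC SHRINKDATABASE (MyDatabase, 10);

-- or shrink only the log file, down to a 1024 MB target size
DBCC SHRINKFILE (MyDatabase_log, 1024);
```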