Problem analysis:
Under the hood, a partitioned table in Hive is just a set of data files sitting in per-partition directories on HDFS, and exporting data from Hive essentially means exporting those HDFS files.
Because the partition column of a (statically) partitioned table is not stored in the data files themselves, Sqoop cannot export the partition column directly.
Approach: create a temporary table in Hive, copy the partitioned table into it so that the partition column becomes a regular column, and then export that temporary table with Sqoop. The HDFS layout sketch below shows why the extra step is needed.
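To make the problem concrete, the table's directory layout on HDFS can be inspected directly. This is a minimal sketch; the warehouse path is assumed from the --export-dir used in step 8 and may differ on other setups.

# list the per-partition directories of the table (path assumed)
hdfs dfs -ls /user/hive/warehouse/hive_db.db/dept_partition
# expected entries look like .../dept_partition/month=2019-10-19
#                            .../dept_partition/month=2019-10-20

# the files inside a partition hold only deptno, dname and loc;
# the month value lives in the directory name and the metastore, not in the file
hdfs dfs -cat /user/hive/warehouse/hive_db.db/dept_partition/month=2019-10-19/*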
The steps are as follows:
Contents
1. Create the target table (partitioned)
1.1 View the table structure
2. Load data
3. Query dept_partition
4. Create the temporary table tmp_dept_partition
5. Query the temporary table
6. View the table structure (the partitioned table has now become a non-partitioned table)
7. Create the table dept_partition in MySQL
8. Export to MySQL with Sqoop
9. Query MySQL to verify the export
1. Create the target table (partitioned)
hive> CREATE TABLE `dept_partition`(
`deptno` int,
`dname` string,
`loc` string)
PARTITIONED BY (`month` string) row format delimited fields terminated by '\t';
1.1 View the table structure
hive> show create table dept_partition;
+----------------------------------------------------+--+
|                   createtab_stmt                    |
+----------------------------------------------------+--+
| CREATE TABLE `dept_partition`(                      |
|   `deptno` int,                                     |
|   `dname` string,                                   |
|   `loc` string)                                     |
| PARTITIONED BY (                                    |
|   `month` string)                                   |
2. Load data
A statically partitioned table needs an explicit partition spec in LOAD DATA:

hive> load data inpath '/user/hive/hive_db/data/dept.txt'
      into table dept_partition partition(month='2019-10-19');
Contents of dept.txt (tab-separated):
10	ACCOUNTING	1700
20	RESEARCH	1800
30	SALES	1900
40	OPERATIONS	1700
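The queries in the next step show a second partition, month='2019-10-20', whose load is not shown above. Presumably the same file was re-uploaded and loaded once more, roughly as in this sketch (LOAD DATA ... INPATH moves the source file, so it has to be put back on HDFS first):

# re-upload dept.txt, since the previous LOAD DATA moved it into the warehouse
hdfs dfs -put dept.txt /user/hive/hive_db/data/

hive> load data inpath '/user/hive/hive_db/data/dept.txt'
      into table dept_partition partition(month='2019-10-20');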
3. Query dept_partition
hive> select * from dept_partition;
+------------------------+-----------------------+---------------------+-----------------------+--+
| dept_partition.deptno  | dept_partition.dname  | dept_partition.loc  | dept_partition.month  |
+------------------------+-----------------------+---------------------+-----------------------+--+
| 10                     | ACCOUNTING            | 1700                | 2019-10-19            |
| 20                     | RESEARCH              | 1800                | 2019-10-19            |
| 30                     | SALES                 | 1900                | 2019-10-19            |
| 40                     | OPERATIONS            | 1700                | 2019-10-19            |
| 10                     | ACCOUNTING            | 1700                | 2019-10-20            |
| 20                     | RESEARCH              | 1800                | 2019-10-20            |
| 30                     | SALES                 | 1900                | 2019-10-20            |
| 40                     | OPERATIONS            | 1700                | 2019-10-20            |
+------------------------+-----------------------+---------------------+-----------------------+--+
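To double-check which static partitions now exist, SHOW PARTITIONS lists them; with the data above it returns:

hive> show partitions dept_partition;
month=2019-10-19
month=2019-10-20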
4. Create the temporary table tmp_dept_partition
hive> create table tmp_dept_partition as select * from dept_partition;
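A table created with CTAS uses Hive's default field delimiter \001, which is why the Sqoop command in step 8 passes --input-fields-terminated-by "\001". As an optional variation (not what the steps above do), the temporary table could be created with an explicit delimiter instead:

hive> create table tmp_dept_partition
      row format delimited fields terminated by '\t'
      as select * from dept_partition;

With that variant, the Sqoop export would use --input-fields-terminated-by "\t".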
5. Query the temporary table
hive> select * from tmp_dept_partition;
+----------------------------+---------------------------+-------------------------+---------------------------+--+
| tmp_dept_partition.deptno  | tmp_dept_partition.dname  | tmp_dept_partition.loc  | tmp_dept_partition.month  |
+----------------------------+---------------------------+-------------------------+---------------------------+--+
| 10                         | ACCOUNTING                | 1700                    | 2019-10-19                |
| 20                         | RESEARCH                  | 1800                    | 2019-10-19                |
| 30                         | SALES                     | 1900                    | 2019-10-19                |
| 40                         | OPERATIONS                | 1700                    | 2019-10-19                |
| 10                         | ACCOUNTING                | 1700                    | 2019-10-20                |
| 20                         | RESEARCH                  | 1800                    | 2019-10-20                |
| 30                         | SALES                     | 1900                    | 2019-10-20                |
| 40                         | OPERATIONS                | 1700                    | 2019-10-20                |
+----------------------------+---------------------------+-------------------------+---------------------------+--+
6. View the table structure (the partitioned table has now become a non-partitioned table)
hive> show create table tmp_dept_partition;
+----------------------------------------------------+--+
|                   createtab_stmt                    |
+----------------------------------------------------+--+
| CREATE TABLE `tmp_dept_partition`(                  |
|   `deptno` int,                                     |
|   `dname` string,                                   |
|   `loc` string,                                     |
|   `month` string)                                   |
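At this point the month values are physically written into the temporary table's data files, separated by the default \001 delimiter, which is exactly what the Sqoop export in step 8 relies on. A quick spot check on HDFS (path taken from the --export-dir in step 8):

hdfs dfs -cat /user/hive/warehouse/hive_db.db/tmp_dept_partition/* | head
# each line now ends with the month value, e.g. 10^AACCOUNTING^A1700^A2019-10-19
# (^A is how the \001 delimiter is usually displayed)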
7. Create the table dept_partition in MySQL
mysql> drop table if exists dept_partition;
create table dept_partition(
`deptno` int,
`dname` varchar(20),
`loc` varchar(20),
`month` varchar(50));
8. Export to MySQL with Sqoop
bin/sqoop export \
--connect jdbc:mysql://hadoop01:3306/partitionTb \
--username root \
--password 123456 \
--table dept_partition \
--num-mappers 1 \
--export-dir /user/hive/warehouse/hive_db.db/tmp_dept_partition \
--input-fields-terminated-by "\001"
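The command above is the minimal export. If the Hive data may contain NULLs (written as \N in the files) or the job may be re-run, standard Sqoop export options can make it more robust; this is an optional hardening sketch rather than part of the original steps, and the staging table name is hypothetical:

bin/sqoop export \
--connect jdbc:mysql://hadoop01:3306/partitionTb \
--username root \
--password 123456 \
--table dept_partition \
--num-mappers 1 \
--export-dir /user/hive/warehouse/hive_db.db/tmp_dept_partition \
--input-fields-terminated-by "\001" \
--input-null-string '\\N' \
--input-null-non-string '\\N' \
--staging-table dept_partition_staging \
--clear-staging-table

The staging table must already exist in MySQL with the same schema as dept_partition; Sqoop writes into it first and moves the rows to the target table in one transaction, so a failed export does not leave partial data behind.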
9. Query MySQL to verify the export
mysql> select * from dept_partition;
+--------+------------+------+------------+
| deptno | dname      | loc  | month      |
+--------+------------+------+------------+
|     10 | ACCOUNTING | 1700 | 2019-10-19 |
|     20 | RESEARCH   | 1800 | 2019-10-19 |
|     30 | SALES      | 1900 | 2019-10-19 |
|     40 | OPERATIONS | 1700 | 2019-10-19 |
|     10 | ACCOUNTING | 1700 | 2019-10-20 |
|     20 | RESEARCH   | 1800 | 2019-10-20 |
|     30 | SALES      | 1900 | 2019-10-20 |
|     40 | OPERATIONS | 1700 | 2019-10-20 |
+--------+------------+------+------------+
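A per-partition row count is another quick sanity check that both Hive partitions arrived:

mysql> select `month`, count(*) as cnt from dept_partition group by `month`;
+------------+-----+
| month      | cnt |
+------------+-----+
| 2019-10-19 |   4 |
| 2019-10-20 |   4 |
+------------+-----+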