Hudi + Spark 3 Introduction, Lesson 1
Apache Hudi is a next-generation streaming data lake platform. It brings core data warehouse and database functionality to the data lake: tables, transactions, efficient upserts/deletes, advanced indexes, streaming ingestion, data clustering/compaction optimizations, and concurrency control, all while storing data in open-source file formats.
Welcome to visit my blog: https://kelvin-qzy.top
Compiling Hudi 0.10.1 from source
- Maven 3.5.4 and Spark 3.1.1, with Maven configured to use the Aliyun mirror
- Modify the Spark version in pom.xml from the original 3.1.2 to 3.1.1. The difference between minor versions is small; adjust it to match your own environment.
- Compile command: `mvn clean package -DskipTests -Dscala-2.12 -Dspark3`
- The compiled bundle is produced under the packaging/hudi-spark-bundle directory
- Spark bundle jar name: hudi-spark3.1.1-bundle_2.12-0.10.1.jar, about 38 MB in size
The previous version, Hudi 0.9.0, has obvious problems when used with Spark 3.1; it does work with Spark 3.0.3. This is also noted in Hudi's release notes.
- hudi-spark3.1.2-bundle_2.12-0.10.1.jar
- hudi-spark3.0.3-bundle_2.12-0.10.1.jar
Both of these packages can be obtained precompiled from Maven Central (the page is a bit hard to find; Hudi's repository listing could use better organization). The coordinates are:
```xml
<!-- Spark 3.1.2: https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.1.2-bundle -->
<dependency>
  <groupId>org.apache.hudi</groupId>
  <artifactId>hudi-spark3.1.2-bundle_2.12</artifactId>
  <version>0.10.1</version>
</dependency>

<!-- Spark 3.0.3: https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.0.3-bundle -->
<dependency>
  <groupId>org.apache.hudi</groupId>
  <artifactId>hudi-spark3.0.3-bundle_2.12</artifactId>
  <version>0.10.1</version>
</dependency>
```
Using these precompiled packages lets you skip compiling Hudi yourself.
Support matrix published on the official website:
Spark 3 Support Matrix

| Hudi | Supported Spark 3 versions |
|---|---|
| 0.10.0 - 0.10.1 | 3.1.x (default build), 3.0.x |
| 0.7.0 - 0.9.0 | 3.0.x |
| 0.6.0 and prior | Not supported |
As the matrix shows, the default build of Hudi 0.10 targets Spark 3.1, and it can also be built against Spark 3.0.
Testing procedure
Environment information
- Spark 3.1.1 installed
- Hive 3.1 installed
- Operating system: CentOS 7.4
- Java 8
Spark 3 quick test
Copy the Hudi jar into the jars directory of the Spark installation, for example:

```shell
cp hudi-spark3.1.1-bundle_2.12-0.10.1.jar /usr/hdp/3.0.1.0-187/spark3/jars
```
Start the Spark SQL client and check that it works. Because we have already placed the Hudi Spark bundle on Spark's jar load path, it does not need to be specified explicitly. If a permission error is reported, switch to a user with Hive access; the steps below are performed as the hive user.
```shell
./bin/spark-sql --master yarn \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
```
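Alternatively (a sketch, not part of the original setup): instead of copying the jar into Spark's jars directory, the bundle can be pulled from Maven Central at launch with Spark's standard `--packages` flag, using the precompiled coordinates listed earlier. This assumes network access to Maven Central from the cluster:

```shell
# Sketch: fetch the precompiled Hudi bundle at launch instead of copying
# the jar manually (coordinates match the Maven Central artifact above).
./bin/spark-sql --master yarn \
  --packages org.apache.hudi:hudi-spark3.1.2-bundle_2.12:0.10.1 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
```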
The `spark-sql>` prompt is displayed, waiting for input. (At this point I had not yet copied the Avro package, and no errors were reported.)
Create a non-partitioned Hudi table. The default table type is COW (copy-on-write), the default primary key is uuid, and no precombine field is set:

```sql
create table hudi_table0 (
  uuid int,
  name string,
  price double
) using hudi;
```
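For comparison, here is a sketch of a partitioned MOR (merge-on-read) table with an explicit primary key and precombine field, following Hudi 0.10's Spark SQL DDL; the table name and columns are illustrative, not from the original test:

```sql
-- Illustrative only: an MOR table with explicit primaryKey/preCombineField,
-- partitioned by dt (tblproperties syntax per Hudi 0.10's SQL DDL).
create table hudi_mor_tbl (
  id int,
  name string,
  price double,
  ts bigint,
  dt string
) using hudi
tblproperties (
  type = 'mor',
  primaryKey = 'id',
  preCombineField = 'ts'
)
partitioned by (dt);
```

The precombine field (`ts` here) is what Hudi uses to pick the latest version of a record when two writes share the same primary key.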
View the tables with `show tables;`. The new table appears as a row: `default hudi_table0 false`.
Insert two rows into the table:

```sql
insert into hudi_table0 select 1, 'my name is kiki', 20;
insert into hudi_table0 select 2, 'qiqi', 16;
```
Query the data just inserted:

```sql
select * from hudi_table0;
```

The output ends with: `Time taken: 0.361 seconds, Fetched 2 row(s)`
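Beyond the plain query, Hudi attaches commit metadata to every record in hidden `_hoodie_*` columns; selecting one explicitly is a quick way to confirm the write path. A sketch (column name per Hudi's documentation):

```sql
-- _hoodie_commit_time is a Hudi metadata column recording which commit
-- produced each row; it is hidden from select * in Spark SQL queries
-- unless named explicitly.
select _hoodie_commit_time, uuid, name, price from hudi_table0;
```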
A quick first trial run of the new setup, and everything works.