
Sunday, November 3, 2019

sqoop job - reading password from local file system

[cloudera@quickstart Downloads]$ pwd
/home/cloudera/Downloads
[cloudera@quickstart Downloads]$ echo -n "cloudera" >> password-file
[cloudera@quickstart Downloads]$ sqoop job \
> --create job_banking_member_details_pwd \
> -- import \
> --connect jdbc:mysql://quickstart.cloudera:3306/banking \
> --username root \
> --password-file file:///home/cloudera/Downloads/password-file \
> --table member_details \
> --warehouse-dir /data/banking3 \
> --incremental append \
> --check-column card_id \
> --last-value 0
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/03 03:57:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
[cloudera@quickstart Downloads]$ sqoop job --exec job_banking_member_details_pwd
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/03 03:57:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
19/11/03 03:57:52 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
19/11/03 03:57:52 INFO tool.CodeGenTool: Beginning code generation
19/11/03 03:57:53 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `member_details` AS t LIMIT 1
19/11/03 03:57:53 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `member_details` AS t LIMIT 1
19/11/03 03:57:53 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/bfbad505a52015aa1d6dc36005179a72/member_details.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/11/03 03:57:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/bfbad505a52015aa1d6dc36005179a72/member_details.jar
19/11/03 03:57:56 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`card_id`) FROM `member_details`
19/11/03 03:57:56 INFO tool.ImportTool: Incremental import based on column `card_id`
19/11/03 03:57:56 INFO tool.ImportTool: Lower bound value: 0
19/11/03 03:57:56 INFO tool.ImportTool: Upper bound value: 6599900931314251
19/11/03 03:57:56 WARN manager.MySQLManager: It looks like you are importing from mysql.
19/11/03 03:57:56 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
19/11/03 03:57:56 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
19/11/03 03:57:56 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
19/11/03 03:57:56 INFO mapreduce.ImportJobBase: Beginning import of member_details
19/11/03 03:57:56 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
19/11/03 03:57:56 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
19/11/03 03:57:56 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
19/11/03 03:57:56 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/11/03 03:57:59 INFO db.DBInputFormat: Using read commited transaction isolation
19/11/03 03:57:59 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`card_id`), MAX(`card_id`) FROM `member_details` WHERE ( `card_id` > 0 AND `card_id` <= 6599900931314251 )
19/11/03 03:57:59 INFO db.IntegerSplitter: Split size: 1564968116401259; Num splits: 4 from: 340028465709212 to: 6599900931314251
19/11/03 03:57:59 INFO mapreduce.JobSubmitter: number of splits:4
19/11/03 03:58:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1572771724749_0012
19/11/03 03:58:00 INFO impl.YarnClientImpl: Submitted application application_1572771724749_0012
19/11/03 03:58:00 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1572771724749_0012/
19/11/03 03:58:00 INFO mapreduce.Job: Running job: job_1572771724749_0012
19/11/03 03:58:10 INFO mapreduce.Job: Job job_1572771724749_0012 running in uber mode : false
19/11/03 03:58:10 INFO mapreduce.Job:  map 0% reduce 0%
19/11/03 03:58:22 INFO mapreduce.Job:  map 25% reduce 0%
19/11/03 03:58:25 INFO mapreduce.Job:  map 50% reduce 0%
19/11/03 03:58:26 INFO mapreduce.Job:  map 75% reduce 0%
19/11/03 03:58:27 INFO mapreduce.Job:  map 100% reduce 0%
19/11/03 03:58:28 INFO mapreduce.Job: Job job_1572771724749_0012 completed successfully
19/11/03 03:58:28 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=690664
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=552
HDFS: Number of bytes written=84970
HDFS: Number of read operations=16
HDFS: Number of large read operations=0
HDFS: Number of write operations=8
Job Counters
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=46545
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=46545
Total vcore-milliseconds taken by all map tasks=46545
Total megabyte-milliseconds taken by all map tasks=47662080
Map-Reduce Framework
Map input records=999
Map output records=999
Input split bytes=552
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=406
CPU time spent (ms)=6530
Physical memory (bytes) snapshot=796741632
Virtual memory (bytes) snapshot=6281330688
Total committed heap usage (bytes)=811073536
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=84970
19/11/03 03:58:28 INFO mapreduce.ImportJobBase: Transferred 82.9785 KB in 31.7888 seconds (2.6103 KB/sec)
19/11/03 03:58:28 INFO mapreduce.ImportJobBase: Retrieved 999 records.
19/11/03 03:58:28 INFO util.AppendUtils: Creating missing output directory - member_details
19/11/03 03:58:28 INFO tool.ImportTool: Saving incremental import state to the metastore
19/11/03 03:58:28 INFO tool.ImportTool: Updated data for job: job_banking_member_details_pwd
[cloudera@quickstart Downloads]$ sqoop job --exec job_banking_member_details_pwd
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/03 03:58:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
19/11/03 03:58:39 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
19/11/03 03:58:39 INFO tool.CodeGenTool: Beginning code generation
19/11/03 03:58:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `member_details` AS t LIMIT 1
19/11/03 03:58:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `member_details` AS t LIMIT 1
19/11/03 03:58:39 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/4cb02b1e7eba7623f4eaf339df186d0d/member_details.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/11/03 03:58:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/4cb02b1e7eba7623f4eaf339df186d0d/member_details.jar
19/11/03 03:58:43 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`card_id`) FROM `member_details`
19/11/03 03:58:43 INFO tool.ImportTool: Incremental import based on column `card_id`
19/11/03 03:58:43 INFO tool.ImportTool: No new rows detected since last import.
[cloudera@quickstart Downloads]$
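We can optionally verify the imported files in HDFS. With --warehouse-dir /data/banking3 and table member_details, the data should land under /data/banking3/member_details as part-m-* files (one per mapper), so something along these lines should show them:

hdfs dfs -ls /data/banking3/member_details
hdfs dfs -cat /data/banking3/member_details/part-m-00000 | head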


Observe that on the rerun, the latest value of card_id is fetched from the metastore, so Sqoop reports that no new rows have been detected since the last import.
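The saved last value can be inspected with sqoop job --show, which prints the stored job definition (in the versions I have seen, the incremental state appears there as an incremental.last.value property):

sqoop job --show job_banking_member_details_pwd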
The password is read from the local file system because the --password-file argument uses the file:// scheme.
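Two notes on the password file itself: echo -n was used so that no trailing newline ends up in the file (Sqoop uses the entire file content as the password, trailing whitespace included), and the file should be readable only by its owner. Sqoop also accepts an HDFS path for --password-file when the file:// scheme is dropped; the HDFS location below is just an example, not something used in the job above:

chmod 400 /home/cloudera/Downloads/password-file

hdfs dfs -put /home/cloudera/Downloads/password-file /user/cloudera/password-file
hdfs dfs -chmod 400 /user/cloudera/password-file
(then pass --password-file /user/cloudera/password-file)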

We can delete a saved Sqoop job using:

sqoop job --delete job_banking_member_details_pwd
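To confirm what is stored in the metastore before and after deleting, the saved jobs can be listed:

sqoop job --list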
