Transferring data between HDFS and S3 with distcp

Author: xuefly | Published 2019-04-10 12:18

Part of the article series Big Data Security in Practice: https://www.jianshu.com/p/76627fd8399c

  • The bandwidth between HDFS and S3 is the upper bound on copy speed.
    • -m <num_maps> sets the number of mappers.
    • -bandwidth <MB/s> caps the bandwidth each mapper may use.
    • Raising the mapper count and the per-mapper bandwidth may speed up the transfer, but mappers * -bandwidth can never exceed the actual HDFS-to-S3 bandwidth (see the sketch after this list).
  • The farther the Hadoop cluster is from S3, the lower the bandwidth and the slower the copy. Keeping the hosts and the S3 bucket in the same region usually speeds things up.
  • Even when the Hadoop cluster itself runs on cloud infrastructure, the copy can still be slow because it trips S3's request-rate limits: under heavy load on a single directory tree, S3 may throttle or reject requests.
    • A copy that moves a large volume of data with many mappers can itself slow uploads to S3.
    • If adding mappers actually slows the copy down, throttling is the likely cause.
  • The -p[rbugpcaxt] (preserve) options are meaningless for S3, and using them turns every transfer into a full copy.
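
A minimal sketch of these tuning flags (the mapper count, bandwidth cap, and paths here are illustrative, not taken from the run below):

# 10 mappers, each capped at 20 MB/s: at most 10 * 20 = 200 MB/s toward S3.
# Keep that product below the real HDFS-to-S3 link capacity, and omit -p so
# that -update can skip files that are already up to date.
hadoop distcp \
  -m 10 \
  -bandwidth 20 \
  -update \
  /user/hive/warehouse/ads.db \
  s3a://bucket/user/hive/warehouse/ads.db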
The actual run, copying the ads.db warehouse directory to S3:

hive@cdh-slave01:/tmp$ hadoop distcp -Dfs.s3a.access.key=fdsfdsfdsfsdIMNH6ZBsdf  -Dfs.s3a.secret.key=fasdfds  -update /user/hive/warehouse/ads.db  s3a://bucket/user/hive/warehouse/ads.db
19/04/10 06:59:01 INFO tools.OptionsParser: parseChunkSize: blocksperchunk false
19/04/10 06:59:02 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
19/04/10 06:59:02 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
19/04/10 06:59:02 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
19/04/10 06:59:03 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key
19/04/10 06:59:03 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null,logPath=null, sourceFileListing=null, sourcePaths=[/user/hive/warehouse/ads.db], targetPath=s3a://bucket/user/hive/warehouse/ads.db, targetPathExists=true, filtersFile='null', blocksPerChunk=0, copyBufferSize=8192}
19/04/10 06:59:04 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 2843; dirCnt = 19
19/04/10 06:59:04 INFO tools.SimpleCopyListing: Build file listing completed.
19/04/10 06:59:04 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
19/04/10 06:59:04 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
19/04/10 06:59:04 INFO tools.DistCp: Number of paths in the copy list: 2843
19/04/10 06:59:04 INFO tools.DistCp: Number of paths in the copy list: 2843
19/04/10 06:59:04 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm721
19/04/10 06:59:04 INFO mapreduce.JobSubmitter: number of splits:21
19/04/10 06:59:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1554779117367_2179
19/04/10 06:59:05 INFO impl.YarnClientImpl: Submitted application application_1554779117367_2179
19/04/10 06:59:05 INFO mapreduce.Job: The url to track the job: http://cdh-master2:8088/proxy/application_1554779117367_2179/
19/04/10 06:59:05 INFO tools.DistCp: DistCp job-id: job_1554779117367_2179
19/04/10 06:59:05 INFO mapreduce.Job: Running job: job_1554779117367_2179
19/04/10 06:59:10 INFO mapreduce.Job: Job job_1554779117367_2179 running in uber mode : false
19/04/10 06:59:10 INFO mapreduce.Job:  map 0% reduce 0%
19/04/10 06:59:18 INFO mapreduce.Job:  map 10% reduce 0%
19/04/10 06:59:19 INFO mapreduce.Job:  map 19% reduce 0%
19/04/10 06:59:20 INFO mapreduce.Job:  map 52% reduce 0%
19/04/10 06:59:21 INFO mapreduce.Job:  map 67% reduce 0%
19/04/10 06:59:22 INFO mapreduce.Job:  map 86% reduce 0%
19/04/10 06:59:23 INFO mapreduce.Job:  map 95% reduce 0%
19/04/10 06:59:24 INFO mapreduce.Job:  map 100% reduce 0%
19/04/10 06:59:25 INFO mapreduce.Job: Job job_1554779117367_2179 completed successfully
19/04/10 06:59:25 INFO mapreduce.Job: Counters: 40
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=3304256
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=985334
        HDFS: Number of bytes written=413302
        HDFS: Number of read operations=5797
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=42
        S3A: Number of bytes read=0
        S3A: Number of bytes written=32647
        S3A: Number of read operations=3014
        S3A: Number of large read operations=0
        S3A: Number of write operations=37
    Job Counters
        Launched map tasks=21
        Other local map tasks=21
        Total time spent by all maps in occupied slots (ms)=183711
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=183711
        Total vcore-milliseconds taken by all map tasks=183711
        Total megabyte-milliseconds taken by all map tasks=564360192
    Map-Reduce Framework
        Map input records=2843
        Map output records=2822
        Input split bytes=2394
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=9536
        CPU time spent (ms)=136920
        Physical memory (bytes) snapshot=9398771712
        Virtual memory (bytes) snapshot=56006500352
        Total committed heap usage (bytes)=15602810880
    File Input Format Counters
        Bytes Read=950293
    File Output Format Counters
        Bytes Written=413302
    DistCp Counters
        Bytes Copied=32647
        Bytes Expected=32647
        Bytes Skipped=1800151
        Files Copied=21
        Files Skipped=2822
19/04/10 06:59:25 INFO impl.MetricsSystemImpl: Stopping s3a-file-system metrics system...
19/04/10 06:59:25 INFO impl.MetricsSystemImpl: s3a-file-system metrics system stopped.
19/04/10 06:59:25 INFO impl.MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
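
Since this series is about big data security, note that passing fs.s3a.access.key and fs.s3a.secret.key as -D options exposes the keys in shell history and in the submitted job configuration. One alternative is a Hadoop credential provider; a minimal sketch (the jceks path is illustrative):

# Store the keys once in an encrypted JCEKS keystore on HDFS
# (each command prompts for the secret value interactively)
hadoop credential create fs.s3a.access.key -provider jceks://hdfs/user/hive/s3.jceks
hadoop credential create fs.s3a.secret.key -provider jceks://hdfs/user/hive/s3.jceks
# Point distcp at the keystore instead of putting keys on the command line
hadoop distcp \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/user/hive/s3.jceks \
  -update /user/hive/warehouse/ads.db s3a://bucket/user/hive/warehouse/ads.db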

# Check the directory count, file count, and total bytes from the command line
hive@cdh-slave01:/tmp$ hdfs dfs -count /user/hive/warehouse/ads.db
          20         2824            1832798 /user/hive/warehouse/ads.db

The copy is incremental only when the DistCp counters include the Skipped lines; without them, the run was a full copy.
To verify against this run's output:
- Number of paths in the copy list: 2843 counts files and directories together, so Files Copied + Files Skipped must equal it (file count + directory count).
- Likewise, Bytes Copied + Bytes Skipped must equal the total byte size of the directory tree.
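
Plugging in the numbers from the counters and the hdfs dfs -count output above:

# 21 files copied + 2822 skipped = 2843, the size of the copy list
echo $((21 + 2822))
# 32647 bytes copied + 1800151 skipped = 1832798, matching hdfs dfs -count
echo $((32647 + 1800151))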

References

  • DistCp official documentation

  • Improving DistCp Performance

  • hadoop fs -count <hdfs path>

    Counts the directories, files, and total file size under the given HDFS path.

    The output columns are: directory count, file count, total size in bytes, and the input path.

    For example:

    hadoop fs -count /data/dltb3yi/

     1        24000       253953854502 /data/dltb3yi/      (24000 files)
    
    

