wordCount3 (aggregateByKey)

Author: yayooo | Published 2019-07-31 00:06

The initial value is 0: values are first summed within each partition (seqOp), then the per-partition results are summed across partitions (combOp). Here partition 0 holds ("a",1),("b",2),("b",3) and partition 1 holds ("a",3),("b",4),("a",5), so the in-partition sums a→1, b→5 and a→8, b→4 combine to a→9, b→9.

package com.atguigu

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object Trans {
  def main(args: Array[String]): Unit = {

    val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("Spark01_Partition")
    // Build the Spark context
    val sc = new SparkContext(conf)

    // Six (word, count) pairs spread across two partitions
    val rdd: RDD[(String, Int)] = sc.makeRDD(List(("a",1),("b",2),("b",3),("a",3),("b",4),("a",5)), 2)
    // aggregateByKey(zeroValue)(seqOp, combOp): seqOp folds values into the
    // accumulator within a partition; combOp merges accumulators across partitions
    val rdd2: RDD[(String, Int)] = rdd.aggregateByKey(0)((x, y) => x + y, (x, y) => x + y)
    rdd2.collect().foreach(println)

    sc.stop()
  }
}

Output:

(b,9)
(a,9)
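
The two functions passed to aggregateByKey do not have to be the same. As a minimal sketch (not from the original post; the object name TransMaxThenSum is made up), the following takes the maximum value per key within each partition, then sums those per-partition maxima across partitions:

package com.atguigu

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object TransMaxThenSum {
  def main(args: Array[String]): Unit = {
    val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("aggregateByKey_maxThenSum")
    val sc = new SparkContext(conf)

    // Same six pairs in two partitions as above
    val rdd: RDD[(String, Int)] = sc.makeRDD(List(("a",1),("b",2),("b",3),("a",3),("b",4),("a",5)), 2)
    // seqOp keeps the running max per key inside a partition;
    // combOp adds the per-partition maxima together
    val rdd2: RDD[(String, Int)] = rdd.aggregateByKey(0)((x, y) => math.max(x, y), (x, y) => x + y)
    rdd2.collect().foreach(println)

    sc.stop()
  }
}

With partition 0 holding ("a",1),("b",2),("b",3) and partition 1 holding ("a",3),("b",4),("a",5), the in-partition maxima are a→1, b→3 and a→5, b→4, so the combined result is (a,6) and (b,7).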
