(7) Flume Push Data to Spark Streaming

Author: 白面葫芦娃92 | Published 2018-11-23 23:07

Note: the Spark app must be started before Flume. (Do not use this push approach in production, because the volume of data Flume collects is unknown and can overwhelm the receiver.)
1. Develop the code in IDEA

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingFlumeApp01 {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("StreamingFlumeApp01")
    val ssc = new StreamingContext(sparkConf,Seconds(10))

    val lines = FlumeUtils.createStream(ssc,"192.168.137.141",41414)
//    SparkFlumeEvent ==> String
    lines.map(x=>new String(x.event.getBody.array()).trim)
      .flatMap(_.split(",")).map((_,1)).reduceByKey(_+_)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
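Given the warning above about push mode, spark-streaming-flume also ships a pull-based receiver, `FlumeUtils.createPollingStream`, where Flume buffers events in its channel and Spark pulls them at its own rate. A minimal sketch, assuming the same host and port as the push example (Flume must then be configured with the `org.apache.spark.streaming.flume.sink.SparkSink` instead of a plain avro sink):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingFlumePullApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("StreamingFlumePullApp")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Pull mode: Spark connects to Flume's SparkSink and fetches events,
    // so the start order of the two processes no longer matters.
    val lines = FlumeUtils.createPollingStream(
      ssc, "192.168.137.141", 41414, StorageLevel.MEMORY_AND_DISK_SER_2)

    // Same word count as the push version: decode the event body, split on commas.
    lines.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(",")).map((_, 1)).reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Because the channel buffers events until Spark connects, pull mode is the safer choice when the incoming data volume is unpredictable.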

2. Package and submit to Spark

[hadoop@hadoop001 ~]$ cd $SPARK_HOME
[hadoop@hadoop001 spark-2.3.1-bin-2.6.0-cdh5.7.0]$ cd bin
[hadoop@hadoop001 bin]$ ./spark-submit --master local[2] \
> --packages org.apache.spark:spark-streaming-flume_2.11:2.3.1 \
> --class com.ruozedata.SparkStreaming.StreamingFlumeApp01 \
> /home/hadoop/lib/spark-train-1.0.jar

3. Flume agent configuration file

avro-sink-agent.sources = netcat-source
avro-sink-agent.sinks = avro-sink
avro-sink-agent.channels = netcat-memory-channel

avro-sink-agent.sources.netcat-source.type = netcat
avro-sink-agent.sources.netcat-source.bind = hadoop001
avro-sink-agent.sources.netcat-source.port = 44444

avro-sink-agent.channels.netcat-memory-channel.type = memory

avro-sink-agent.sinks.avro-sink.type = avro
avro-sink-agent.sinks.avro-sink.hostname = hadoop001
avro-sink-agent.sinks.avro-sink.port = 41414

avro-sink-agent.sources.netcat-source.channels = netcat-memory-channel
avro-sink-agent.sinks.avro-sink.channel = netcat-memory-channel

4. Start Flume

[hadoop@hadoop001 bin]$ pwd
/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin/bin
[hadoop@hadoop001 bin]$ ./flume-ng agent \
> --name avro-sink-agent \
> --conf $FLUME_HOME/conf \
> --conf-file /home/hadoop/script/flume/flume_push_streaming.conf \
> -Dflume.root.logger=INFO,console \
> -Dflume.monitoring.type=http \
> -Dflume.monitoring.port=34343

5. Start telnet

[hadoop@hadoop001 ~]$ telnet hadoop001 44444
Trying 192.168.137.141...
Connected to hadoop001.
Escape character is '^]'.
huluwa
OK
spark
OK
huluwa
OK

6. Result:

-------------------------------------------
Time: 1538138030000 ms
-------------------------------------------
(huluwa,2)
(spark,1)

The spark-train-1.0.jar passed to spark-submit is a thin jar: it does not contain org.apache.spark:spark-streaming-flume_2.11:2.3.1. Since `--packages org.apache.spark:spark-streaming-flume_2.11:2.3.1` cannot download the dependency on a machine without internet access, consider building a fat jar that bundles spark-streaming-flume directly.
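One way to build such a fat jar is the maven-shade-plugin. A pom.xml sketch (versions are assumptions; adjust to your build, and keep spark-streaming itself as `provided` so only the Flume connector gets bundled):

```xml
<!-- Assumed pom.xml fragment: shade spark-streaming-flume into the application jar -->
<dependencies>
  <!-- provided: supplied by the cluster at runtime, excluded from the fat jar -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.3.1</version>
    <scope>provided</scope>
  </dependency>
  <!-- compile scope (the default), so it is packed into the shaded jar -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.3.1</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With the fat jar, the `--packages` flag can be dropped from the spark-submit command in step 2.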
