Why does lines.map not work in Spark, while lines.take.map does?

I am new to Scala and Spark.

I am practicing with the SparkHdfsLR.scala example code,

but I ran into a problem with this part of the code:

60    val lines = sc.textFile(inputPath)
61    val points = lines.map(parsePoint _).cache()
62    val ITERATIONS = args(2).toInt

Line 61 does not work. It only works after I change it to the following:

60    val lines = sc.textFile(inputPath)
61    val points = lines.take(149800).map(parsePoint _)  // 149800 is the total number of lines
62    val ITERATIONS = args(2).toInt
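(As background on the two variants, not part of the original post: `lines.map(parsePoint _)` is a lazy RDD transformation whose work runs on the executors once an action triggers it, whereas `lines.take(149800)` is an action that returns a plain `Array[String]` to the driver, so the `.map` after it is an ordinary local Scala collection map. A minimal sketch, reusing the names from the snippet above:)

// Distributed version: a lazy RDD transformation.
// parsePoint runs inside the executor JVMs once a later action triggers the job.
val points = lines.map(parsePoint _).cache()

// take() is an action: it materialises (up to) 149800 rows on the driver,
// and the map that follows is a local Array map in the driver JVM.
val pointsLocal = lines.take(149800).map(parsePoint _)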

The error message from sbt run is:

[error] (run-main) org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:379)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
    at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
java.lang.RuntimeException: Nonzero exit code: 1
    at scala.sys.package$.error(package.scala:27)
[error] {file:/var/sdb/home/tim.tan/workspace/spark/}default-d3d73f/compile:run: Nonzero exit code: 1
[error] Total time: 52 s, completed Dec 20, 2013 5:42:18 PM

The stderr output on the task node is:

13/12/20 17:42:16 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:16 INFO executor.StandaloneExecutorBackend: Connecting to driver: akka://spark@SHXJ-H07-SDB06:38975/user/StandaloneScheduler
13/12/20 17:42:17 INFO executor.StandaloneExecutorBackend: Successfully registered with driver
13/12/20 17:42:17 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to BlockManagerMaster: akka://spark@SHXJ-H07-SDB06:38975/user/BlockManagerMaster
13/12/20 17:42:17 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/12/20 17:42:17 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20131220174217-be8e
13/12/20 17:42:17 INFO network.ConnectionManager: Bound socket to port 52043 with id = ConnectionManagerId(TS-BH90,52043)
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Registered BlockManager
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to MapOutputTracker: akka://spark@SHXJ-H07-SDB06:38975/user/MapOutputTracker
13/12/20 17:42:17 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1b1a6c0b-965e-4834-a3d3-554c95442041
13/12/20 17:42:17 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/20 17:42:17 INFO server.AbstractConnector: Started [email protected]:41811
13/12/20 17:42:18 ERROR executor.StandaloneExecutorBackend: Driver terminated or disconnected! Shutting down.

The worker node's log is as follows:

13/12/19 17:49:26 INFO worker.Worker: Asked to launch executor app-20131219174926-0001/2 for SparkHdfsLR
13/12/19 17:49:26 INFO worker.ExecutorRunner: Launch command: "java" "-cp" ":/var/bh/spark/conf:/var/bh/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop1.0.3.jar:/var/bh/spark/core/target/scala-2.9.3/test-classes:/var/bh/spark/repl/target/scala-2.9.3/test-classes:/var/bh/spark/mllib/target/scala-2.9.3/test-classes:/var/bh/spark/bagel/target/scala-2.9.3/test-classes:/var/bh/spark/streaming/target/scala-2.9.3/test-classes" "-Djava.library.path=/var/bh/hadoop/lib/native/Linux-amd64-64/" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://spark@SHXJ-H07-SDB06:56158/user/StandaloneScheduler" "2" "TS-BH87" "8"
13/12/19 17:49:30 INFO worker.Worker: Asked to kill executor app-20131219174926-0001/2
13/12/19 17:49:30 INFO worker.ExecutorRunner: Runner thread for executor app-20131219174926-0001/2 interrupted
13/12/19 17:49:30 INFO worker.ExecutorRunner: Killing process!

It looks like the executor on the worker node never launched successfully.

I don't know why. Can anyone give me some suggestions?


Answer:

I found out why it doesn't work.

Because of some incorrect configuration, Spark could only work in standalone mode. After correcting the configuration, if you want the code to run in distributed mode, the last two parameters must be specified for the SparkContext constructor:

new SparkContext(master, jobName, [sparkHome], [jars])

If the last two parameters are not specified, the Scala script will only work in standalone mode.
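(For reference, a minimal sketch of a fully specified constructor call with the Spark 0.8-era API. The master URL port and the jar name below are placeholders, not taken from the original post; the Spark home path is the one visible in the worker log above.)

import org.apache.spark.SparkContext

// Hypothetical values -- replace with your own cluster's settings.
val sc = new SparkContext(
  "spark://SHXJ-H07-SDB06:7077",        // master URL of the standalone cluster (port assumed)
  "SparkHdfsLR",                        // job name shown in the web UI
  "/var/bh/spark",                      // sparkHome on the worker nodes
  List("target/scala-2.9.3/sparkhdfslr_2.9.3-1.0.jar")  // jar(s) with your compiled job (name assumed)
)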
