Frequently asked questions
Feel free to add new questions and to ping @Scalding for an answer.
Twitter uses it in production all over the place!
Check out our Powered By page for more examples.
See this conversation on Twitter.
Yes! See the cascading-user group discussion. We would like to see someone prepare a patch for scald.rb to handle submission of scalding jobs to EMR.
Scalding complains when I use a TimePathedSource and some of the data is missing. How can I ignore that error?
Pass the option `--tool.partialok` to your job and it will ignore any missing data. It is safer, though, to work around missing data by filling in placeholder empty files or by writing sources that skip known-missing dates; using this option by default is very dangerous.
I receive this error when running `sbt update`: "Error occurred during initialization of VM. Incompatible minimum and maximum heap sizes specified"
In your sbt script, set `local min=$(( $mem / 2 ))` so that the minimum heap size is half the maximum and the two sizes are always compatible.
You want to use `GroupBuilder.scanLeft`. A `scanLeft` is like a `foldLeft`, except that it outputs each intermediate value. Both of these functions are part of the standard Scala library as well. See StackOverflow for `scanLeft` examples. For the specific example of moving averages in Scalding, see the cascading-user group discussion.
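For illustration, here is a minimal sketch of a running total with `GroupBuilder.scanLeft` in the Fields API (the pipe and field names are invented for this example):

```scala
// Within each 'user group, sort by 'day and emit a running total of 'count.
// scanLeft outputs the accumulated value as it processes each row.
pipe.groupBy('user) {
  _.sortBy('day)
   .scanLeft('count -> 'runningTotal)(0L) { (acc: Long, c: Long) => acc + c }
}
```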
You can't do that. Instead you should use `RichPipe.crossWithTiny` to efficiently do a cartesian product of a small set of values with a larger set. The small set might be a single output from, say, `pipe.groupAll { _.size }`. Alternatively, you might kick off a subsequent job in `Job.next`, and use `Source.readAtSubmitter` to read the value before you get going (or even in `Job.next` to decide whether you need to kick off the next job).
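As a hedged sketch of the `crossWithTiny` pattern (the pipe and field names here are invented for illustration):

```scala
// Compute a single global row (the total count), then replicate it onto every
// row of the larger pipe with crossWithTiny so each row can be normalized.
val total = pipe.groupAll { _.size('total) } // one-row pipe carrying the field 'total

val withFraction = pipe
  .crossWithTiny(total) // cartesian product with the tiny one-row pipe
  .map(('count, 'total) -> 'fraction) { p: (Long, Long) => p._1.toDouble / p._2 }
```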
We recommend case classes defined outside of your Job. Case classes defined inside your Job capture an `$outer` member variable that references the Job, which is wasteful for serialization. If you are seeing stack overflows during case class serialization, this is likely your problem. If you have a use case this doesn't cover, email the cascading-user list or mention @scalding. Dealing with serialization well in systems like Hadoop is tricky, and we're still improving our approaches.
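A minimal sketch of the recommended layout (the job, source paths, and field names are hypothetical):

```scala
import com.twitter.scalding._

// Defined at the top level, not inside the Job, so it does not capture
// an $outer reference to the Job when it is serialized.
case class UserCount(user: String, count: Long)

class CountJob(args: Args) extends Job(args) {
  TypedPipe.from(TextLine(args("input")))
    .map { line => UserCount(line, 1L) }
    .groupBy(_.user)
    .mapValues(_.count)
    .sum
    .toTypedPipe
    .write(TypedTsv[(String, Long)](args("output")))
}
```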
See the discussion on cascading-user.
```bash
hadoop jar myjar \
  com.twitter.scalding.Tool \
  -D mapred.output.compress=false \
  -D mapred.child.java.opts=-Xmx2048m \
  -D mapred.reduce.tasks=20 \
  com.class.myclass \
  --hdfs \
  --input $input \
  --output $output
```
If you want to update the jobConf in your job, the way to do it is to override the config method in Job; an example of doing so appears below.
If you really want to just read from the jobConf, you can do it with code like:
```scala
implicitly[Mode] match {
  case Hdfs(_, configuration) => {
    // use the configuration, which is an instance of Configuration
  }
  case _ => error("Not running on Hadoop! (maybe cascading local mode?)")
}
```
See this discussion: https://groups.google.com/forum/?fromgroups=#!topic/cascading-user/YppTLebWds8
```scala
class WordCountJob(args : Args) extends Job(args) {
  // Prior to 0.9.0 we need the mode; after 0.9.0 mode is a def on Job.
  override def config(implicit m: Mode): Map[AnyRef, AnyRef] = {
    super.config ++ Map("my.job.name" -> "my new job name")
  }
}
```
Warning: this answer refers to the DEPRECATED Fields API.
Many of the examples (e.g. in the tutorial/ directory) show the fields argument specified as a Scala Tuple when reading a delimited file. However, Scala Tuples are currently limited to a maximum of 22 elements. To read in a data set with more than 22 fields, you can use a List of Symbols as the fields specifier. E.g.:
```scala
val mySchema = List('first, 'last, 'phone, 'age, 'country)

val input = Csv("/path/to/file.txt", separator = ",", fields = mySchema)
val output = Tsv("/path/to/out.txt")

input.read
  .project('age, 'country)
  .write(output)
```
Another way to specify fields is using Scala Enumerations, available on the develop branch (as of April 2, 2013), as demonstrated in Tutorial 6:
```scala
object Schema extends Enumeration {
  val first, last, phone, age, country = Value // arbitrary number of fields
}

import Schema._

Csv("tutorial/data/phones.txt", separator = " ", fields = Schema)
  .read
  .project(first, age)
  .write(Tsv("tutorial/data/output6.tsv"))
```
The spilling is controlled with the same Hadoop option as Cascading: `-Dcascading.spill.list.threshold=1000000` would keep 1 million items in memory. The rule of thumb is to use as much memory as you can without getting an OOM. Note that you can't set a default for AggregateBy; you need to set the threshold per grouping by calling the spillThreshold function on GroupBuilder: https://github.com/twitter/scalding/blob/develop/scalding-core/src/main/scala/com/twitter/scalding/GroupBuilder.scala#L97
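For illustration, a rough sketch of setting it for a single grouping (the pipe and field names are invented; spillThreshold is the GroupBuilder method linked above):

```scala
// Raise the map-side spill threshold for this grouping only.
pipe.groupBy('user) {
  _.spillThreshold(1000000)
   .size('n)
}
```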
A. If your job has dependencies that clash with Hadoop's, Hadoop can replace your version of a library (like log4j or ASM) with the version it ships with. You can fix this with an environment flag that makes sure your jars show up on the classpath before Hadoop's. Set these environment variables:

```bash
export HADOOP_CLASSPATH=<your_jar_file>
export HADOOP_USER_CLASSPATH_FIRST=true
```
A. All fields in a Job get serialized and sent to Hadoop. Your job contains an object that is not serializable, even with Kryo. This issue may exhibit itself as other exceptions, such as `InvocationTargetException`, `KryoException`, or `IllegalAccessException`. What all these potential exceptions have in common is that they are related to serialization failures during Hadoop job submission.

First, try to figure out which object is causing the problem. For a better stacktrace than the usual opaque dump, try submitting your job again with the extendedDebugInfo flag set:

```bash
export HADOOP_OPTS="-Dsun.io.serialization.extendedDebugInfo=true"; hadoop <your-commands>
```
You should see a much larger stacktrace, with many entries like this:
- field (class "com.twitter.scalding.MapsideReduce", name: "commutativeSemigroup", type: "interface com.twitter.algebird.Semigroup")
- object (class "com.twitter.scalding.MapsideReduce", MapsideReduce[decl:'key', 'value'])
- field (class "cascading.pipe.Operator", name: "operation", type: "interface cascading.operation.Operation")
- object (class "cascading.pipe.Each", Each(_pipe_2*_pipe_3)[MapsideReduce[decl:'key', 'value']])
- field (class "org.jgrapht.graph.IntrusiveEdge", name: "target", type: "class java.lang.Object")
- object (class "org.jgrapht.graph.IntrusiveEdge", org.jgrapht.graph.IntrusiveEdge@6ed95e60)
- custom writeObject data (class "java.util.HashMap")
- object (class "java.util.LinkedHashMap", {[{?}:UNKNOWN]
[{?}:UNKNOWN]=org.jgrapht.graph.IntrusiveEdge@6ce4ece3, [{2}:0:1]
Typically, if you start reading from the bottom of these entries upward, the first familiar class you see will be the object that's being unexpectedly serialized and causing you issues. In this case, the error was with Scalding's =MapsideReduce= class.
Once you know which object is causing the problem, try one of the following remedies:

- Put the object in a lazy val.
- Move it into a companion object, which will not be serialized.
- If the item is only needed at submission, but not on the mappers/reducers, make it @transient.

If you see a common case we overlooked, let us know. Some common issues are inner classes of the Job (don't do that), Logger objects (don't put those in the Job; put them in a companion object instead), and some mutable Guava objects that have given us trouble (we'd love to see this ticket closed: https://github.com/twitter/chill/issues/66).
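A hedged sketch of what those remedies look like in practice (the job, the helper class, and the logger setup are all hypothetical, and slf4j is assumed to be on the classpath):

```scala
import com.twitter.scalding._
import org.slf4j.{ Logger, LoggerFactory }

// Hypothetical helper that is expensive to build or not serializable.
class ExpensiveParser { def parse(s: String): String = s.trim }

class MyJob(args: Args) extends Job(args) {
  // Remedy 1: wrap the object in a lazy val.
  lazy val parser = new ExpensiveParser

  // Remedy 3: @transient for state needed only at submission time,
  // not on the mappers/reducers.
  @transient private val submittedAt = new java.util.Date

  TypedPipe.from(TextLine(args("input")))
    .map { line => parser.parse(line) }
    .write(TypedTsv[String](args("output")))
}

// Remedy 2: members of the companion object are not serialized with the Job,
// so it is a good home for Logger instances and similar objects.
object MyJob {
  val log: Logger = LoggerFactory.getLogger(classOf[MyJob])
}
```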
The problem was in how I was defining my tests. For Scalding, your Specs2 tests must look like this:
"A job which trys to do blah" should {
<<RUN JOB>>
"successfully do blah" in {
expected.blah must_== actual.blah
}
}
My problem was that my tests looked like this:
"A job which trys to do blah" should {
"successfully do blah" in {
<<RUN JOB>>
expected.blah must_== actual.blah
}
}
In other words, running the job was inside the `in {}` block. For some reason, this was leading to multiple jobs running at the same time and conflicting with each other's output.
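For concreteness, here is a hedged sketch of that structure using JobTest (the job, sources, and expected values are made up; depending on your Scalding version, the final call may need to be .finish()):

```scala
import com.twitter.scalding._
import org.specs2.mutable.Specification

// Hypothetical job under test: counts occurrences of each input line.
class BlahJob(args: Args) extends Job(args) {
  TypedPipe.from(TypedTsv[String](args("input")))
    .groupBy(identity)
    .size
    .toTypedPipe
    .write(TypedTsv[(String, Long)](args("output")))
}

class BlahJobSpec extends Specification {
  "A job which tries to do blah" should {
    // Run the job once, OUTSIDE of the `in` blocks, and capture its output.
    var counts = Map.empty[String, Long]
    JobTest(new BlahJob(_))
      .arg("input", "in")
      .arg("output", "out")
      .source(TypedTsv[String]("in"), List("a", "b", "a"))
      .sink[(String, Long)](TypedTsv[(String, Long)]("out")) { buf => counts = buf.toMap }
      .run
      .finish

    "successfully do blah" in {
      counts("a") must_== 2L
    }
  }
}
```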
If anyone is interested, the diff which fixed my tests is here:
https://github.com/snowplow/snowplow/commit/792ed2f9082b871ecedcf36956427a2f0935588c
See the Scalding HBase page.
Q) What version of SBT do I need? (It'd be great to capture the actual error that happens when you use the wrong version)
A) Get SBT 0.12.2. If you have an older version of SBT installed via Homebrew, you can update it from the command line:

```bash
brew update; brew unlink sbt; brew install sbt
```
Q) What happens if I get OutOfMemoryErrors when running "sbt assembly"?
A) Create ~/.sbtconfig with these options:

```bash
SBT_OPTS="-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
```
Q) What should I do if I get "value compare is not a member of object Integer" when running "./sbt compile"?
A) You're probably using Java 6 instead of Java 7. You can specify which version of Java SBT should use by passing it the -java-home option. For example, on a Mac your SBT command might look something like:

```bash
./sbt -java-home /Library/Java/JavaVirtualMachines/<insert folder name of desired JVM version>/Contents/Home/
```
Yes! By requesting a pull, you are agreeing to license your code under the same license as Scalding.