Exploring SparkR
R-bloggers 2015-07-01
(This article was first published on Blag's bag of rants, and kindly contributed to R-bloggers)
A colleague from work asked me to look into Spark and R. So the most obvious thing to do was to investigate SparkR -;)
I installed Scala, Hadoop, Spark and SparkR…I'm not sure Hadoop is actually needed for this…but I wanted to have the full picture -:)
Anyway…I came across a piece of code that reads lines from a file and counts how many lines contain an "a" and how many contain a "b"…
For this code I used the lyrics of Girls Not Grey by AFI…
SparkR.R

library(SparkR)

start.time <- Sys.time()

# Start a local Spark context
sc <- sparkR.init(master="local")

# Load the file as an RDD of lines
logFile <- "/home/blag/R_Codes/Girls_Not_Grey"
logData <- SparkR:::textFile(sc, logFile)

# Count the lines that contain an "a" and the lines that contain a "b"
numAs <- count(SparkR:::filterRDD(logData, function(s) { grepl("a", s) }))
numBs <- count(SparkR:::filterRDD(logData, function(s) { grepl("b", s) }))

paste("Lines with a: ", numAs, ", Lines with b: ", numBs, sep="")

end.time <- Sys.time()
time.taken <- end.time - start.time
time.taken
0.3167355 seconds…pretty fast…I wonder how regular R will behave?
PlainR.Rlibrary("stringr") start.time <- Sys.time() logFile <- "/home/blag/R_Codes/Girls_Not_Grey" logfile<-read.table(logFile,header = F, fill = T) logfile<-apply(logfile[,], 1, function(x) paste(x, collapse=" ")) df<-data.frame(lines=logfile) a<-sum(apply(df,1,function(x) grepl("a",x))) b<-sum(apply(df,1,function(x) grepl("b",x))) paste("Lines with a: ", a, ", Lines with b: ", b, sep="") end.time <- Sys.time() time.taken <- end.time - start.time time.taken
Nice…0.01522398 seconds…wait…what? Isn’t Spark supposed to be pretty fast? Well…I remembered that I read somewhere that Spark shines with big files…
Well…I prepared a file with 5 columns and 1 million records…let’s see how that goes…
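(The post doesn't show how that test file was built, so purely as an illustration — the random-letter generator and column contents below are my own assumptions, not the actual data — something along these lines could produce a comparable 5-column, 1-million-row CSV:)

set.seed(42)
n <- 1000000

# Build n random lowercase "words" of a fixed length
random_words <- function(n, len = 8) {
  apply(matrix(sample(letters, n * len, replace = TRUE), nrow = n), 1, paste, collapse = "")
}

doc <- data.frame(col1 = random_words(n),
                  col2 = random_words(n),
                  col3 = sample(1:100000, n, replace = TRUE),
                  col4 = runif(n),
                  col5 = random_words(n))

write.csv(doc, "/home/blag/R_Codes/Doc_Header.csv", row.names = FALSE)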
SparkR.R

library(SparkR)

start.time <- Sys.time()

sc <- sparkR.init(master="local")

logFile <- "/home/blag/R_Codes/Doc_Header.csv"
logData <- SparkR:::textFile(sc, logFile)

numAs <- count(SparkR:::filterRDD(logData, function(s) { grepl("a", s) }))
numBs <- count(SparkR:::filterRDD(logData, function(s) { grepl("b", s) }))

paste("Lines with a: ", numAs, ", Lines with b: ", numBs, sep="")

end.time <- Sys.time()
time.taken <- end.time - start.time
time.taken
26.45734 seconds for a million records? Nice job -:) Let’s see if plain R wins again…
PlainR.Rlibrary("stringr") start.time <- Sys.time() logFile <- "/home/blag/R_Codes/Doc_Header.csv" logfile<-read.csv(logFile,header = F) logfile<-apply(logfile[,], 1, function(x) paste(x, collapse=" ")) df<-data.frame(lines=logfile) a<-sum(apply(df,1,function(x) grepl("a",x))) b<-sum(apply(df,1,function(x) grepl("b",x))) paste("Lines with a: ", a, ", Lines with b: ", b, sep="") end.time <- Sys.time() time.taken <- end.time - start.time time.taken
48.31641 seconds? Looks like Spark was almost twice as fast this time…and this is a pretty simple example…I'm sure that as complexity grows…the gap gets even bigger…
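(Just to sketch what "more complexity" could look like — this is my own toy example, not something benchmarked in this post — the same private RDD functions can be chained with more elaborate predicates:)

logData <- SparkR:::textFile(sc, logFile)

# Lines containing both an "a" and a "b"
numBoth <- count(SparkR:::filterRDD(logData, function(s) { grepl("a", s) & grepl("b", s) }))

# Lines containing at least one digit
numDigits <- count(SparkR:::filterRDD(logData, function(s) { grepl("[0-9]", s) }))

paste("Lines with a and b: ", numBoth, ", Lines with digits: ", numDigits, sep="")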
And sure…I know that a lot of people can take my plain R code and make it even faster than Spark…but…this is my blog…not theirs -;)
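(For the curious, one way such a speed-up might look — just a sketch, not timed against the runs above: reading the file with readLines and letting grepl work on the whole character vector at once avoids the per-row apply calls entirely.)

logFile <- "/home/blag/R_Codes/Doc_Header.csv"

# readLines gives one string per line; grepl is vectorized over the whole vector
lines <- readLines(logFile)
a <- sum(grepl("a", lines))
b <- sum(grepl("b", lines))

paste("Lines with a: ", a, ", Lines with b: ", b, sep="")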
I will come back as soon as I learn more about SparkR -:D
Greetings,
Blag.
Development Culture.