Simplest way to do parallel replicate

I am fond of the parallel package in R and how easy and intuitive it is to do parallel versions of apply, sapply, etc.

Is there a similar parallel function for replicate?


You can just use the parallel versions of lapply or sapply. Instead of telling replicate to evaluate an expression n times, you apply over 1:n, and instead of passing an expression, you wrap it in a function that ignores the argument sent to it.

Possibly something like:

# load the parallel package
library(parallel)

# create cluster
cl <- makeCluster(detectCores() - 1)
# get library support needed to run the code
clusterEvalQ(cl, library(stats))
# put objects in place that might be needed for the code
myData <- data.frame(x = 1:10, y = rnorm(10))
clusterExport(cl, "myData")
# set a different RNG stream on each member of the cluster (just in case)
clusterSetRNGStream(cl)
# ... then parallel replicate ...
parSapply(cl, 1:10000, function(i) { x <- rnorm(10); mean(x)/sd(x) })
# stop the cluster
stopCluster(cl)

as the parallel equivalent of:

replicate(10000, {x <- rnorm(10); mean(x)/sd(x) } )

Using clusterEvalQ as a model, I think I would implement a parallel replicate as:

parReplicate <- function(cl, n, expr, simplify=TRUE, USE.NAMES=TRUE)
  parSapply(cl, integer(n), function(i, ex) eval(ex, envir=.GlobalEnv),
            substitute(expr), simplify=simplify, USE.NAMES=USE.NAMES)

The arguments simplify and USE.NAMES follow sapply rather than replicate, but in my opinion they make it a better wrapper around parSapply.

Here's an example derived from the replicate man page:

cl <- makePSOCKcluster(3)
hist(parReplicate(cl, 100, mean(rexp(10))))
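If the simulations need to be reproducible, seed each worker's RNG stream with clusterSetRNGStream before calling parReplicate. A minimal sketch (repeating the parReplicate definition from above so the snippet runs on its own):

```r
library(parallel)

# parReplicate as defined above
parReplicate <- function(cl, n, expr, simplify = TRUE, USE.NAMES = TRUE)
  parSapply(cl, integer(n), function(i, ex) eval(ex, envir = .GlobalEnv),
            substitute(expr), simplify = simplify, USE.NAMES = USE.NAMES)

cl <- makePSOCKcluster(2)

# seed every worker's RNG stream, run, then reseed and run again
clusterSetRNGStream(cl, iseed = 123)
r1 <- parReplicate(cl, 5, mean(rexp(10)))
clusterSetRNGStream(cl, iseed = 123)
r2 <- parReplicate(cl, 5, mean(rexp(10)))
identical(r1, r2)  # TRUE

stopCluster(cl)
```

Resetting the streams to the same iseed makes both runs draw identical random numbers on each worker, so the two result vectors match.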

This is the best I could come up with:

cl <- makeCluster(getOption("cl.cores", 4))
clusterCall(cl, function() replicate(50, simulate_fxns()))
stopCluster(cl)

Note that clusterCall expects a function, not an already-evaluated expression, and it runs that function once on every worker, so this returns 50 replicates from each worker rather than 50 in total.
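Because clusterCall evaluates its function once per worker and returns a list of per-worker results, a fixed total number of replicates can be split across the workers and the pieces concatenated afterwards. A sketch, where simulate_fxns is a hypothetical stand-in for the simulation function above:

```r
library(parallel)

cl <- makeCluster(4)

# hypothetical stand-in for the simulation function, exported to the workers
simulate_fxns <- function() mean(rnorm(10))
clusterExport(cl, "simulate_fxns")

# clusterCall runs its function once per worker; pass each worker its
# share of the replicates and concatenate the per-worker vectors
reps_per_worker <- 25   # 4 workers x 25 = 100 replicates in total
pieces <- clusterCall(cl, function(n) replicate(n, simulate_fxns()),
                      reps_per_worker)
results <- do.call(c, pieces)
length(results)  # 100

stopCluster(cl)
```

Extra arguments to clusterCall are passed on to the function, which is how each worker learns its share of the work.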
