Spark on EMR: runtime doesn't decrease when the number of nodes increases

My Spark program takes a large number of zip files containing JSON data from S3, performs some cleaning on the data in the form of Spark transforms, and then saves the result as Parquet files. When I run the program on 1 GB of data in AWS with 10 nodes of 8 GB each, it takes about 11 minutes. I changed the cluster to 20 nodes of 32 GB each, and it still takes about 10 minutes, a reduction of only around 1 minute. Why do I see this behavior?
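Roughly, the job looks like the sketch below (simplified; the bucket names and the cleaning step are placeholders, and the input is shown as gzip-compressed JSON, which Spark reads natively, whereas real .zip archives need custom decompression first):

    import org.apache.spark.sql.SparkSession

    object CleanToParquet {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clean-json-to-parquet")
          .getOrCreate()

        // Read the compressed JSON from S3 (shown as .json.gz here;
        // plain .zip archives would need custom decompression first).
        val raw = spark.read.json("s3://my-bucket/input/*.json.gz")

        // Placeholder cleaning transforms: drop rows with a null id
        // and remove duplicates.
        val cleaned = raw.na.drop(Seq("id")).dropDuplicates("id")

        // Save the cleaned data as Parquet.
        cleaned.write.mode("overwrite").parquet("s3://my-bucket/output/")
        spark.stop()
      }
    }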

Answers


Because adding more machines isn't always the solution: more machines also means more data transfer over the network, which in many cases becomes the bottleneck.

Also, 1 GB of data isn't big enough for meaningful scalability and performance benchmarking.
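One quick check, sketched below with placeholder paths: look at how many partitions the input actually produces. Compressed archives such as gzip and zip are not splittable, so each file becomes a single task, and beyond that count extra nodes sit idle; likewise, shuffle parallelism should track the data size, not the cluster size.

    import org.apache.spark.sql.SparkSession

    object CheckParallelism {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("check-parallelism")
          .getOrCreate()

        // Each non-splittable compressed file maps to one partition, i.e.
        // one task; this number caps the useful read parallelism no matter
        // how many nodes the cluster has.
        val raw = spark.read.json("s3://my-bucket/input/*.json.gz")
        println(s"Input partitions: ${raw.rdd.getNumPartitions}")

        // For ~1 GB the default of 200 shuffle partitions is already more
        // than enough; a smaller value reduces per-task overhead.
        spark.conf.set("spark.sql.shuffle.partitions", "64")

        spark.stop()
      }
    }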

