
CSC 375 Assignment 2 Benchmarking Results - Justyce Countryman

About this page

This page discusses the benchmarking results of running a concurrent program with the Java Microbenchmark Harness (JMH) in two different ways: with a non-concurrent data structure protected by a custom-made locking scheme, and with a concurrent data structure taken directly from the Java library.

Context of Programming Project

The purpose of my project is to generate a random semester schedule of six courses from a .txt file that (hopefully) contains all SUNY Oswego courses. The courses are held either in a HashMap guarded by a ReadWriteLock or in a ConcurrentHashMap. Each benchmark uses 32 threads, and each thread has the same fixed probability of either reading from the data structure six times to build a schedule of six random classes or writing a single new course to the data structure. With the HashMap and ReadWriteLock, threads acquire and release the read or write lock around each access, so data is consumed or produced only while the appropriate lock is held. With the ConcurrentHashMap, the reading or writing happens inside a synchronized function instead, which still ensures thread safety.
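
The sketch below illustrates the two approaches under stated assumptions: the class, field, and method names are hypothetical and are not taken from the actual assignment code. It shows a plain HashMap guarded by a ReentrantReadWriteLock alongside a ConcurrentHashMap accessed through synchronized methods, mirroring the description above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the two storage approaches; not the assignment code.
class CourseCatalogSketch {
    // Approach 1: non-concurrent HashMap guarded by a ReadWriteLock.
    private final Map<String, String> lockedCourses = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Approach 2: ConcurrentHashMap from the Java library.
    private final Map<String, String> concurrentCourses = new ConcurrentHashMap<>();

    String readWithLock(String code) {
        rwLock.readLock().lock();           // acquire the read lock
        try {
            return lockedCourses.get(code); // consume data while the lock is held
        } finally {
            rwLock.readLock().unlock();     // release the read lock
        }
    }

    void writeWithLock(String code, String title) {
        rwLock.writeLock().lock();          // acquire the write lock
        try {
            lockedCourses.put(code, title); // produce data while the lock is held
        } finally {
            rwLock.writeLock().unlock();    // release the write lock
        }
    }

    // Per the description above, ConcurrentHashMap accesses go through
    // synchronized methods, which also guarantees thread safety.
    synchronized String readConcurrent(String code) {
        return concurrentCourses.get(code);
    }

    synchronized void writeConcurrent(String code, String title) {
        concurrentCourses.put(code, title);
    }
}
```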

What are the Different Loads?

For each platform, there are four charts showing the benchmarking results of four different loads. The loads are simply four fixed probabilities that each thread has of reading versus writing. All benchmarks were launched from my 2020 13" M1 MacBook Pro with 16 GB of RAM and 8 cores (4 performance and 4 efficiency). The three platforms are the Rho server, the Gee server, and the MacBook Pro itself.

Note: Each benchmark uses the default number of forks, warmup iterations, and measurement iterations, which is 5 of each.
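As a rough illustration, the JMH setup could look something like the sketch below for the ConcurrentHashMap variant. The class name, the @Param values standing in for the four loads, the course-key scheme, and the map population are all illustrative assumptions rather than the actual benchmark code, and for brevity the sketch accesses the map directly rather than through the synchronized method described earlier.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MICROSECONDS) // scores reported as operations per microsecond
@Threads(32)                           // 32 threads share the benchmark state
// Forks, warmup iterations, and measurement iterations stay at the JMH defaults (5 each).
public class LoadBenchmarkSketch {

    // The four loads: approximate probability that an invocation reads rather than writes.
    @Param({"0.99", "0.90", "0.75", "0.50"})
    double readProbability;

    ConcurrentHashMap<String, String> courses = new ConcurrentHashMap<>();

    @Setup
    public void populate() {
        // Stand-in for loading the SUNY Oswego course list from the .txt file.
        for (int i = 0; i < 500; i++) {
            courses.put("COURSE-" + i, "Course " + i);
        }
    }

    @Benchmark
    public void readOrWrite(Blackhole bh) {
        if (ThreadLocalRandom.current().nextDouble() < readProbability) {
            // Read six random courses to build a schedule.
            for (int i = 0; i < 6; i++) {
                bh.consume(courses.get("COURSE-" + ThreadLocalRandom.current().nextInt(500)));
            }
        } else {
            // Write a single new course.
            courses.put("COURSE-" + ThreadLocalRandom.current().nextInt(500, 1000), "New Course");
        }
    }
}
```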

Load 1: ≈ 99% Reads

[Chart: CSC375Load1 — Load 1 benchmarking results]

Load 2: ≈ 90% Reads

[Chart: CSC375Load2 — Load 2 benchmarking results]

Load 3: ≈ 75% Reads

[Chart: CSC375Load3 — Load 3 benchmarking results]

Load 4: ≈ 50% Reads

[Chart: CSC375Load4 — Load 4 benchmarking results]

Conclusion

The recorded data clearly shows that the ConcurrentHashMap from the Java library completes far more operations in a given timeframe than the HashMap with the custom-made ReadWriteLock. On Rho, the ConcurrentHashMap scored about 10 to 13 times higher than the HashMap with the ReadWriteLock in every test. On Gee, the ConcurrentHashMap still came out on top, but only by a factor of about 4.5. Finally, my MacBook Pro favored the ConcurrentHashMap by about 39 to 47 percent for the first three loads, yet, surprisingly, the 50/50 split between reads and writes had the ReadWriteLock method winning by about 13 percent in operations per microsecond. The MacBook Pro benchmarks should be taken with a grain of salt anyway, given its limited resources compared to the Rho and Gee servers, and that single small win for the ReadWriteLock does not come close to offsetting the significantly higher throughput of the ConcurrentHashMap in the other 11 benchmarks. When a fast and efficient parallel hash map is needed, programmers should usually reach for Java's ConcurrentHashMap rather than a sequential HashMap with a custom-made locking scheme, even if that means passing up the chance to implement an important topic in parallel computing themselves.
