Mako

Mako: A Low-Pause, High-Throughput Evacuating Collector for Memory-Disaggregated Datacenters (PLDI 2022)

Summary of Mako

Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters. Key to Mako’s success is its ability to offload both tracing and evacuation onto memory servers and run these tasks concurrently while the CPU server executes mutator threads. Mako achieves a 90th-percentile pause time of ~12 ms and outperforms Shenandoah in throughput by an average of 3×.
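The sketch below illustrates the concurrency structure this summary describes: GC tasks (tracing and evacuation) are submitted to "memory-server" threads while the CPU server keeps running mutator work. It is only a minimal, hypothetical illustration of the idea; all names are made up and none of this reflects Mako's actual implementation.

```java
// Minimal sketch of concurrent, offloaded GC work (illustrative only).
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffloadedGcSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for memory-server compute that runs GC tasks remotely.
        ExecutorService memoryServer = Executors.newFixedThreadPool(2);

        // Tracing and evacuation are offloaded to the memory server.
        Future<?> tracing = memoryServer.submit(
                () -> System.out.println("memory server: tracing live objects"));
        Future<?> evacuation = memoryServer.submit(
                () -> System.out.println("memory server: evacuating live objects"));

        // Meanwhile, the CPU server continues executing mutator threads.
        for (int i = 0; i < 3; i++) {
            System.out.println("CPU server: mutator iteration " + i);
        }

        tracing.get();
        evacuation.get();
        memoryServer.shutdown();
    }
}
```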

Team

This project was done in collaboration with Professor Harry Xu's group at UCLA. Please visit the Mako project page at UCLA Systems. If you encounter any problems, please open an issue or feel free to contact us:

Haoran Ma: PhD student, haoranma@ucla.edu.

Shi Liu: PhD student, shiliu@g.ucla.edu.

How to cite

Please refer to our PLDI'22 paper, Mako: A Low-Pause, High-Throughput Evacuating Collector for Memory-Disaggregated Datacenters, for more details.

Bibtex

@inproceedings{ma2022mako,
  title={Mako: a low-pause, high-throughput evacuating collector for memory-disaggregated datacenters},
  author={Ma, Haoran and Liu, Shi and Wang, Chenxi and Qiao, Yifan and Bond, Michael D and Blackburn, Stephen M and Kim, Miryung and Xu, Guoqing Harry},
  booktitle={Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation},
  pages={92--107},
  year={2022}
}
