doi:10.3850/978-981-08-7304-2_1393


Performance Engineering, Testing and System Sizing of an Automatic Code Evaluation Framework


Mohit Nanda and Amol Khanapurkar

Performance Engineering Research Center, Tata Consultancy Services Ltd., Mumbai, India.

ABSTRACT

Many organizations use “Code Jams” to promote competitive programming and to recognize and reward the best technical talent. To drive one such initiative in our organisation, we customized Mooshak, a framework that automates the submission, compilation and evaluation of code. The system is used in online programming contests to automate the evaluation of submitted solutions. Spikes in code submissions during the last 15 minutes of each contest led to instability, degraded performance and loss of reliability in the system.

It was clear that the system had performance problems caused by congestion in the last 15 minutes of the contest. No literature or experience papers were available to guide the design of a good solution for such a framework.

Against this background, our approach of limiting concurrency to avoid congestion was initially only a rational decision; it was then systematically justified by breaking the problem into pieces: compiler calibration, concurrency calibration and load testing. The core idea of capping concurrent compilations is sketched below.
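The following is a minimal, illustrative sketch of the concurrency-limiting idea only: a bounded worker pool that caps concurrent compilations at a calibrated limit and queues the rest. The limit value, the queue, and compile_and_evaluate() are hypothetical placeholders and not the actual Mooshak implementation.

```python
import queue
import subprocess
import threading

MAX_CONCURRENT_COMPILES = 8     # assumed value obtained from concurrency calibration
submissions = queue.Queue()     # holds paths to submitted source files

def compile_and_evaluate(source_path):
    # Placeholder: compile the submission; evaluation against test cases would follow.
    subprocess.run(["gcc", source_path, "-o", source_path + ".out"], check=False)

def worker():
    while True:
        source_path = submissions.get()
        try:
            compile_and_evaluate(source_path)
        finally:
            submissions.task_done()

# Start exactly MAX_CONCURRENT_COMPILES workers; excess submissions wait in the
# queue instead of congesting the CPU during end-of-contest submission spikes.
for _ in range(MAX_CONCURRENT_COMPILES):
    threading.Thread(target=worker, daemon=True).start()
```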

A systematic approach to performance and load testing of hundreds of concurrent compilations led us to characterize the compilers and change the architecture. We achieved a 50x improvement in response times at 40% lower utilization, leading to higher acceptability. Had the solution not been acceptable, we would have incurred 3x the software and hardware costs. We not only avoided the extra cost but also created headroom for additional workload.
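As an illustration of what compiler calibration under load might look like, the sketch below measures average compile time at increasing concurrency levels to locate the knee of the curve. The sample file name and the gcc invocation are assumptions for illustration, not details from the paper.

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def compile_once(source_path="sample.c"):
    # Time a single compilation of an assumed sample source file.
    start = time.time()
    subprocess.run(["gcc", source_path, "-o", "/dev/null"], check=False)
    return time.time() - start

# Sweep concurrency levels and report average compile time at each level.
for level in (1, 2, 4, 8, 16, 32):
    with ThreadPoolExecutor(max_workers=level) as pool:
        times = list(pool.map(lambda _: compile_once(), range(level)))
    print(f"concurrency={level:3d}  avg compile time={sum(times)/len(times):.2f}s")
```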


