Running 16,000 Microsoft Windows Applications On A Supercomputer In 5 Minutes

  • MIT researchers develop a technique that launches 16,384 Windows applications simultaneously on a Linux supercomputer within 5 minutes. 
  • To do this, they used MIT Lincoln Laboratory's LLMapReduce technology along with the Wine Windows compatibility layer.

As the pace of Moore’s Law slows, parallel processing has become necessary to increase application performance. Neural network, physical simulation, and data analysis applications are evolving at a rapid rate, and they rely on the power of parallel processing to reach their performance goals.

Running such data-intensive applications often requires software built for particular operating systems, such as Microsoft Windows, which has a long history of supporting parallel computing.

However, the world’s top 500 supercomputers run Linux, and they are capable of executing interactive applications on thousands of cores in seconds. Usually, virtual machines (VMs) are used to run Windows programs on Linux computers, which imposes substantial overhead on the applications.

Executing multiple VMs on a supercomputer can take several seconds (sometimes minutes) per virtual machine. Scaling them to thousands of cores on an existing supercomputer raises efficiency and performance problems, making it difficult to run large numbers of Windows applications on a supercomputer simultaneously.

Now, a team of researchers at MIT has come up with a new technique that quickly launches and executes Windows applications on thousands of processors on a modern supercomputer. In particular, they have demonstrated launching 16,000 Windows applications within 5 minutes (each application handled by one core).

How Does It Work?

To rapidly launch Windows applications on a Linux supercomputer, the researchers used MIT Lincoln Laboratory's LLMapReduce (multi-level map-reduce) technology along with the Wine Windows compatibility layer. For high-performance computing, multilevel scheduling requires only slight changes to the analysis code to process numerous datasets with a single job launch.

The MIT SuperCloud software stack comes with an easy-to-use interface that exposes LLMapReduce to efficiently run thousands of tasks on a cluster, reducing complex parallel scheduling, dependency resolution, and task submission to a single line of code, while increasing task performance by minimizing per-task latency.

Since LLMapReduce isn’t tied to any specific language, it works with any executable, which makes it ideal for launching numerous Wine instances simultaneously.
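The idea of treating each Wine instance as an ordinary executable can be sketched as follows. This is a minimal illustration, not the researchers' actual LLMapReduce invocation: `app.exe` is a placeholder Windows program, and `echo` stands in for running `wine` itself.

```shell
# Sketch: the per-task command lines a launcher would issue for N Wine
# instances, one per core. Each task gets an isolated Wine prefix so the
# instances do not share state. "app.exe" is a placeholder, and echo
# stands in for actually invoking wine.
N=4
for i in $(seq 0 $((N - 1))); do
  echo "WINEPREFIX=/tmp/wp-$i wine app.exe"
done
```

Because each command line is independent of the others, a map-reduce launcher can hand them out to cores in any order.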

Components of the SLURM scheduler | Courtesy of researchers

They used an open-source job scheduler called Slurm Workload Manager to quickly identify resources, allocate them to tasks, schedule task execution on the allocated resources, launch the tasks, monitor them while running, and perform epilog cleanup when each task terminates.
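A conventional way to express this kind of many-instance launch to Slurm is a job-array batch script, sketched below under stated assumptions: the job name, array size, and `app.exe` are placeholders (site limits on array size may apply), and this is a generic Slurm pattern rather than the paper's exact submission script.

```shell
#!/bin/bash
#SBATCH --job-name=wine-array     # hypothetical job name
#SBATCH --array=0-16383           # 16,384 tasks, one per core
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# Give each array task an isolated Wine prefix so instances do not
# share state; "app.exe" is a placeholder Windows executable.
export WINEPREFIX="/tmp/wineprefix-${SLURM_ARRAY_TASK_ID}"
exec wine app.exe
```

Slurm then handles the allocate/schedule/launch/monitor/epilog lifecycle described above for every task in the array.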

Reference: arXiv:1808.04345


Launch times & launch rates of Windows instances

The researchers implemented their system on a supercomputer containing 648 compute nodes (each with 64 Xeon Phi processing cores) for a total of 41,472 cores. They executed a single Windows instance on 1, 2, 4, 8, …, 256 nodes, followed by 2, 4, 8, …, 64 instances on each of 256 nodes, giving a total of 16,384 concurrent instances.
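The instance counts in that doubling sweep can be checked with a few lines of arithmetic:

```shell
# Reproduce the experiment's concurrency levels: after the single-instance
# node sweep, 2, 4, ..., 64 instances run on each of 256 nodes.
nodes=256
for per_node in 2 4 8 16 32 64; do
  echo "$nodes nodes x $per_node instances/node = $((nodes * per_node))"
done
# The final step, 256 x 64, gives the 16,384 concurrent instances reported.
```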


All 16,384 instances launched in nearly 5 minutes, enabling a wide range of Windows applications to run on supercomputers. The team plans to extend this capability to larger numbers of processors executing more diverse programs.

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].
