Parallelized Training of Deep NN - Comparison of Current Concepts and Frameworks


Accepted

[PDF] Final version (728 kB), Oct 25, 2018

Horizontal scalability is a major facilitator of recent advances in deep learning. Common deep learning frameworks offer different approaches to scaling the training process. We operationalize the execution of distributed training using Kubernetes and Helm templates, laying the groundwork for a systematic comparison of deep learning frameworks. For two of them, TensorFlow and MXNet, we examine their properties with regard to throughput, scalability, and practical ease of use.
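To illustrate the kind of setup the abstract describes, the following Python sketch (not taken from the paper) shows how a worker pod launched from Kubernetes/Helm templates might join a distributed TensorFlow job via the standard TF_CONFIG environment variable. The cluster spec, model, and dataset here are placeholder assumptions, not the paper's benchmark configuration.

```python
# Minimal sketch of one worker in a distributed TensorFlow job.
# Assumption: the Helm chart injects TF_CONFIG into each pod, e.g.:
# {"cluster": {"worker": ["worker-0:2222", "worker-1:2222"]},
#  "task": {"type": "worker", "index": 0}}
import json
import os

import tensorflow as tf

tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
print("Joining cluster:", tf_config.get("cluster"))

# Synchronous data-parallel training across all workers listed in TF_CONFIG
# (falls back to single-worker behavior if TF_CONFIG is unset).
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Placeholder model; the paper benchmarks real networks instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Placeholder data; in a real multi-worker job each worker would
# read its shard of the training set.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, epochs=1, batch_size=64)
```

In a Kubernetes deployment, the Helm templates would render one such worker per replica and fill in TF_CONFIG with that pod's cluster role and index; the training script itself stays identical across workers.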

S. Jäger, S. Igel, C. Zirpins, H. Zorn
