9. Computational Performance
In deep learning, datasets are usually large and model computation is complex, so computational performance matters a great deal. This chapter focuses on the key factors that affect it: imperative programming, symbolic programming, asynchronous programming, automatic parallel computation, and multi-GPU computation. By studying this chapter, you should be able to further improve the computational performance of the models implemented in earlier chapters, for example by reducing training time without sacrificing accuracy.
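To make the first two factors concrete, here is a minimal sketch of the difference between imperative and symbolic programming in plain Python. In the imperative style each statement executes immediately; in the symbolic style the whole program is first constructed (here simply as a source string, a stand-in for a computation graph) and only then compiled and run, which gives a framework room to optimize before execution. The function names are illustrative, not from any particular library.

```python
# Imperative programming: every statement executes as soon as it is reached.
def add(a, b):
    return a + b

def imperative_sum():
    e = add(1, 2)  # computed immediately
    f = add(3, 4)  # computed immediately
    return add(e, f)

# Symbolic programming (sketch): build the full program first, then
# compile and execute it in one step. Real frameworks build a graph
# instead of a string, which lets them fuse and reorder operations.
def build_program():
    return (
        "def compiled_sum():\n"
        "    return (1 + 2) + (3 + 4)\n"
    )

scope = {}
exec(build_program(), scope)

print(imperative_sum())         # 10
print(scope["compiled_sum"]())  # 10
```

Both styles produce the same result; the difference is *when* the computation is specified versus executed, which is what the rest of the chapter exploits for performance.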
- 9.1. Automatic Parallelism
- 9.2. Hardware
- 9.3. Concise Implementation for Multiple GPUs
- 9.4. Parameter Servers