Run this notebook online:\ |Binder| or Colab: |Colab|

.. |Binder| image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/deepjavalibrary/d2l-java/master?filepath=chapter_deep-learning-computation/read-write.ipynb
.. |Colab| image:: https://colab.research.google.com/assets/colab-badge.svg
   :target: https://colab.research.google.com/github/deepjavalibrary/d2l-java/blob/colab/chapter_deep-learning-computation/read-write.ipynb

File I/O
========

So far we have discussed how to process data and how to build, train, and
test deep learning models. However, at some point we will hopefully be
happy enough with the learned models that we will want to save the results
for later use in various contexts (perhaps even to make predictions in
deployment). Additionally, when running a long training process, the best
practice is to periodically save intermediate results (checkpointing) to
ensure that we do not lose several days' worth of computation if we trip
over the power cord of our server. Thus it is time we learned how to load
and store both individual weight vectors and entire models. This section
addresses both issues.

Loading and Saving Tensors
--------------------------

For individual tensors, we can convert an ``NDArray`` into a ``byte[]`` by
calling its ``encode()`` function. We can convert it back into an
``NDArray`` by calling the static ``NDArray.decode()`` function, passing in
an ``NDManager`` (to manage the created ``NDArray``) and the ``byte[]``
(the encoded tensor). We can then use ``FileOutputStream`` and
``FileInputStream`` to write these bytes to and read them from files,
respectively.

.. code:: java

    %load ../utils/djl-imports

.. code:: java

    NDManager manager = NDManager.newBaseManager();

    NDArray x = manager.arange(4);
    try (FileOutputStream fos = new FileOutputStream("x-file")) {
        fos.write(x.encode());
    }
    x

.. parsed-literal::
    :class: output

    ND: (4) gpu(0) int32
    [ 0, 1, 2, 3]

We can now read this data from the stored file back into memory.

.. code:: java

    NDArray x2;
    try (FileInputStream fis = new FileInputStream("x-file")) {
        // We use the `Utils` method `toByteArray()` to read
        // from a `FileInputStream` and return it as a `byte[]`.
        x2 = NDArray.decode(manager, Utils.toByteArray(fis));
    }
    x2

.. parsed-literal::
    :class: output

    ND: (4) gpu(0) int32
    [ 0, 1, 2, 3]

We can also store an ``NDList`` in a file and load it back:

.. code:: java

    NDList list = new NDList(x, x2);
    try (FileOutputStream fos = new FileOutputStream("x-file")) {
        fos.write(list.encode());
    }
    try (FileInputStream fis = new FileInputStream("x-file")) {
        list = NDList.decode(manager, Utils.toByteArray(fis));
    }
    list

.. parsed-literal::
    :class: output

    NDList size: 2
    0 : (4) int32
    1 : (4) int32
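When saving several related tensors, it can help to give each one a name so
that it can be identified after loading, rather than relying on its position
in the list. The cell below is a minimal sketch of this idea; it assumes
that array names survive the encode/decode round trip (recent DJL versions
store the name as part of the ``NDArray`` serialization format, but this is
worth verifying in your version), and the file name ``named-tensors`` is
arbitrary.

.. code:: java

    // Name each array so it can be identified after loading,
    // similar to saving a dictionary that maps strings to tensors.
    NDArray weights = manager.arange(4);
    NDArray bias = manager.zeros(new Shape(4));
    weights.setName("weights");
    bias.setName("bias");

    NDList named = new NDList(weights, bias);
    try (FileOutputStream fos = new FileOutputStream("named-tensors")) {
        fos.write(named.encode());
    }

    NDList restored;
    try (FileInputStream fis = new FileInputStream("named-tensors")) {
        restored = NDList.decode(manager, Utils.toByteArray(fis));
    }

    // Print the name and shape of each restored array to check the round trip
    for (NDArray array : restored) {
        System.out.println(array.getName() + ": " + array.getShape());
    }

If the names are preserved, they give a more robust way to match arrays than
their index, which matters once the saved list grows beyond a couple of
entries.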
Model Parameters
----------------

Saving individual weight vectors (or other tensors) is useful, but it gets
very tedious if we want to save (and later load) an entire model. After
all, we might have hundreds of parameter groups sprinkled throughout. For
this reason the framework provides built-in functionality to load and save
entire networks. An important detail to note is that this saves model
*parameters* and not the entire model. For example, if we have a 3-layer
MLP, we need to specify the *architecture* separately. The reason for this
is that the models themselves can contain arbitrary code, hence they cannot
be serialized as naturally. Thus, in order to reinstate a model, we need to
generate the architecture in code and then load the parameters from disk.
Let us start with our familiar MLP.

.. code:: java

    public SequentialBlock createMLP() {
        SequentialBlock mlp = new SequentialBlock();
        mlp.add(Linear.builder().setUnits(256).build());
        mlp.add(Activation.reluBlock());
        mlp.add(Linear.builder().setUnits(10).build());
        return mlp;
    }

    SequentialBlock original = createMLP();

    NDArray x = manager.randomUniform(0, 1, new Shape(2, 5));

    original.initialize(manager, DataType.FLOAT32, x.getShape());

    ParameterStore ps = new ParameterStore(manager, false);
    NDArray y = original.forward(ps, new NDList(x), false).singletonOrThrow();
    y

.. parsed-literal::
    :class: output

    ND: (2, 10) gpu(0) float32
    [[-1.0524,  0.4173,  0.9115, -0.9705, -1.5318, -0.6548,  0.0033, -0.6443, -0.5181, -0.0511],
     [-0.4358,  0.0545,  0.5989, -0.5398, -1.0074, -0.3614, -0.1044, -0.4891, -0.2387,  0.1797],
    ]

Next, we store the parameters of the model in a file with the name
``mlp.param``.

.. code:: java

    // Save file
    File mlpParamFile = new File("mlp.param");
    DataOutputStream os = new DataOutputStream(Files.newOutputStream(mlpParamFile.toPath()));
    original.saveParameters(os);

To recover the model, we instantiate a clone of the original MLP model.
Instead of randomly initializing the model parameters, we read the
parameters stored in the file directly.

.. code:: java

    // Create duplicate of network architecture
    SequentialBlock clone = createMLP();
    // Load Parameters
    clone.loadParameters(manager, new DataInputStream(Files.newInputStream(mlpParamFile.toPath())));

Now let us directly compare the parameters of both models. We get each
``Parameter``'s array at the same index in both ``PairList``\ s and compare
the two. Note that we cannot compare the ``Parameter``\ s directly: when a
``Parameter`` is loaded, a new unique id is generated for it. Instead, we
check that the ``NDArray``\ s are equal. They should be identical if the
parameters were loaded properly.

.. code:: java

    // Original model's parameters
    PairList<String, Parameter> originalParams = original.getParameters();
    // Loaded model's parameters
    PairList<String, Parameter> loadedParams = clone.getParameters();

    for (int i = 0; i < originalParams.size(); i++) {
        if (originalParams.valueAt(i).getArray().equals(loadedParams.valueAt(i).getArray())) {
            System.out.printf("True ");
        } else {
            System.out.printf("False ");
        }
    }

.. parsed-literal::
    :class: output

    True True True True

Since both instances have the same model parameters, the computation
result for the same input ``x`` should be the same. Let us verify this.

.. code:: java

    NDArray yClone = clone.forward(ps, new NDList(x), false).singletonOrThrow();

    y.equals(yClone);

.. parsed-literal::
    :class: output

    true

Summary
-------

- The ``encode`` and ``decode`` functions, together with ``FileOutputStream``
  and ``FileInputStream``, can be used to perform file I/O for tensor
  objects.
- Saving the architecture has to be done in code rather than in parameters.

Exercises
---------

1. Even if there is no need to deploy trained models to a different device,
   what are the practical benefits of storing model parameters?
2. Assume that we want to reuse only parts of a network to be incorporated
   into a network of a *different* architecture. How would you go about
   using, say, the first two layers from a previous network in a new
   network?
3. How would you go about saving the network architecture and parameters?
   What restrictions would you impose on the architecture?
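As a hint for the first exercise, the checkpointing practice mentioned at
the beginning of this section is one practical reason to store parameters
even when no deployment is planned. The cell below is only a rough sketch
under assumed names: ``numEpochs`` and the commented-out ``trainEpoch``
call are hypothetical placeholders for a real training loop; only the
saving logic mirrors what we did above.

.. code:: java

    int numEpochs = 10;  // hypothetical number of training epochs
    for (int epoch = 1; epoch <= numEpochs; epoch++) {
        // trainEpoch(original, ...);  // placeholder: run one epoch of training here
        if (epoch % 5 == 0) {
            // Periodically write the current parameters so a long run
            // can be resumed from the latest checkpoint after a crash.
            File checkpoint = new File(String.format("mlp-epoch-%d.param", epoch));
            try (DataOutputStream dos =
                    new DataOutputStream(Files.newOutputStream(checkpoint.toPath()))) {
                original.saveParameters(dos);
            }
        }
    }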