Machine learning models are not deterministic: if you train a neural network twice on the same data with the same hyperparameters, you won't get the same model. The outputs of the two models on the same test data may look very similar, but they won't be identical.
This difference has multiple causes. A very common one lies in how the models are trained in the first place. Optimizers, the algorithms that minimize the cost function (in other words, that carry out the training), introduce randomness when they estimate gradients before updating the weights: each gradient is computed on a mini-batch sampled at random from the training set, so the order in which the model sees the data changes from run to run.
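To make this concrete, here is a minimal sketch (my own toy example, not tied to any particular framework) of mini-batch SGD on a linear model in NumPy. The data is fixed across both runs; only the batch shuffling differs, yet the two trained weight vectors come out close but not bit-identical:

```python
import numpy as np

def train(X, y, rng, epochs=20, batch_size=8, lr=0.1):
    """Fit a linear model with mini-batch SGD; `rng` drives the batch sampling."""
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)  # random shuffle: the source of run-to-run variation
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

# Fixed synthetic data: both runs see exactly the same inputs and targets.
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + data_rng.normal(scale=0.1, size=100)

# Two trainings with different (unseeded) shuffle orders yield different weights.
w1 = train(X, y, np.random.default_rng())
w2 = train(X, y, np.random.default_rng())
print(w1 - w2)  # small but nonzero differences
```

Both runs land near the true coefficients `[1.5, -2.0, 0.5]`, but the exact floating-point values diverge because the gradients were averaged over different mini-batches in a different order.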
🔴 Even though randomness is an intrinsic part of neural networks and many other machine learning models, it's an undesirable quality in some contexts, especially when reproducibility matters.
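When reproducibility matters, the standard remedy is to seed every source of randomness (most frameworks expose this, e.g. `np.random.seed` in NumPy or `torch.manual_seed` in PyTorch). A minimal NumPy sketch of the same idea, again with a toy linear model of my own construction: fixing the seed pins the mini-batch order, so two runs produce bit-identical weights:

```python
import numpy as np

def train(X, y, seed, epochs=20, batch_size=8, lr=0.1):
    """Mini-batch SGD where the batch sampler is seeded for reproducibility."""
    rng = np.random.default_rng(seed)  # seeding the sampler pins the batch order
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            w -= lr * 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5])

# Same seed: identical shuffles, identical float operations, identical weights.
w_a = train(X, y, seed=42)
w_b = train(X, y, seed=42)
print(np.array_equal(w_a, w_b))  # True
```

Note that in real deep learning pipelines seeding the sampler is necessary but not always sufficient: parallel GPU kernels can still introduce non-determinism, which frameworks address with separate flags.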