Posts

Showing posts from April, 2017

TensorFlow: saving/restoring and mixing multiple models

Via: https://blog.metaflow.fr/tensorflow-saving-restoring-and-mixing-multiple-models-c4c94d5d7125

Before going any further, make sure you read the very small primer I made on TensorFlow here. Why start with this information? Because it is tremendously important to understand what can be saved at the different levels of your code, to avoid messing around cluelessly…

How to actually save and load something

The Saver object

Any interaction with your filesystem to keep persistent data across different sessions can be handled with the Saver object. The Session constructor, meanwhile, allows you to control three things:

The target: this is used in the case of a distributed architecture. You can specify which TF server, or "target", you want the computation to run on.

The graph: the graph you want the Session to handle. The tricky thing here for beginners is the fact that there is always a default Graph in TF where all operations are set by default, so…
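Not from the post itself, but the save/restore cycle it describes can be sketched as follows. This is a minimal example assuming the TF1-style API (reached through `tf.compat.v1` on modern installs); the variable name `w` and the checkpoint path are illustrative:

```python
# Minimal sketch: persisting and restoring variables with tf.train.Saver.
import os
import tempfile
import tensorflow.compat.v1 as tf  # TF1-style API, also available on TF2

tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

w = tf.get_variable("w", initializer=tf.constant(3.0))
saver = tf.train.Saver()  # by default, tracks every variable in the graph

with tf.Session() as sess:  # default target, graph and config
    sess.run(tf.global_variables_initializer())
    saver.save(sess, ckpt)  # writes the checkpoint files next to `ckpt`

tf.reset_default_graph()  # start over with an empty default graph
w = tf.get_variable("w", shape=[], dtype=tf.float32)  # recreate matching vars
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, ckpt)  # no initializer needed after a restore
    restored = sess.run(w)  # 3.0
```

Note that after `reset_default_graph()` the variables must be recreated with matching names and shapes before `restore` can map checkpoint values onto them.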

TensorFlow, Save and Load a model in a serious way, from different files

via: https://kevincodeidea.wordpress.com/2016/08/02/tensorflow-save-and-load-a-model-in-a-serious-way-from-different-files/

It has been a long time since my last post. Recently I have been working in a group developing a deep, online, traceable, better-than-current-methods neural network. After carefully comparing Theano and TensorFlow, we decided to use the latter. The main reason is actually not technical: we simply "predict" that TensorFlow will have a brighter future and will be better maintained. Back to the topic. Since it is an online algorithm, one important requirement is being able to save the model (not just some script-like operations, but also the metadata, the trained weights and the whole structure) to disk, and to load the whole thing back without a problem. The way I construct a model can be simplified as this: a basic model class contains all the TensorFlow variables (in this application, the weights for each layer), plus several training functions that construct a graph …
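The requirement above, saving the whole structure plus the trained weights and loading both back "from different files", can be sketched roughly as below. This is not the post's own code; it assumes the TF1-style API (via `tf.compat.v1`), and the tensor names `x`/`y` are illustrative:

```python
# Sketch: save graph structure (.meta) + weights, then rebuild both
# in a fresh graph, as if from a separate script.
import os
import tempfile
import tensorflow.compat.v1 as tf  # TF1-style API, also available on TF2

tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# --- "training" side: build graph, initialize, save everything ---
x = tf.placeholder(tf.float32, name="x")
w = tf.get_variable("w", initializer=tf.constant(2.0))
y = tf.multiply(w, x, name="y")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, ckpt)  # writes weights AND a .meta graph file

# --- "loading" side: no model-building code needed at all ---
tf.reset_default_graph()
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt + ".meta")  # rebuilds structure
    saver.restore(sess, ckpt)                           # reloads weights
    g = tf.get_default_graph()
    x = g.get_tensor_by_name("x:0")
    y = g.get_tensor_by_name("y:0")
    out = sess.run(y, feed_dict={x: 5.0})  # 2.0 * 5.0 = 10.0
```

The design point is that `import_meta_graph` removes the need to re-run the model-construction code, so the loading script only needs the checkpoint files and the tensor names.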

Difference between tf.placeholder and tf.Variable

Via: http://stackoverflow.com/questions/36693740/whats-the-difference-between-tf-placeholder-and-tf-variable

In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.

    weights = tf.Variable(
        tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')

tf.placeholder is used to feed actual training examples.

    images_placeholder = tf.placeholder(tf.float32,
                                        shape=(batch_size, IMAGE_PIXELS))
    labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))

This is how you feed the training examples during training:

    for step in xrange(FLAGS.max_steps):
        feed_dict = {
            images_placeholder: images_feed,
            labels_placeholder: labels_feed,
        }
        _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

Your tf.Variables will be trained (modified) as the result of this…
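The distinction can be seen end to end in a toy training loop (a sketch, not from the answer itself, again assuming the TF1-style API): the optimizer repeatedly updates the Variable, while the placeholder only carries the value fed in at each step.

```python
# Toy loop: tf.Variable is trained, tf.placeholder is only fed.
import tensorflow.compat.v1 as tf  # TF1-style API, also available on TF2

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=())  # fed per step, never trained
w = tf.Variable(0.0, name="w")            # updated by the optimizer
loss = tf.square(w - x)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op, feed_dict={x: 3.0})  # feed a "training example"
    final_w = sess.run(w)  # w has converged toward 3.0; x was never modified
```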