Hi guys,

I've been running a slightly modified version of the code on my (admittedly slow) MacBook Air 2013.

Now I am wondering: is it normal for the declaration of the training ops (tf.train.AdamOptimizer, tf.gradients, tf.clip_by_global_norm, tf.train.AdamOptimizer.apply_gradients) to take a combined 11 minutes (or anything in that order of magnitude)? I've tried downsizing layer_size to 16 as well, with the same effect. This affects the development workflow.
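To narrow down where the time goes, one option is to time each stage of the graph construction separately. The sketch below uses a placeholder `build_train_ops` function (hypothetical, standing in for the real TensorFlow graph-building code shown in the comments) so the timing scaffold itself is self-contained:

```python
import time

def build_train_ops():
    # Hypothetical stand-in for the real graph construction; with
    # TensorFlow 1.x this section would look roughly like:
    #   optimizer = tf.train.AdamOptimizer(learning_rate)
    #   grads = tf.gradients(loss, params)
    #   clipped, _ = tf.clip_by_global_norm(grads, max_grad_norm)
    #   train_op = optimizer.apply_gradients(zip(clipped, params))
    # Timing each of these lines individually shows which op
    # declaration dominates the 11 minutes.
    pass

start = time.perf_counter()
build_train_ops()
elapsed = time.perf_counter() - start
print("train-op construction took %.2f s" % elapsed)
```

If `tf.gradients` turns out to dominate, that would point at graph size (e.g. an unrolled RNN with many time steps) rather than the optimizer itself.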
I'd be thankful for any hints, because testing other parts of the code is very time-consuming while this takes so long.
Best,
Anthony
EDIT:
Could this be caused by the Mac needing to allocate virtual memory? I only have 4 GB of RAM, and the model consumes more than that in my current setup.