Minimizing Loss with Optimization With Tensorflow | Machine Learning Recipe #11 | all in one code
Author: AIOC all in one code
Uploaded: 2020-05-22
Views: 323
link: http://allinonecode.pythonanywhere.com/
tensorflow full series : https://www.youtube.com/watch?v=BOgh6...
numpy full series : https://www.youtube.com/watch?v=HoGGD...
sk-learn full series : https://www.youtube.com/watch?v=CX7V9...
python full series : https://www.youtube.com/watch?v=7PN0O...
After you’ve formed an expression for the loss, the next step is to minimize the
loss by updating the model’s variables. This process is called optimization, and
TensorFlow supports a variety of algorithms for this purpose. Choosing the right
algorithm is critically important when coding machine learning applications.
Each optimization method is represented by a class in the tf.train package. Four
popular optimization classes are the GradientDescentOptimizer,
MomentumOptimizer, AdagradOptimizer, and AdamOptimizer classes. The following
sections look at each of these classes, starting with the Optimizer class, which is
the base class of TensorFlow's optimization classes.
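As a quick illustration, the following sketch constructs one instance of each of the four classes; the learning rates and the momentum value are arbitrary placeholders, not recommended settings:

    import tensorflow as tf

    # Each optimizer is constructed from its class in the tf.train package.
    # The hyperparameter values here are illustrative only.
    gd_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    momentum_optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    adagrad_optimizer = tf.train.AdagradOptimizer(learning_rate=0.01)
    adam_optimizer = tf.train.AdamOptimizer(learning_rate=0.001)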
The Optimizer class
You can’t directly access the Optimizer class in code; applications need to instantiate one of its subclasses instead. But the Optimizer class is crucial because it
defines the all-important minimize method:
minimize(loss, global_step=None, var_list=None, gate_gradients=1,
    aggregation_method=None, colocate_gradients_with_ops=False,
    name=None, grad_loss=None)
The only required argument is the first, which identifies the loss. By default,
minimize can access every trainable variable in the graph. An application can
select specific variables for optimization by setting the var_list argument.
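For example, the following sketch restricts optimization to one variable via var_list; the variables w and b and the loss expression are hypothetical stand-ins for a real model:

    import tensorflow as tf

    # Hypothetical model variables; only w will be updated during optimization
    w = tf.Variable(0.0, name='w')
    b = tf.Variable(0.0, name='b')
    loss = tf.square(w * 2.0 + b - 5.0)  # illustrative loss expression

    optimizer = tf.train.GradientDescentOptimizer(0.01)
    # Restrict minimize to the variables listed in var_list
    optimizer_op = optimizer.minimize(loss, var_list=[w])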
minimize returns an operation that can be executed by a session’s run method.
Each execution performs two steps:
1. Compute values that update the variables of interest.
2. Update the variables of interest with the values computed in Step 1.
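These two steps correspond to the Optimizer class's compute_gradients and apply_gradients methods, which minimize combines. A minimal sketch of calling them separately, reusing the learn_rate and loss names from the example below:

    optimizer = tf.train.GradientDescentOptimizer(learn_rate)

    # Step 1: compute (gradient, variable) pairs for the trainable variables
    grads_and_vars = optimizer.compute_gradients(loss)

    # Step 2: apply the computed values to update the variables
    train_op = optimizer.apply_gradients(grads_and_vars)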
Just as you probably won't win 20 Questions with your first question, you probably won't optimize your model with a single call to minimize. Most applications
perform optimization in a loop, and the following code gives an idea of what an
optimization loop looks like:
# Create the optimizer and obtain the minimization operation
optimizer = tf.train.GradientDescentOptimizer(learn_rate)
optimizer_op = optimizer.minimize(loss)

# Execute the minimization operation in a session
with tf.Session() as sess:
    # Variables must be initialized before the training loop runs
    sess.run(tf.global_variables_initializer())
    for step in range(num_steps):
        sess.run(optimizer_op)
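Putting the pieces together, here is a self-contained sketch that fits a single weight to a toy target; the variable, learning rate, and step count are illustrative choices, not part of the original example:

    import tensorflow as tf

    # Illustrative settings
    learn_rate = 0.1
    num_steps = 100

    # One trainable variable and a simple quadratic loss with its minimum at w = 3
    w = tf.Variable(0.0)
    loss = tf.square(w - 3.0)

    # Create the optimizer and obtain the minimization operation
    optimizer = tf.train.GradientDescentOptimizer(learn_rate)
    optimizer_op = optimizer.minimize(loss)

    # Run the optimization loop
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(num_steps):
            sess.run(optimizer_op)
        print('w =', sess.run(w))  # approaches 3.0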
