TensorFlow user manual

Import the TensorFlow framework

After installing the TensorFlow library, import it with the following command:

import tensorflow as tf # tf as alias

TensorFlow programming process

Define a TensorFlow constant

C = tf.constant(N,name='C') # N is the value of a constant

Define TensorFlow variables / tensors

f = tf.Variable(f(x,y),name='f') # It can be used to define variables or simple functions

Initialize variables

Variables of type tf.Variable are not initialized when they are created. Therefore, variables created with tf.Variable or tf.get_variable must be globally initialized before use.

init = tf.global_variables_initializer()
# This only defines the statement init; nothing is executed yet

Execute statements

The statements defined by TensorFlow are static; executing them depends on tf.Session:

with tf.Session() as session:
	session.run(init)

TensorFlow features

Separation of design and execution

[In]

a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)

[Out]

Tensor("Mul:0", shape=(), dtype=int32)

In the above code, printing c does not output the product of a and b, but the attributes of the tensor c: its shape is empty and its data type is int32. This shows that the code so far is only the design step; nothing has actually been computed. The code that executes it is as follows:

[In]

sess = tf.Session()
print(sess.run(c))

[Out]

20

It can be seen that the basic TensorFlow workflow is:

Create variables and operations, initialize the variables, create a session, and use the session to run all operations.
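
Putting the pieces together, here is a minimal end-to-end sketch of that workflow (the variable w and the operation w*w + 1 are invented purely for illustration):

import tensorflow as tf

w = tf.Variable(0.0, name='w')             # create a variable
loss = tf.add(tf.multiply(w, w), 1.0)      # build an operation: w*w + 1

init = tf.global_variables_initializer()   # define the init statement

with tf.Session() as session:              # create a session
    session.run(init)                      # initialize the variable
    print(session.run(loss))               # run the operation -> 1.0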

Placeholders

  • A placeholder only reserves a position; its contents are empty, and you can fill in different things as needed. Like a newly built building: the empty rooms inside are placeholders, and we can put whatever is necessary into them.
  • TensorFlow uses the feed_dict syntax to fill placeholders:
sess = tf.Session()
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()

In the code above, a placeholder named x is created with type tf.int64. When sess.run executes the graph, feed_dict fills 3 into x, so the final output is 2 * 3 = 6.

Computation graph

When we create variables and operations, we are only building a computation graph in TensorFlow, which may contain placeholders. All of this is just design: there are no actual values yet, and nothing has been executed. Only once a session is created can session.run execute the previously designed computation graph. While executing the graph, you can fill its placeholders with content, and for the same graph, different values can be fed into the placeholders on each run. Just like a building: you can stack books in a room, or move the books out and put computers in instead.
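
A small sketch of the "same graph, different contents" idea: the graph below is designed once and then executed twice with different feed_dict values (the multiply-by-two graph is just an illustrative example):

x = tf.placeholder(tf.int64, name='x')
y = 2 * x                                  # design the graph once

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3}))   # 6
    print(sess.run(y, feed_dict={x: 10}))  # 20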

Two ways to create a session

Method 1

sess = tf.Session()
result = sess.run(..., feed_dict = {...})
sess.close()

Method 2

with tf.Session() as sess: 
    result = sess.run(..., feed_dict = {...})

Using TensorFlow to build common functions

Linear function

The following implements the famous linear function Y = WX + b from the field of artificial intelligence, using matrix addition tf.add(x, y) and matrix multiplication tf.matmul(x, y).

import numpy as np

def linear_function():

    np.random.seed(1)

    X = tf.constant(np.random.randn(3, 1), name = "X")
    # Define a constant of shape (3, 1); randn generates random numbers
    W = tf.constant(np.random.randn(4, 3), name = "W")
    b = tf.constant(np.random.randn(4, 1), name = "b")
    Y = tf.add(tf.matmul(W, X), b)  # tf.matmul performs matrix multiplication
    
    # Create a session, and then use run to perform the operations defined above
    sess = tf.Session()
    result = sess.run(Y)
    sess.close()

    return result
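
For reference, calling the function runs the whole design-then-execute cycle; since W is (4, 3), X is (3, 1), and b is (4, 1), the result is necessarily a (4, 1) matrix:

print(linear_function())  # a (4, 1) result for Y = WX + b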

Nonlinear function

The following implements the famous nonlinear function sigmoid. In fact, the TensorFlow framework has already implemented these functions for us; we just need to learn to use them. Next, we will use a placeholder to call TensorFlow's sigmoid function.

def sigmoid(z):
    x = tf.placeholder(tf.float32, name="x") 
    # Define a placeholder of type float32
    
    sigmoid = tf.sigmoid(x) 
    # Call the tensorflow sigmoid function and pass the placeholder as a parameter
    
    with tf.Session() as sess: # Create a session
        # Use run to perform the sigmoid operation defined above.
        # When executing, fill the z passed in from the outside into the placeholder x, which is equivalent to passing z as a parameter into the tensorflow sigmoid function.
        result = sess.run(sigmoid, feed_dict = {x: z}) 
    
    return result
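
As a quick check of the function above (the expected values follow directly from the definition of the sigmoid):

print(sigmoid(0))   # 0.5
print(sigmoid(12))  # ~0.999994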

Cost function

The cost function is another important part of artificial intelligence. As with sigmoid, TensorFlow has already defined the various famous cost functions for us.

We can use a single TensorFlow function to apply sigmoid and compute the cost at one time. The cost function used here is also called the cross_entropy function, cost = -(y * log(a) + (1 - y) * log(1 - a)) with a = sigmoid(z):

tf.nn.sigmoid_cross_entropy_with_logits(logits = ...,  labels = ...)
  • logits parameter: z output from the last layer of neurons
  • labels parameter: real label y
  • The above call passes the output z of the last layer of neurons through the sigmoid activation function and then computes the cross_entropy loss, all in one step
def cost(z_in, y_in):    
    
    z = tf.placeholder(tf.float32, name="z") # Create placeholder
    y = tf.placeholder(tf.float32, name="y")
    
    # Use sigmoid_cross_entropy_with_logits to build the cost operation.
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
    
    # Create session
    sess = tf.Session()
    
    # Feed the incoming z_in and y_in into the placeholders, then run the cost operation
    cost = sess.run(cost, feed_dict={z: z_in, y: y_in})

    sess.close()
    
    return cost
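
A usage sketch for the function above (the logits and labels are made-up illustrative values; the result contains one cross-entropy value per element):

import numpy as np

z_in = np.array([0.2, 0.4, 0.7, 0.9])
y_in = np.array([0.0, 0.0, 1.0, 1.0])
print(cost(z_in, y_in))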

One Hot encoding

  • Application scenario: multi classification problem
  • Conversion principle (figure omitted): a vector of class labels on the left is converted into a matrix of one-hot column vectors on the right.

  • The vectors on the right are called one-hot vectors, because only one element in each vector is 1 and the rest are 0; for example, if the last element is 1, it denotes class 3. Previously, in pure Python, implementing this conversion took several lines of code. With the TensorFlow framework, it takes only one:

    tf.one_hot(indices, depth, axis)
    # indices indicates the real label, depth indicates the number of categories, and axis specifies the direction of the output vector
    
  • Example:

    def one_hot_matrix(labels, C_in):
        """
        labels Is the real label y Vector;
        C_in Is the number of categories
        """
      
        # Create a tensorflow constant named C and set its value to C_in
        C = tf.constant(C_in, name='C')
        
        # Use the one_hot function to build the transformation operation, naming it one_hot_matrix.
        one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0)
        
        sess = tf.Session()
        
        # Execute one_hot_matrix operation
        one_hot = sess.run(one_hot_matrix)
      
        sess.close()
        
        return one_hot
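
    For instance, with six illustrative labels and four classes, axis=0 yields a (4, 6) matrix whose columns are one-hot vectors:

    print(one_hot_matrix([1, 2, 3, 0, 2, 1], 4))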
    

Initialize 0 and 1

  • tf.ones() and tf.zeros() are two common TensorFlow functions. When shape information is passed into them, they return an array filled with 1s or 0s, as in the example below.

    def ones(shape):
        
        # Pass the shape information into tf.ones
        ones = tf.ones(shape)
        
        sess = tf.Session()
        
        # Execute ones operation
        ones = sess.run(ones)
        
        sess.close()
        
        return ones
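
    For example (the shape argument is an ordinary tuple or list):

    print(ones((3,)))    # [1. 1. 1.]
    print(ones([2, 2]))  # a (2, 2) array of ones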
    

Random initialization

  • TensorFlow's built-in xavier_initializer function performs random initialization of the weights W:
W1 = tf.get_variable("W1", [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed=1))
  • TensorFlow's built-in zeros_initializer function initializes the bias b1 to 0:
b1 = tf.get_variable("b1", [25, 1], initializer = tf.zeros_initializer())

Reset the global default graph

Clears the default graph stack and resets the global default graph. Call it when rebuilding a model (for example, before re-running construction code), so that old nodes do not keep accumulating in the default graph.

tf.reset_default_graph()

Matrix transpose

labels = tf.transpose(Y)

Calculate the mean value of the tensor

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

Back propagation optimizer

  • Standard gradient descent back propagation
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
  • Adam optimized back propagation
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
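
Neither line trains anything on its own: the optimizer node must be run inside a session, once per gradient step. Below is a minimal self-contained sketch on a toy one-parameter problem (invented for illustration; it fits w so that w * 3 approaches 6):

x = tf.placeholder(tf.float32, name='x')
y = tf.placeholder(tf.float32, name='y')
w = tf.Variable(0.0, name='w')
cost = tf.square(tf.multiply(w, x) - y)  # squared error for the toy model

optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        # each run of the optimizer node performs one back-propagation step
        _, c = sess.run([optimizer, cost], feed_dict={x: 3.0, y: 6.0})
    print(sess.run(w))  # approaches 2.0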

Constructing a convolutional neural network (CNN) with TensorFlow

Related tool functions

  • tf.nn.conv2d(X, W1, strides=[1,s,s,1], padding='SAME'): X is the input matrix and W1 is the filter; this function convolves X with W1. strides gives the convolution stride along each dimension of the input, whose dimensions are [number of samples, height of the input, width of the input, depth of the input]. padding controls the filling: with 'SAME', elements are padded automatically so that (for stride 1) the output has the same height and width as the input.
  • tf.nn.max_pool(A, ksize=[1,f,f,1], strides=[1,s,s,1], padding='SAME'): performs max pooling on the input matrix A. The f in ksize is the size of the pooling window; the s in strides is the step size.
  • tf.nn.relu(Z1): applies relu to every element of Z1.
  • tf.contrib.layers.flatten(P): flattens each sample's matrix in P into a vector.
  • tf.contrib.layers.fully_connected(F, num_outputs): builds a fully connected layer, where F is the layer's input and num_outputs is the number of neurons in the layer. This function automatically initializes the layer's weights W: we only initialize the parameters of the convolutional layers ourselves, because TensorFlow initializes the fully connected layer's parameters for us. A sketch chaining these helpers together follows below.
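
As a hedged sketch of how these helpers chain into a forward pass (the input shape, the filter variable W1, and the class count are illustrative assumptions, not values from the text):

X = tf.placeholder(tf.float32, [None, 64, 64, 3])   # assumed 64x64 RGB input
W1 = tf.get_variable("W1", [4, 4, 3, 8],
                     initializer = tf.contrib.layers.xavier_initializer(seed=0))

Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')  # convolution
A1 = tf.nn.relu(Z1)                                             # activation
P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1],
                    strides=[1, 8, 8, 1], padding='SAME')       # max pooling
F = tf.contrib.layers.flatten(P1)                               # flatten each sample
Z3 = tf.contrib.layers.fully_connected(F, num_outputs = 6,
                                       activation_fn = None)    # 6 assumed classes; None keeps raw logits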

Tags: Python AI Deep Learning TensorFlow neural networks
