Python data structures and TensorFlow

In our previous section we saw how to work with Python data structures. Now we will do the same with TensorFlow.

Before we start, let’s recapitulate.

TensorFlow is a machine learning library. When we import TensorFlow as “tf”, the name “tf” gives us a module object that we can work with like any other Python object.

In the previous section we also learned what an object is, and that behind an object there are one or more classes. Those classes come with a host of attributes and methods.

As a result, with the “tf” object we can access many of those attributes and methods.
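For example, we can confirm that “tf” is an ordinary Python module object and peek at the attributes it exposes. This is only a quick sketch; the exact number of names depends on the installed TensorFlow version.

# "tf" is a regular Python module object
print(type(tf))               # <class 'module'>
print(callable(tf.constant))  # True -- tf.constant is one of its callable attributes
print(len(dir(tf)))           # dir() lists all attributes and methods; the count varies by version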

Keeping this in mind, we should check the TensorFlow version first.

# Import TensorFlow
import tensorflow as tf
print(tf.__version__) # find the version number (should be 2.x+)

# output
2.8.2

Now we will define a variable named “scalar” and assign a single number to it.

# Create a scalar; in TensorFlow, rank means dimension
scalar = tf.constant(7)
scalar
# output
<tf.Tensor: shape=(), dtype=int32, numpy=7>

# Checking dimensions
scalar.ndim
# output
0

A scalar has no dimensions. In TensorFlow, rank means dimension, so the rank of a scalar is 0.
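We can also ask TensorFlow for the rank directly with tf.rank(), which returns the rank itself wrapped in a tensor. A quick sketch; the output should look roughly like this:

# tf.rank() returns the number of dimensions as a tensor
tf.rank(scalar)
# output
<tf.Tensor: shape=(), dtype=int32, numpy=0>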

However, a vector has one dimension, because it has both a magnitude (or size) and a direction.

A good example is a Python list.

In TensorFlow we can follow the same example that we have seen in the previous section.

vector = tf.constant([4, 7])
vector
# output
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([4, 7], dtype=int32)>

# Check the number of dimensions
vector.ndim
# output
1

Now we can look at another Python data structure: a list inside another list, which we call a matrix.

# Create a matrix (more than 1 dimension)
matrix = tf.constant(
    [[4, 7], 
    [5, 6]
    ]
)
matrix
# output
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[4, 7],
       [5, 6]], dtype=int32)>

print(matrix)
# output
tf.Tensor(
[[4 7]
 [5 6]], shape=(2, 2), dtype=int32)

matrix.ndim
# output
2

If we compare this code with the previous section, we can find one difference.

The TensorFlow output gives us more information about the data structure. Right?

Besides the values, we also get the shape and the data type of the tensor.
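In fact, we can read that information directly from the tensor’s attributes. Here is a small sketch using the matrix we just created; the .numpy() method converts the tensor back into a NumPy array.

# Inspect the tensor's attributes directly
print(matrix.shape)
print(matrix.dtype)
print(matrix.numpy())
# output
(2, 2)
<dtype: 'int32'>
[[4 7]
 [5 6]]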

Therefore, we can experiment with a tensor now. 

We know that a tensor can have “n” dimensions; that is, it may take any number of dimensions.

However, in our case we keep the data structure simple and make it three-dimensional.

# Create a tensor (3 dimensions)
tensor = tf.constant(
    [
        [[4, 7], [5, 6]],
        [[4, 7], [5, 6]],
        [[4, 7], [5, 6]]
    ]
)
tensor
# output
<tf.Tensor: shape=(3, 2, 2), dtype=int32, numpy=
array([[[4, 7],
        [5, 6]],

       [[4, 7],
        [5, 6]],

       [[4, 7],
        [5, 6]]], dtype=int32)>


tensor.ndim
# output
3
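If we wanted even more dimensions, the same idea extends naturally. As an illustration (the variable name rank_4_tensor is just ours), tf.zeros() lets us create a tensor of any rank without typing out the nested lists by hand:

# Create a 4-dimensional tensor filled with zeros
rank_4_tensor = tf.zeros([2, 3, 4, 5])
rank_4_tensor.ndim
# output
4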

