
PyTorch - Basic Operations


Feb 9, 2018

This tutorial helps NumPy or TensorFlow users to pick up PyTorch quickly.

Basic
By selecting different configuration options, the tool on the PyTorch site shows you the command required to install the latest wheel for your host platform. For example, on a Mac, the pip3 command generated by the tool is:

pip3 install http://download.pytorch.org/whl/torch-0.3.0.post4-cp36-cp36m-macosx_


pip3 install torchvision

Run the following code and you should see an un-initialized 2x3 Tensor printed out. A Tensor is a data structure representing a multi-dimensional array, similar to a NumPy ndarray. Its size is equivalent to the shape of the NumPy ndarray.

import torch

x = torch.Tensor(2, 3) # Create an un-initialized Tensor of size 2x3


print(x) # Print out the Tensor

# 0.0000e+00 -2.0000e+00 0.0000e+00


# -2.0000e+00 1.8856e+31 4.7414e+16
# [torch.FloatTensor of size 2x3]

This printout represents the Tensor type and its size (dimension: 2x3).

[torch.FloatTensor of size 2x3]
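
To see the size/shape correspondence directly, here is a minimal sketch comparing a Tensor with an ndarray:

import numpy as np
import torch

x = torch.Tensor(2, 3)  # Un-initialized 2x3 Tensor
a = np.zeros((2, 3))    # 2x3 ndarray

print(x.size())  # torch.Size([2, 3])
print(a.shape)   # (2, 3): the same information as x.size()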

Sample programs:

import torch

# Initialize

x = torch.Tensor(2, 3) # An un-initialized Tensor object. x holds garbage data.


y = torch.rand(2, 3) # Initialize with random values

# Operations

z1 = x + y
z2 = torch.add(x, y) # Same as above

print(z2) # [torch.FloatTensor of size 2x3]

Operations
The syntax of a tensor operation:

torch.is_tensor(obj)
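
For example, a minimal sketch of torch.is_tensor in use:

x = torch.Tensor(2, 3)
print(torch.is_tensor(x))       # True
print(torch.is_tensor([1, 2]))  # False: a plain Python list is not a Tensor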

In-place operation

All operations ending with "_" are in-place operations:

x.add_(y) # In-place version of x + y: the result is stored back in x
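
A minimal sketch contrasting the in-place and out-of-place forms:

x = torch.zeros(2, 3)
y = torch.ones(2, 3)
z = x.add(y)  # Out-of-place: x is unchanged, the result is stored in z
x.add_(y)     # In-place: x itself now holds the sum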

out
We can assign the operation result to a variable. Alternatively, all operation methods have an
out parameter to store the result.

r1 = torch.Tensor(2, 3)
torch.add(x, y, out=r1)

It is the same as:

r2 = torch.add(x, y)


Indexing

We can use NumPy-style indexing on Tensors:

x[:, 1] # Can use numpy type indexing


x[:, 0] = 0 # For assignment
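
A minimal sketch of both patterns on a concrete Tensor:

x = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(x[:, 1])  # 2 4: the second column
x[:, 0] = 0     # Assign 0 to the first column
print(x)        # 0 2 3
                # 0 5 6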

Conversion between NumPy ndarray and Tensor

During the conversion, both the ndarray and the Tensor share the same memory storage. Changing a value on either side will affect the other.

# Conversion
import numpy as np

a = np.array([1, 2, 3])
v = torch.from_numpy(a) # Convert a numpy array to a Tensor

b = v.numpy() # Tensor to numpy


b[1] = -1            # NumPy array and Tensor share the same memory
assert a[1] == b[1]  # Changing the NumPy array also changes the Tensor

Tensor meta-data

Size of the Tensor and number of elements in Tensor:

### Basic Tensor operation

x.size() # torch.Size([2, 3])


torch.numel(x) # 6: number of elements in x

Reshape Tensor

Reshape a Tensor to a different size:

### Tensor resizing


x = torch.randn(2, 3) # Size 2x3
y = x.view(6)         # Resize x to size 6
z = x.view(-1, 2)     # Size 3x2: the -1 dimension is inferred from the 6 elements


Create a Tensor

Creating and initializing a Tensor

### Create a Tensor

v = torch.Tensor(2, 3) # An un-initialized torch.FloatTensor of size 2x3


v = torch.Tensor([[1,2],[4,5]]) # A Tensor initialized with a specific array
v = torch.LongTensor([1,2,3]) # A Tensor of type Long

Create a random Tensor

To make results reproducible, we often set the random seed to a specific value first.

torch.manual_seed(1)

v = torch.rand(2, 3)   # Initialize with random numbers (uniform distribution)
v = torch.randn(2, 3)  # With normal distribution (SD=1, mean=0)
v = torch.randperm(4)  # Size 4. Random permutation of integers from 0 to 3
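
A minimal sketch of why the seed matters: resetting it replays the same random sequence.

torch.manual_seed(1)
a = torch.rand(2, 3)
torch.manual_seed(1)
b = torch.rand(2, 3)  # b holds the same values as a because the seed was reset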

Tensor type

x = torch.randn(5, 3).type(torch.FloatTensor)
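
Tensors can also be cast with type-specific methods; a minimal sketch using the standard cast helpers:

x = torch.randn(5, 3)                     # FloatTensor by default
x_long = x.long()                         # Cast to LongTensor (truncates the values)
x_float = x_long.type(torch.FloatTensor)  # Cast back to FloatTensor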

Identity matrices; fill a Tensor with 0, 1, or a given value

eye = torch.eye(3) # Create an identity 3x3 tensor

v = torch.ones(10) # A tensor of size 10 containing all ones


v = torch.ones(2, 1, 2, 1) # Size 2x1x2x1
v = torch.ones_like(eye) # A tensor with same shape as eye. Fill it with 1

v = torch.zeros(10) # A tensor of size 10 containing all zeros

# 1 1 1
# 2 2 2
# 3 3 3
v = torch.ones(3, 3)
v[1].fill_(2)
v[2].fill_(3)

Initialize a Tensor with a range of values

v = torch.arange(5) # Similar to range(5) but creates a Tensor


v = torch.arange(0, 5, step=1) # Size 5. Similar to range(0, 5, 1)

# 0 1 2
# 3 4 5
# 6 7 8
v = torch.arange(9)
v = v.view(3, 3)

Initialize a linear or log scale Tensor

v = torch.linspace(1, 10, steps=10)            # Create a Tensor with 10 linear points between 1 and 10
v = torch.logspace(start=-10, end=10, steps=5) # Size 5: 1.0e-10 1.0e-05 1.0e+00 1.0e+05 1.0e+10

Initialize a ByteTensor

c = torch.ByteTensor([0, 1, 1, 0])
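
A ByteTensor commonly serves as a boolean mask; a minimal sketch (the values below are illustrative):

c = torch.ByteTensor([0, 1, 1, 0])
x = torch.Tensor([10, 20, 30, 40])
r = torch.masked_select(x, c)  # Size 2: 20 30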

Summary

Creation Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: eye
.. autofunction:: from_numpy
.. autofunction:: linspace
.. autofunction:: logspace
.. autofunction:: ones
.. autofunction:: ones_like
.. autofunction:: arange


.. autofunction:: range
.. autofunction:: zeros
.. autofunction:: zeros_like

Indexing, Slicing, Joining, Mutating Ops


We first prepare a matrix that will be used in this section:

# 0 1 2
# 3 4 5
# 6 7 8
v = torch.arange(9)
v = v.view(3, 3)

Concatenate, stack

# Concatenation
torch.cat((x, x, x), 0) # Concatenate along dimension 0

# Stack
r = torch.stack((v, v))

Gather: reorganize data elements

# Gather element
# torch.gather(input, dim, index, out=None)
# out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
# out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
# out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2

# 0 1
# 4 3
# 8 7
r = torch.gather(v, 1, torch.LongTensor([[0,1],[1,0],[2,1]]))
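# With dim=1 the index picks within each row: row 0 takes columns [0, 1]
# giving (0, 1), row 1 takes columns [1, 0] giving (4, 3), and row 2
# takes columns [2, 1] giving (8, 7).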

Split a Tensor


# Split a Tensor into 3 chunks


# (
# 0 1 2
# [torch.FloatTensor of size 1x3]
# ,
# 3 4 5
# [torch.FloatTensor of size 1x3]
# ,
# 6 7 8
# [torch.FloatTensor of size 1x3]
# )
r = torch.chunk(v, 3)

# Split a Tensor into chunks of at most size 2


# (
# 0 1 2
# 3 4 5
# [torch.FloatTensor of size 2x3]
# ,
# 6 7 8
# [torch.FloatTensor of size 1x3]
# )
r = torch.split(v, 2)

Index select, masked select

# Index select
# 0 2
# 3 5
# 6 8
indices = torch.LongTensor([0, 2])
r = torch.index_select(v, 1, indices) # Select columns 0 and 2 along dim 1

# Masked select
# 0 0 0
# 1 1 1
# 1 1 1
mask = v.ge(3)


# Size 6: 3 4 5 6 7 8
r = torch.masked_select(v, mask)

Squeeze and unsqueeze

t = torch.ones(2,1,2,1) # Size 2x1x2x1


r = torch.squeeze(t) # Size 2x2
r = torch.squeeze(t, 1) # Squeeze dimension 1: Size 2x2x1

# Un-squeeze a dimension
x = torch.Tensor([1, 2, 3])
r = torch.unsqueeze(x, 0) # Size: 1x3
r = torch.unsqueeze(x, 1) # Size: 3x1

Non-zero elements

# Non-zero
# [torch.LongTensor of size 8x2]
# [i, j] index for non-zero elements
# 0 1
# 0 2
# 1 0
# 1 1
# 1 2
# 2 0
# 2 1
# 2 2
r = torch.nonzero(v)

take

# Flatten the Tensor and return elements at the given indices


# Size 3: 0, 4, 2
r = torch.take(v, torch.LongTensor([0, 4, 2]))

transpose


# Transpose dim 0 and 1


r = torch.transpose(v, 0, 1)

Summary

Indexing, Slicing, Joining, Mutating Ops
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: cat
.. autofunction:: chunk
.. autofunction:: gather
.. autofunction:: index_select
.. autofunction:: masked_select
.. autofunction:: nonzero
.. autofunction:: split
.. autofunction:: squeeze
.. autofunction:: stack
.. autofunction:: t - Transpose a 2-D tensor
.. autofunction:: take
.. autofunction:: transpose
.. autofunction:: unbind - Removes a tensor dimension
.. autofunction:: unsqueeze
.. autofunction:: where - Select x or y Tensor elements based on a condition

Distribution

Uniform, Bernoulli, multinomial, and normal distributions

# 2x2: A uniformly distributed random matrix with range [0, 1]


r = torch.Tensor(2, 2).uniform_(0, 1)

# bernoulli
r = torch.bernoulli(r) # Size 2x2. Bernoulli with probability p stored in the elements

# Multinomial
w = torch.Tensor([0, 4, 8, 2]) # Create a tensor of weights
r = torch.multinomial(w, 4, replacement=True) # Size 4: 3, 2, 1, 2
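# Note: the weights need not sum to 1; indices are drawn in proportion to
# their weight, so index 2 (weight 8) is sampled most often and index 0
# (weight 0) never appears.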

# Normal distribution

# From 10 means and 10 standard deviations
r = torch.normal(means=torch.arange(1, 11), std=torch.arange(1, 0.1, -0.1)) # Size 10

Summary

Random sampling
----------------------------------
.. autofunction:: manual_seed - Set a manual seed
.. autofunction:: initial_seed - Return the initial random seed
.. autofunction:: get_rng_state
.. autofunction:: set_rng_state
.. autodata:: default_generator
.. autofunction:: bernoulli
.. autofunction:: multinomial
.. autofunction:: normal
.. autofunction:: rand
.. autofunction:: randn
.. autofunction:: randperm

In-place random sampling
~~~~~~~~~~~~~~~~~~~~~~~~

There are a few more in-place random sampling functions defined on Tensors as well:

- :func:`torch.Tensor.bernoulli_` - in-place version of :func:`torch.bernoulli`


- :func:`torch.Tensor.cauchy_` - numbers drawn from the Cauchy distribution
- :func:`torch.Tensor.exponential_` - numbers drawn from the exponential distribution
- :func:`torch.Tensor.geometric_` - elements drawn from the geometric distribution
- :func:`torch.Tensor.log_normal_` - samples from the log-normal distribution
- :func:`torch.Tensor.normal_` - in-place version of :func:`torch.normal`
- :func:`torch.Tensor.random_` - numbers sampled from the discrete uniform distribution
- :func:`torch.Tensor.uniform_` - numbers sampled from the continuous uniform distribution

Point-wise operations
### Math operations
f = torch.FloatTensor([-1, -2, 3])
r = torch.abs(f) # 1 2 3


# Add a scalar or a scaled Tensor to all elements

r = torch.add(x, 10)    # r = x + 10
r = torch.add(x, 10, y) # r = x + 10 * y

# Clamp the value of a Tensor


r = torch.clamp(v, min=-0.5, max=0.5)

# Element-wise divide (the +0.03 offset avoids division by zero, since v contains a 0)
r = torch.div(v, v + 0.03)

# Element-wise multiply
r = torch.mul(v, v)

Summary

Pointwise Ops
~~~~~~~~~~~~~~~~~~~~~~

.. autofunction:: abs
.. autofunction:: acos - arc cosine
.. autofunction:: add
.. autofunction:: addcdiv - element wise: t1 + s * t2/t3
.. autofunction:: addcmul - element wise: t1 + s * t2 * t3
.. autofunction:: asin - arc sine
.. autofunction:: atan
.. autofunction:: atan2
.. autofunction:: ceil - ceiling
.. autofunction:: clamp - clamp elements into a range
.. autofunction:: cos
.. autofunction:: cosh
.. autofunction:: div - divide
.. autofunction:: erf - Gaussian error function
.. autofunction:: erfinv - Inverse error function
.. autofunction:: exp
.. autofunction:: expm1 - exponential of each element minus 1
.. autofunction:: floor
.. autofunction:: fmod - element wise remainder of division
.. autofunction:: frac - fraction part 3.4 -> 0.4
.. autofunction:: lerp - linear interpolation
.. autofunction:: log - natural log
.. autofunction:: log1p - y = log(1 + x)
.. autofunction:: mul - multiply

.. autofunction:: neg
.. autofunction:: pow
.. autofunction:: reciprocal - 1/x
.. autofunction:: remainder - remainder of division
.. autofunction:: round
.. autofunction:: rsqrt - the reciprocal of the square-root
.. autofunction:: sigmoid - sigmoid(x)
.. autofunction:: sign
.. autofunction:: sin
.. autofunction:: sinh
.. autofunction:: sqrt
.. autofunction:: tan
.. autofunction:: tanh
.. autofunction:: trunc - truncated integer

Reduction operations
### Reduction operations

# Accumulate sum
# 0 1 2
# 3 5 7
# 9 12 15
r = torch.cumsum(v, dim=0)

# L-P norm
r = torch.dist(v, v+3, p=2) # L-2 norm: ((3^2)*9)^(1/2) = 9.0

# Mean
# 1 4 7
r = torch.mean(v, 1) # Size 3: Mean in dim 1

r = torch.mean(v, 1, True) # Size 3x1 since keep dimension = True

# Sum
# 3 12 21
r = torch.sum(v, 1) # Sum over dim 1

# 36
r = torch.sum(v)


Summary

Reduction Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: cumprod - accumulate product of elements x1*x2*x3...
.. autofunction:: cumsum
.. autofunction:: dist - L-p norm
.. autofunction:: mean
.. autofunction:: median
.. autofunction:: mode
.. autofunction:: norm - L-p norm
.. autofunction:: prod - accumulate product
.. autofunction:: std - compute standard deviation
.. autofunction:: sum
.. autofunction:: var - variance of all elements

Comparison operation
### Comparison
# Size 3x3: Element-wise comparison
r = torch.eq(v, v)

# Max element along dim 1 with the corresponding index
# (2 5 8, [torch.FloatTensor of size 3])
# (2 2 2, [torch.LongTensor of size 3])
r = torch.max(v, 1)

Sort

# Sort
# The second returned tensor stores the indices
# (
# 0 1 2
# 3 4 5
# 6 7 8
# [torch.FloatTensor of size 3x3]
# ,
# 0 1 2
# 0 1 2
# 0 1 2
# [torch.LongTensor of size 3x3]
# )
r = torch.sort(v, 1)

k-th and top k

# k-th smallest element (k starts from 1) in ascending order, with the corresponding indices


# (1 4 7
# [torch.FloatTensor of size 3]
# , 1 1 1
# [torch.LongTensor of size 3]
# )
r = torch.kthvalue(v, 2)

# Top k
# (
# 2 5 8
# [torch.FloatTensor of size 3x1]
# ,
# 2 2 2
# [torch.LongTensor of size 3x1]
# )
r = torch.topk(v, 1)

Comparison Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: eq - Compare elements
.. autofunction:: equal - True if 2 tensors are the same
.. autofunction:: ge - Element-wise greater or equal comparison
.. autofunction:: gt
.. autofunction:: kthvalue - k-th element
.. autofunction:: le
.. autofunction:: lt
.. autofunction:: max
.. autofunction:: min
.. autofunction:: ne
.. autofunction:: sort
.. autofunction:: topk - top k

Matrix, vector multiplication

Dot product of Tensors


# Dot product of 2 tensors


r = torch.dot(torch.Tensor([4, 2]), torch.Tensor([3, 1])) # 4*3 + 2*1 = 14

Matrix, vector products

### Matrix, vector products

# Matrix x vector
mat = torch.randn(2, 4) # Size 2x4
vec = torch.randn(4)    # Size 4
r = torch.mv(mat, vec)  # Size 2

# Matrix + Matrix x vector
M = torch.randn(2)      # Size 2
mat = torch.randn(2, 3)
vec = torch.randn(3)
r = torch.addmv(M, mat, vec) # Size 2: M + mat x vec

Matrix, Matrix products

# Matrix x Matrix
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 4)
r = torch.mm(mat1, mat2) # Size 2x4

# Matrix + Matrix x Matrix
M = torch.randn(3, 4)
mat1 = torch.randn(3, 2)
mat2 = torch.randn(2, 4)
r = torch.addmm(M, mat1, mat2) # Size 3x4: M + mat1 x mat2

Outer product of vectors

# Outer product of 2 vectors
v1 = torch.arange(1, 4) # Size 3
v2 = torch.arange(1, 3) # Size 2
r = torch.ger(v1, v2)   # Size 3x2

# Add M to the outer product of 2 vectors
vec1 = torch.arange(1, 4) # Size 3
vec2 = torch.arange(1, 3) # Size 2
M = torch.zeros(3, 2)
r = torch.addr(M, vec1, vec2) # Size 3x2

Batch matrix multiplication

# Batch Matrix x Matrix
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)
r = torch.bmm(batch1, batch2) # Size 10x3x5

# Batch Matrix + Matrix x Matrix
# Performs a batch matrix-matrix product and accumulates the result into M:
# 3x2 + sum over the batch of (5x3x4 X 5x4x2) -> 3x2
M = torch.randn(3, 2)
batch1 = torch.randn(5, 3, 4)
batch2 = torch.randn(5, 4, 2)
r = torch.addbmm(M, batch1, batch2)

Other

Cross product

m1 = torch.ones(3, 5)
m2 = torch.ones(3, 5)
v1 = torch.ones(3)

# Cross product
# Size 3x5
r = torch.cross(m1, m2)
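# Since m1 and m2 are identical, each cross product is the zero vector,
# so r is all zeros.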


Diagonal matrix

# Diagonal matrix
# Size 3x3
r = torch.diag(v1)

Histogram

# Histogram
# [0, 2, 1, 0]
torch.histc(torch.FloatTensor([1, 2, 1]), bins=4, min=0, max=3)
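# With min=0 and max=3, the 4 bins cover [0, 0.75), [0.75, 1.5),
# [1.5, 2.25) and [2.25, 3]: the two 1s fall into the second bin and
# the single 2 into the third, giving [0, 2, 1, 0].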

Renormalization

# Renormalize
# Renormalize each sub-tensor along dim 0 so that its L-1 norm does not exceed 1
# 0.0000 0.3333 0.6667
# 0.2500 0.3333 0.4167
# 0.2857 0.3333 0.3810
r = torch.renorm(v, 1, 0, 1)
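# Each row's L-1 norm (3, 12 and 21) exceeds the maxnorm of 1,
# so every row is divided by its own L-1 norm.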

Summary

Other Operations
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: cross - cross product
.. autofunction:: diag - convert vector to diagonal matrix
.. autofunction:: histc - histogram
.. autofunction:: renorm - renormalize a tensor
.. autofunction:: trace - tr(M)
.. autofunction:: tril - lower triangle of 2-D matrix
.. autofunction:: triu - upper triangle

A summary of available operations:

Tensors
----------------------------------
.. autofunction:: is_tensor


.. autofunction:: is_storage
.. autofunction:: set_default_tensor_type
.. autofunction:: numel
.. autofunction:: set_printoptions

Serialization
----------------------------------
.. autofunction:: save - Saves an object to a disk file
.. autofunction:: load - Loads an object saved with torch.save() from a disk file

Parallelism
----------------------------------
.. autofunction:: get_num_threads - Gets the number of OpenMP threads used for parallelism
.. autofunction:: set_num_threads

Spectral Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: stft - Short-time Fourier transform
.. autofunction:: hann_window - Hann window function
.. autofunction:: hamming_window - Hamming window function
.. autofunction:: bartlett_window - Bartlett window function

BLAS and LAPACK Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: addbmm - Batch add and multiply matrices: n×p + sum of (b×n×m X b×m×p) -> n×p
.. autofunction:: addmm - Add and multiply matrices: n×p + n×m X m×p -> n×p
.. autofunction:: addmv - Add matrix-vector product: n + n×m X m -> n
.. autofunction:: addr - Outer product of vectors
.. autofunction:: baddbmm - Batch add and multiply matrices
.. autofunction:: bmm - Batch multiply matrices: b×n×m X b×m×p -> b×n×p
.. autofunction:: btrifact - LU factorization
.. autofunction:: btrifact_with_info
.. autofunction:: btrisolve
.. autofunction:: btriunpack
.. autofunction:: dot - Dot product of 2 tensors
.. autofunction:: eig - Eigenvalues and eigenvectors of a square matrix
.. autofunction:: gels - Least-squares solution minimizing the norm of (AX - B)
.. autofunction:: geqrf
.. autofunction:: ger - Outer product of 2 vectors
.. autofunction:: gesv - Solve linear equations
.. autofunction:: inverse - Inverse of square matrix
.. autofunction:: det - Determinant of a 2D square Variable


.. autofunction:: matmul - Matrix product of tensors


.. autofunction:: mm - Matrix multiplication
.. autofunction:: mv - Matrix vector product
.. autofunction:: orgqr - Orthogonal matrix Q
.. autofunction:: ormqr - Multiplies a matrix by the orthogonal Q matrix
.. autofunction:: potrf - Cholesky decomposition
.. autofunction:: potri - Inverse of a positive semidefinite matrix given its Cholesky factor
.. autofunction:: potrs - Solve a linear equation with a positive semidefinite matrix
.. autofunction:: pstrf - Cholesky decomposition of a positive semidefinite matrix
.. autofunction:: qr - QR decomposition
.. autofunction:: svd - Singular value decomposition
.. autofunction:: symeig - Eigenvalues and eigenvectors
.. autofunction:: trtrs - Solves a system of equations with a triangular coefficient matrix
