QTensor Module

VQNet's quantum machine learning framework uses QTensor as its core data structure, exposed through a Python interface. QTensor supports common multidimensional array operations, including creation functions, mathematical functions, logic functions, matrix transformations, and more.

QTensor’s Functions and Attributes

QTensor

class pyvqnet.tensor.tensor.QTensor(data, requires_grad=False, nodes=None, device=0, dtype=None, name='')

Wrapper of data structure with dynamic computational graph construction and automatic differentiation.

Parameters:
  • data – _core.Tensor or numpy array which represents a QTensor

  • requires_grad – whether the tensor's gradient should be tracked, defaults to False

  • nodes – list of successors in the computational graph, defaults to None

  • device – device on which to store the QTensor, default = 0 (CPU)

  • dtype – the data type of the parameter, defaults to None, which uses the default data type kfloat32 (32-bit floating point)

  • name – the name of the QTensor, default: "".

Returns:

output QTensor

Note

QTensor supports the internal data types (dtype): kbool, kuint8, kint8, kint16, kint32, kint64, kfloat32, kfloat64, kcomplex64, kcomplex128.

These correspond to the C++ types: bool, uint8_t, int8_t, int16_t, int32_t, int64_t, float, double, complex<float>, complex<double>.

Example:

from pyvqnet.tensor import QTensor
from pyvqnet.dtype import *
import numpy as np

t1 = QTensor(np.ones([2,3]))
t2 =  QTensor([2,3,4j,5])
t3 =  QTensor([[[2,3,4,5],[2,3,4,5]]],dtype=kbool)
print(t1)
print(t2)
print(t3)
# [[1. 1. 1.]
#  [1. 1. 1.]]
# [2.+0.j 3.+0.j 0.+4.j 5.+0.j]
# [[[ True  True  True  True]
#   [ True  True  True  True]]]
ndim

Returns the number of dimensions of a tensor.

Returns:

The number of dimensions of a tensor.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5], requires_grad=True)
print(a.ndim)

# 1
shape

Returns the dimensions of a tensor.

Returns:

A list of the dimensions of the tensor.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5], requires_grad=True)
print(a.shape)

# [4]
size

Returns the number of elements of a tensor.

Returns:

The number of elements of a tensor.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5], requires_grad=True)
print(a.size)

# 4
numel()

Returns the number of elements in a tensor.

Returns:

The number of elements in a tensor.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5], requires_grad=True)
print(a.numel())

# 4
dtype

Returns the data type of a tensor.

QTensor internal data type dtype supports: kbool = 0, kuint8 = 1, kint8 = 2, kint16 = 3, kint32 = 4, kint64 = 5, kfloat32 = 6, kfloat64 = 7, kcomplex64 = 8, kcomplex128 = 9.

Returns:

The data type of the tensor.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5])
print(a.dtype)
# 4
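
Per the integer mapping above, a tensor created with an explicit dtype should report the corresponding code. A minimal sketch (assuming kfloat64 can be imported from pyvqnet.dtype, as the constructor example earlier suggests):

from pyvqnet.tensor import QTensor
from pyvqnet.dtype import kfloat64

a = QTensor([2.0, 3.0, 4.0, 5.0], dtype=kfloat64)
print(a.dtype)
# 7  (kfloat64, per the integer mapping above)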
is_dense

Whether it is a dense tensor.

Returns:

When the data is dense, it returns 1; otherwise it returns 0.

Example:

from pyvqnet.tensor import QTensor

a = QTensor([2, 3, 4, 5])
print(a.is_dense)
#1
is_csr

Whether it is a sparse 2-dimensional matrix in Compressed Sparse Row format.

Returns:

When the data is a sparse tensor in CSR format, it returns 1; otherwise it returns 0.

Example:

from pyvqnet.tensor import QTensor,dense_to_csr

a = QTensor([[2, 3, 4, 5]])
b = dense_to_csr(a)
print(b.is_csr)
#1
zero_grad()

Sets the gradient to zero. Used by the optimizer during the optimization process.

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t3  =  QTensor([2,3,4,5],requires_grad = True)
t3.zero_grad()
print(t3.grad)

# [0, 0, 0, 0]
backward(grad=None)

Computes the gradient of the current QTensor.

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

target = QTensor([[0, 0, 1, 0, 0, 0, 0, 0, 0, 0.2]], requires_grad=True)
y = 2*target + 3
y.backward()
print(target.grad)
#[[2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]]
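
backward also accepts an explicit grad argument, as the matmul example later in this section shows. A minimal hedged sketch passing a non-default upstream gradient (exact printed formatting may differ):

from pyvqnet.tensor import tensor, QTensor

target = QTensor([[0, 0, 1, 0, 0, 0, 0, 0, 0, 0.2]], requires_grad=True)
y = 2 * target + 3
# pass an explicit upstream gradient instead of the implicit all-ones gradient
y.backward(tensor.ones_like(y) * 0.5)
print(target.grad)
# each entry should be 2 * 0.5 = 1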
to_numpy()

Copies the QTensor's data to a new numpy.array.

Returns:

a new numpy.array containing the QTensor data

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t3  =  QTensor([2,3,4,5],requires_grad = True)
t4 = t3.to_numpy()
print(t4)

# [2. 3. 4. 5.]
item()

Returns the only element in the QTensor. Raises a RuntimeError if the QTensor has more than one element.

Returns:

the single element of this QTensor

Example:

from pyvqnet.tensor import tensor

t = tensor.ones([1])
print(t.item())

# 1.0
argmax(*kargs)

Return the indices of the maximum value of all elements in the input QTensor, or return the indices of the maximum values of a QTensor across a dimension.

Parameters:
  • dim – the dimension to reduce; only a single axis is accepted. If dim == None, returns the indices of the maximum value of all elements in the input tensor. The valid dim range is [-R, R), where R is the input's ndim; when dim < 0, it works the same way as dim + R.

  • keepdims – whether the output QTensor has dim retained or not.

Returns:

the indices of the maximum value in the input QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
a = QTensor([[1.3398, 0.2663, -0.2686, 0.2450],
            [-0.7401, -0.8805, -0.3402, -1.1936],
            [0.4907, -1.3948, -1.0691, -0.3132],
            [-1.6092, 0.5419, -0.2993, 0.3195]])
flag = a.argmax()
print(flag)

# [0]

flag_0 = a.argmax([0], True)
print(flag_0)

# [
# [0, 3, 0, 3]
# ]

flag_1 = a.argmax([1], True)
print(flag_1)

# [
# [0],
# [2],
# [0],
# [1]
# ]
argmin(*kargs)

Return the indices of the minimum value of all elements in the input QTensor, or return the indices of the minimum values of a QTensor across a dimension.

Parameters:
  • dim – the dimension to reduce; only a single axis is accepted. If dim == None, returns the indices of the minimum value of all elements in the input tensor. The valid dim range is [-R, R), where R is the input's ndim; when dim < 0, it works the same way as dim + R.

  • keepdims – whether the output QTensor has dim retained or not.

Returns:

the indices of the minimum value in the input QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
a = QTensor([[1.3398, 0.2663, -0.2686, 0.2450],
            [-0.7401, -0.8805, -0.3402, -1.1936],
            [0.4907, -1.3948, -1.0691, -0.3132],
            [-1.6092, 0.5419, -0.2993, 0.3195]])
flag = a.argmin()
print(flag)

# [12]

flag_0 = a.argmin([0], True)
print(flag_0)

# [
# [3, 2, 2, 1]
# ]

flag_1 = a.argmin([1], False)
print(flag_1)

# [2, 3, 1, 0]
fill_(v)

Fill the QTensor with the specified value inplace.

Parameters:

v – a scalar value

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
shape = [2, 3]
value = 42
t = tensor.zeros(shape)
t.fill_(value)
print(t)

# [
# [42, 42, 42],
# [42, 42, 42]
# ]
all()

Return True if all values in the QTensor are non-zero.

Returns:

True if all values in the QTensor are non-zero.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
shape = [2, 3]
t = tensor.zeros(shape)
t.fill_(1.0)
flag = t.all()
print(flag)

# True
any()

Return True if any value in the QTensor is non-zero.

Returns:

True if any value in the QTensor is non-zero.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

shape = [2, 3]
t = tensor.ones(shape)
t.fill_(1.0)
flag = t.any()
print(flag)

# True
fill_rand_binary_(v=0.5)

Fills a QTensor with values randomly sampled from a binomial distribution.

If a value randomly drawn from the binomial distribution is greater than the binarization threshold, the corresponding position of the QTensor is set to 1, otherwise 0.

Parameters:

v – binarization threshold

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(6).reshape(2, 3).astype(np.float32)
t = QTensor(a)
t.fill_rand_binary_(2)
print(t)

# [
# [1, 1, 1],
# [1, 1, 1]
# ]
fill_rand_signed_uniform_(v=1)

Fills a QTensor with values randomly sampled from a signed uniform distribution. The generated values are scaled by v.

Parameters:

v – scale factor of the values generated by the signed uniform distribution

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(6).reshape(2, 3).astype(np.float32)
t = QTensor(a)
value = 42

t.fill_rand_signed_uniform_(value)
print(t)

# [
# [12.8852444, 4.4327269, 4.8489408],
# [-24.3309803, 26.8036957, 39.4903450]
# ]
fill_rand_uniform_(v=1)

Fills a QTensor with values randomly sampled from a uniform distribution. The generated values are scaled by v.

Parameters:

v – scale factor of the values generated by the uniform distribution

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(6).reshape(2, 3).astype(np.float32)
t = QTensor(a)
value = 42
t.fill_rand_uniform_(value)
print(t)

# [
# [20.0404720, 14.4064417, 40.2955666],
# [5.5692234, 26.2520485, 35.3326073]
# ]
fill_rand_normal_(m=0, s=1, fast_math=True)

Fills a QTensor with values randomly sampled from a normal distribution with mean m and standard deviation s. fast_math controls whether the fast-math mode is used.

Parameters:
  • m – mean of the normal distribution

  • s – standard deviation of the normal distribution

  • fast_math – whether to use the fast-math mode, defaults to True

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(6).reshape(2, 3).astype(np.float32)
t = QTensor(a)
t.fill_rand_normal_(2, 10, True)
print(t)

# [
# [-10.4446531    4.9158096   2.9204607],
# [ -7.2682705   8.1267328    6.2758742 ],
# ]
transpose(new_dims=None)

Reverses or permutes the axes of the tensor. If new_dims is None, the dimensions are reversed.

Parameters:

new_dims – the new order of the dimensions (list of integers).

Returns:

result QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape([2, 2, 3]).astype(np.float32)
t = QTensor(a)
rlt = t.transpose([2,0,1])
print(rlt)
# [
# [[0, 3],
#  [6, 9]],
# [[1, 4],
#  [7, 10]],
# [[2, 5],
#  [8, 11]]
# ]
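
As noted above, calling transpose with new_dims left as None reverses the axis order; a minimal sketch (the reversed shape follows from the description, printed formatting may differ):

from pyvqnet.tensor import tensor

t = tensor.ones([2, 3, 4])
rlt = t.transpose()    # new_dims defaults to None, so the axis order is reversed
print(rlt.shape)
# [4, 3, 2]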
reshape(new_shape)

Changes the tensor's shape and returns a new QTensor.

Parameters:

new_shape – the new shape (list of integers)

Returns:

a new QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape(R, C).astype(np.float32)
t = QTensor(a)
reshape_t = t.reshape([C, R])
print(reshape_t)
# [
# [0, 1, 2],
# [3, 4, 5],
# [6, 7, 8],
# [9, 10, 11]
# ]
reshape_(new_shape)

Changes the shape of the current QTensor in place. This interface first tries to reshape without moving the underlying memory; if that fails, the data is copied to new memory.

Warning

It is recommended to use the reshape interface. In some cases the underlying memory is copied rather than modified in place.

Parameters:

new_shape – the new shape (list of integers)

Returns:

None

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape(R, C).astype(np.float32)
t = QTensor(a)
t.reshape_([C, R])
print(t)

# [
# [0, 1, 2],
# [3, 4, 5],
# [6, 7, 8],
# [9, 10, 11]
# ]
getdata()

Get the QTensor’s data as a NumPy array.

Returns:

a NumPy array

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.ones([3, 4])
a = t.getdata()
print(a)

# [[1. 1. 1. 1.]
#  [1. 1. 1. 1.]
#  [1. 1. 1. 1.]]
__getitem__()

Supports slice indexing of a QTensor, as well as using a QTensor as input for advanced indexing. A new QTensor is returned.

Slices are written as start:stop:step, separated by colons; each of start, stop, and step may be omitted.

For a 1-D QTensor, indexing or slicing can only be done on a single axis.

For a 2-D or higher-dimensional QTensor, indexing or slicing can be done on multiple axes.

If a QTensor is used as an index, NumPy-style advanced indexing applies; see the NumPy documentation on advanced indexing.

If the index QTensor is the result of a logical operation, Boolean indexing is performed.

Note

Use the index form a[3,4,1]; the form a[3][4][1] is not supported.

Parameters:

item – an integer or a QTensor used as an index.

Returns:

A new QTensor.

Example:

from pyvqnet.tensor import tensor, QTensor
aaa = tensor.arange(1, 61)
aaa = aaa.reshape([4, 5, 3])
print(aaa[0:2, 3, :2])
# [
# [10, 11],
#  [25, 26]
# ]
print(aaa[3, 4, 1])
#[59]
print(aaa[:, 2, :])
# [
# [7, 8, 9],
#  [22, 23, 24],
#  [37, 38, 39],
#  [52, 53, 54]
# ]
print(aaa[2])
# [
# [31, 32, 33],
#  [34, 35, 36],
#  [37, 38, 39],
#  [40, 41, 42],
#  [43, 44, 45]
# ]
print(aaa[0:2, ::3, 2:])
# [
# [[3],
#  [12]],
# [[18],
#  [27]]
# ]
a = tensor.ones([2, 2])
b = QTensor([[1, 1], [0, 1]])
b = b > 0
c = a[b]
print(c)
#[1, 1, 1]
tt = tensor.arange(1, 56 * 2 * 4 * 4 + 1).reshape([2, 8, 4, 7, 4])
tt.requires_grad = True
index_sample1 = tensor.arange(0, 3).reshape([3, 1])
index_sample2 = QTensor([0, 1, 0, 2, 3, 2, 2, 3, 3]).reshape([3, 3])
gg = tt[:, index_sample1, 3:, index_sample2, 2:]
print(gg)
# [
# [[[[87, 88]],
# [[983, 984]]],
# [[[91, 92]],
# [[987, 988]]],
# [[[87, 88]],
# [[983, 984]]]],
# [[[[207, 208]],
# [[1103, 1104]]],
# [[[211, 212]],
# [[1107, 1108]]],
# [[[207, 208]],
# [[1103, 1104]]]],
# [[[[319, 320]],
# [[1215, 1216]]],
# [[[323, 324]],
# [[1219, 1220]]],
# [[[323, 324]],
# [[1219, 1220]]]]
# ]
__setitem__()

Supports assigning to a slice of a QTensor, as well as using a QTensor as input for advanced indexing. The QTensor is modified in place.

Slices are written as start:stop:step, separated by colons; each of start, stop, and step may be omitted.

For a 1-D QTensor, indexing or slicing can only be done on a single axis.

For a 2-D or higher-dimensional QTensor, indexing or slicing can be done on multiple axes.

If a QTensor is used as an index, NumPy-style advanced indexing applies; see the NumPy documentation on advanced indexing.

If the index QTensor is the result of a logical operation, Boolean indexing is performed.

Note

Use the index form a[3,4,1]; the form a[3][4][1] is not supported.

Parameters:

item – an integer or a QTensor used as an index

Returns:

None

Example:

from pyvqnet.tensor import tensor
aaa = tensor.arange(1, 61)
aaa = aaa.reshape([4, 5, 3])
vqnet_a2 = aaa[3, 4, 1]
aaa[3, 4, 1] = tensor.arange(10001,
                                10001 + vqnet_a2.size).reshape(vqnet_a2.shape)
print(aaa)
# [
# [[1, 2, 3],
#  [4, 5, 6],
#  [7, 8, 9],
#  [10, 11, 12],
#  [13, 14, 15]],
# [[16, 17, 18],
#  [19, 20, 21],
#  [22, 23, 24],
#  [25, 26, 27],
#  [28, 29, 30]],
# [[31, 32, 33],
#  [34, 35, 36],
#  [37, 38, 39],
#  [40, 41, 42],
#  [43, 44, 45]],
# [[46, 47, 48],
#  [49, 50, 51],
#  [52, 53, 54],
#  [55, 56, 57],
#  [58, 10001, 60]]
# ]
aaa = tensor.arange(1, 61)
aaa = aaa.reshape([4, 5, 3])
vqnet_a3 = aaa[:, 2, :]
aaa[:, 2, :] = tensor.arange(10001,
                                10001 + vqnet_a3.size).reshape(vqnet_a3.shape)
print(aaa)
# [
# [[1, 2, 3],
#  [4, 5, 6],
#  [10001, 10002, 10003],
#  [10, 11, 12],
#  [13, 14, 15]],
# [[16, 17, 18],
#  [19, 20, 21],
#  [10004, 10005, 10006],
#  [25, 26, 27],
#  [28, 29, 30]],
# [[31, 32, 33],
#  [34, 35, 36],
#  [10007, 10008, 10009],
#  [40, 41, 42],
#  [43, 44, 45]],
# [[46, 47, 48],
#  [49, 50, 51],
#  [10010, 10011, 10012],
#  [55, 56, 57],
#  [58, 59, 60]]
# ]
aaa = tensor.arange(1, 61)
aaa = aaa.reshape([4, 5, 3])
vqnet_a4 = aaa[2, :]
aaa[2, :] = tensor.arange(10001,
                            10001 + vqnet_a4.size).reshape(vqnet_a4.shape)
print(aaa)
# [
# [[1, 2, 3],
#  [4, 5, 6],
#  [7, 8, 9],
#  [10, 11, 12],
#  [13, 14, 15]],
# [[16, 17, 18],
#  [19, 20, 21],
#  [22, 23, 24],
#  [25, 26, 27],
#  [28, 29, 30]],
# [[10001, 10002, 10003],
#  [10004, 10005, 10006],
#  [10007, 10008, 10009],
#  [10010, 10011, 10012],
#  [10013, 10014, 10015]],
# [[46, 47, 48],
#  [49, 50, 51],
#  [52, 53, 54],
#  [55, 56, 57],
#  [58, 59, 60]]
# ]
aaa = tensor.arange(1, 61)
aaa = aaa.reshape([4, 5, 3])
vqnet_a5 = aaa[0:2, ::2, 1:2]
aaa[0:2, ::2,
    1:2] = tensor.arange(10001,
                            10001 + vqnet_a5.size).reshape(vqnet_a5.shape)
print(aaa)
# [
# [[1, 10001, 3],
#  [4, 5, 6],
#  [7, 10002, 9],
#  [10, 11, 12],
#  [13, 10003, 15]],
# [[16, 10004, 18],
#  [19, 20, 21],
#  [22, 10005, 24],
#  [25, 26, 27],
#  [28, 10006, 30]],
# [[31, 32, 33],
#  [34, 35, 36],
#  [37, 38, 39],
#  [40, 41, 42],
#  [43, 44, 45]],
# [[46, 47, 48],
#  [49, 50, 51],
#  [52, 53, 54],
#  [55, 56, 57],
#  [58, 59, 60]]
# ]
a = tensor.ones([2, 2])
b = tensor.QTensor([[1, 1], [0, 1]])
b = b > 0
x = tensor.QTensor([1001, 2001, 3001])

a[b] = x
print(a)
# [
# [1001, 2001],
#  [1, 3001]
# ]
GPU(device: int = DEV_GPU_0)

Clone QTensor to specified GPU device.

device specifies the device on which the internal data is stored. When device >= DEV_GPU_0, the data is stored on the GPU. If your computer has multiple GPUs, you can designate different devices to store data on: device = DEV_GPU_1, DEV_GPU_2, DEV_GPU_3, … indicates storage on the GPU with the corresponding index.

Note

QTensor cannot perform calculations across different GPUs. A CUDA error will be raised if you try to create a QTensor on a GPU whose ID exceeds the number of available GPUs.

Parameters:

device – the device on which to store the QTensor, default = DEV_GPU_0.

device = pyvqnet.DEV_GPU_0 stores on the first GPU, device = DEV_GPU_1 on the second GPU, and so on.

Returns:

a clone of the QTensor on the GPU device.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
b = a.GPU()
print(b.device)
#1000
CPU()

Clone the QTensor to the CPU device.

Returns:

a clone of the QTensor on the CPU device.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
b = a.CPU()
print(b.device)
# 0
toGPU(device: int = DEV_GPU_0)

Move QTensor to specified GPU device.

device specifies the device on which the internal data is stored. When device >= DEV_GPU_0, the data is stored on the GPU. If your computer has multiple GPUs, you can designate different devices to store data on: device = DEV_GPU_1, DEV_GPU_2, DEV_GPU_3, … indicates storage on the GPU with the corresponding index.

Note

QTensor cannot perform calculations across different GPUs. A CUDA error will be raised if you try to create a QTensor on a GPU whose ID exceeds the number of available GPUs.

Parameters:

device – the device on which to store the QTensor, default = DEV_GPU_0. device = pyvqnet.DEV_GPU_0 stores on the first GPU, device = DEV_GPU_1 on the second GPU, and so on.

Returns:

QTensor moved to the GPU device.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
a = a.toGPU()
print(a.device)
#1000
toCPU()

Moves the QTensor to the CPU device.

Returns:

QTensor moved to the CPU device.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
b = a.toCPU()
print(b.device)
# 0
isGPU()

Whether this QTensor's data is stored in GPU memory.

Returns:

Whether this QTensor's data is stored in GPU memory.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
a = a.isGPU()
print(a)
# False
isCPU()

Whether this QTensor’s data is stored in CPU host memory.

Returns:

Whether this QTensor’s data is stored in CPU host memory.

Examples:

from pyvqnet.tensor import QTensor
a = QTensor([2])
a = a.isCPU()
print(a)
# True

Create Functions

ones

pyvqnet.tensor.ones(shape, device=0, dtype=None)

Return one-tensor with the input shape.

Parameters:
  • shape – input shape

  • device – stored in which device,default 0 , CPU.

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor with the input shape.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
x = tensor.ones([2,3])
print(x)

# [
# [1, 1, 1],
# [1, 1, 1]
# ]

ones_like

pyvqnet.tensor.ones_like(t: pyvqnet.tensor.QTensor, device=0, dtype=None)

Return one-tensor with the same shape as the input QTensor.

Parameters:
  • t – input QTensor

  • device – stored in which device,default 0 , CPU.

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.ones_like(t)
print(x)

# [1, 1, 1]

full

pyvqnet.tensor.full(shape, value, device=0, dtype=None)

Create a QTensor of the specified shape and fill it with value.

Parameters:
  • shape – shape of the QTensor to create

  • value – value to fill the QTensor with.

  • device – device to use,default = 0 ,use cpu device.

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
shape = [2, 3]
value = 42
t = tensor.full(shape, value)
print(t)
# [
# [42, 42, 42],
# [42, 42, 42]
# ]

full_like

pyvqnet.tensor.full_like(t, value, device: int = 0, dtype=None)

Create a QTensor with the same shape as the input QTensor t and fill it with value.

Parameters:
  • t – input QTensor

  • value – value to fill the QTensor with.

  • device – device to use,default = 0 ,use cpu device.

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
a = tensor.randu([3,5])
value = 42
t = tensor.full_like(a, value)
print(t)
# [
# [42, 42, 42, 42, 42],
# [42, 42, 42, 42, 42],
# [42, 42, 42, 42, 42]
# ]

zeros

pyvqnet.tensor.zeros(shape, device=0, dtype=None)

Return zero-tensor of the input shape.

Parameters:
  • shape – shape of tensor

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = tensor.zeros([2, 3, 4])
print(t)
# [
# [[0, 0, 0, 0],
#  [0, 0, 0, 0],
#  [0, 0, 0, 0]],
# [[0, 0, 0, 0],
#  [0, 0, 0, 0],
#  [0, 0, 0, 0]]
# ]

zeros_like

pyvqnet.tensor.zeros_like(t: pyvqnet.tensor.QTensor, device: int = 0, dtype=None)

Return zero-tensor with the same shape as the input QTensor.

Parameters:
  • t – input QTensor

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.zeros_like(t)
print(x)

# [0, 0, 0]

arange

pyvqnet.tensor.arange(start, end, step=1, device: int = 0, dtype=None, requires_grad=False)

Create a 1D QTensor with evenly spaced values within a given interval.

Parameters:
  • start – start of interval

  • end – end of interval

  • step – spacing between values

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

  • requires_grad – should tensor’s gradient be tracked, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = tensor.arange(2, 30,4)
print(t)

# [ 2,  6, 10, 14, 18, 22, 26]

linspace

pyvqnet.tensor.linspace(start, end, num, device: int = 0, dtype=None, requires_grad=False)

Create a 1D QTensor with evenly spaced values within a given interval.

Parameters:
  • start – starting value

  • end – end value

  • num – number of samples to generate

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

  • requires_grad – should tensor’s gradient be tracked, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
start, stop, steps = -2.5, 10, 10
t = tensor.linspace(start, stop, steps)
print(t)
#[-2.5000000, -1.1111112, 0.2777777, 1.6666665, 3.0555553, 4.4444442, 5.8333330, 7.2222219, 8.6111107, 10]

logspace

pyvqnet.tensor.logspace(start, end, num, base, device: int = 0, dtype=None, requires_grad=False)

Create a 1D QTensor with evenly spaced values on a log scale.

Parameters:
  • start – base ** start is the starting value

  • end – base ** end is the final value of the sequence

  • num – number of samples to generate

  • base – the base of the log space

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

  • requires_grad – should tensor’s gradient be tracked, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
start, stop, num, base = 0.1, 1.0, 5, 10.0
t = tensor.logspace(start, stop, num, base)
print(t)

# [1.2589254, 2.1134889, 3.5481336, 5.9566211, 10]

eye

pyvqnet.tensor.eye(size, offset: int = 0, device=0, dtype=None)

Create a size x size QTensor with ones on the diagonal and zeros elsewhere.

Parameters:
  • size – size of the (square) QTensor to create

  • offset – Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal.

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
size = 3
t = tensor.eye(size)
print(t)

# [
# [1, 0, 0],
# [0, 1, 0],
# [0, 0, 1]
# ]
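
The offset parameter shifts the band of ones off the main diagonal as described above; a short hedged sketch with offset = 1 (expected values follow from the description, formatting may differ):

from pyvqnet.tensor import tensor

t = tensor.eye(3, offset=1)   # ones on the diagonal just above the main one
print(t)

# [
# [0, 1, 0],
# [0, 0, 1],
# [0, 0, 0]
# ]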

diagonal

pyvqnet.tensor.diagonal(t: QTensor, offset: int = 0, dim1=0, dim2=1)

Returns a partial view of t whose diagonal elements with respect to dim1 and dim2 are appended as a dimension at the end of the shape. offset is the offset from the main diagonal.

Parameters:
  • t – input tensor

  • offset – offset (0 means the main diagonal, positive values mean the nth diagonal above the main diagonal, negative values mean the nth diagonal below the main diagonal)

  • dim1 – first dimension to take the diagonal. Default: 0.

  • dim2 – second dimension to take the diagonal. Default: 1.

Example:

from pyvqnet.tensor import randn,diagonal

x = randn((2, 5, 4, 2))
diagonal_elements = diagonal(x, offset=-1, dim1=1, dim2=2)
print(diagonal_elements)
# [[[-0.4641751,-0.1410288,-0.1215512, 0.5423283],
#   [ 0.9556418, 0.0376572, 1.2571657, 0.8268463]],

#  [[-0.7972266, 0.2080281,-0.1157126,-0.7342224],
#   [ 1.1039937, 0.4700735, 1.0219841,-0.146358 ]]]

diag

pyvqnet.tensor.diag(t, k: int = 0)

Select diagonal elements or construct a diagonal QTensor.

Input a 2-D QTensor and return a new 1-D tensor containing the selected diagonal elements. Input a 1-D QTensor and return a new 2-D tensor whose selected diagonal elements are the input values and whose remaining elements are 0.

Parameters:
  • t – input QTensor

  • k – offset (0 for the main diagonal, positive for the nth diagonal above the main one, negative for the nth diagonal below the main one)

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(16).reshape(4, 4).astype(np.float32)
t = QTensor(a)
for k in range(-3, 4):
    u = tensor.diag(t,k=k)
    print(u)
# [12.]
# <QTensor [1] DEV_CPU kfloat32>

# [ 8.,13.]
# <QTensor [2] DEV_CPU kfloat32>

# [ 4., 9.,14.]
# <QTensor [3] DEV_CPU kfloat32>

# [ 0., 5.,10.,15.]
# <QTensor [4] DEV_CPU kfloat32>

# [ 1., 6.,11.]
# <QTensor [3] DEV_CPU kfloat32>

# [2.,7.]
# <QTensor [2] DEV_CPU kfloat32>

# [3.]
# <QTensor [1] DEV_CPU kfloat32>
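
The description above also covers 1-D input, which the example does not show: a 1-D input yields a 2-D tensor with the values on the selected diagonal. A minimal hedged sketch (values follow from the description, formatting may differ):

from pyvqnet.tensor import tensor, QTensor

v = QTensor([1., 2., 3.])
d = tensor.diag(v)    # 1-D input: build a matrix with v on the main diagonal
print(d)

# [
# [1, 0, 0],
# [0, 2, 0],
# [0, 0, 3]
# ]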

randu

pyvqnet.tensor.randu(shape, min=0.0, max=1.0, device: int = 0, dtype=None, requires_grad=False)

Create a QTensor with uniformly distributed random values.

Parameters:
  • shape – shape of the QTensor to create

  • min – minimum value of uniform distribution,default: 0.

  • max – maximum value of uniform distribution,default: 1.

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

  • requires_grad – should tensor’s gradient be tracked, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
shape = [2, 3]
t = tensor.randu(shape)
print(t)

# [
# [0.0885886, 0.9570093, 0.8304565],
# [0.6055251, 0.8721224, 0.1927866]
# ]

randn

pyvqnet.tensor.randn(shape, mean=0.0, std=1.0, device: int = 0, dtype=None, requires_grad=False)

Create a QTensor with normally distributed random values.

Parameters:
  • shape – shape of the QTensor to create

  • mean – mean of the normal distribution, default: 0.

  • std – standard deviation of the normal distribution, default: 1.

  • device – device to use,default = 0 ,use cpu device

  • dtype – The data type of the parameter, defaults None, use the default data type: kfloat32, which represents a 32-bit floating point number.

  • requires_grad – should tensor’s gradient be tracked, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
shape = [2, 3]
t = tensor.randn(shape)
print(t)

# [
# [-0.9529880, -0.4947567, -0.6399882],
# [-0.6987777, -0.0089036, -0.5084590]
# ]

binomial

pyvqnet.tensor.binomial(total_counts, probs)

Creates a binomial distribution parameterized by total_counts and probs.

Parameters:
  • total_counts – Number of Bernoulli trials.

  • probs – Event probabilities.

Returns:

QTensor for binomial distribution.

Example:

import pyvqnet.tensor as tensor

a = tensor.randu([3,4])
b = 1000

c = tensor.binomial(b,a)
print(c)

# [[221.,763., 30.,339.],
# [803.,899.,105.,356.],
# [550.,688.,828.,493.]]

multinomial

pyvqnet.tensor.multinomial(t, num_samples)

Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Parameters:
  • t – input probability distribution.

  • num_samples – number of samples.

Returns:

the indices of the drawn samples

Examples:

from pyvqnet import tensor
weights = tensor.QTensor([0.1,10, 3, 1])
idx = tensor.multinomial(weights,3)
print(idx)

from pyvqnet import tensor
weights = tensor.QTensor([0,10, 3, 2.2,0.0])
idx = tensor.multinomial(weights,3)
print(idx)

# [1 0 3]
# [1 3 2]

triu

pyvqnet.tensor.triu(t, diagonal=0)

Returns the upper triangular matrix of input t, with the rest set to 0.

Parameters:
  • t – input QTensor

  • diagonal – the offset, default = 0. The main diagonal is 0, a positive value offsets upwards, and a negative value offsets downwards.

Returns:

output a QTensor

Examples:

from pyvqnet.tensor import tensor
a = tensor.arange(1.0, 2 * 6 * 5 + 1.0).reshape([2, 6, 5])
u = tensor.triu(a, 1)
print(u)
# [
# [[0, 2, 3, 4, 5],
#  [0, 0, 8, 9, 10],
#  [0, 0, 0, 14, 15],
#  [0, 0, 0, 0, 20],
#  [0, 0, 0, 0, 0],
#  [0, 0, 0, 0, 0]],
# [[0, 32, 33, 34, 35],
#  [0, 0, 38, 39, 40],
#  [0, 0, 0, 44, 45],
#  [0, 0, 0, 0, 50],
#  [0, 0, 0, 0, 0],
#  [0, 0, 0, 0, 0]]
# ]

tril

pyvqnet.tensor.tril(t, diagonal=0)

Returns the lower triangular matrix of input t, with the rest set to 0.

Parameters:
  • t – input QTensor

  • diagonal – the offset, default = 0. The main diagonal is 0, a positive value offsets upwards, and a negative value offsets downwards.

Returns:

output a QTensor

Examples:

from pyvqnet.tensor import tensor
a = tensor.arange(1.0, 2 * 6 * 5 + 1.0).reshape([12, 5])
u = tensor.tril(a, 1)
print(u)
# [
# [1, 2, 0, 0, 0],
#  [6, 7, 8, 0, 0],
#  [11, 12, 13, 14, 0],
#  [16, 17, 18, 19, 20],
#  [21, 22, 23, 24, 25],
#  [26, 27, 28, 29, 30],
#  [31, 32, 33, 34, 35],
#  [36, 37, 38, 39, 40],
#  [41, 42, 43, 44, 45],
#  [46, 47, 48, 49, 50],
#  [51, 52, 53, 54, 55],
#  [56, 57, 58, 59, 60]
# ]

Math Functions

floor

pyvqnet.tensor.floor(t)

Return a new QTensor with the floor of the elements of input, the largest integer less than or equal to each element.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor

t = tensor.arange(-2.0, 2.0, 0.25)
u = tensor.floor(t)
print(u)

# [-2, -2, -2, -2, -1, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 1]

ceil

pyvqnet.tensor.ceil(t)

Return a new QTensor with the ceil of the elements of input, the smallest integer greater than or equal to each element.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor

t = tensor.arange(-2.0, 2.0, 0.25)
u = tensor.ceil(t)
print(u)

# [-2, -1, -1, -1, -1, -0, -0, -0, 0, 1, 1, 1, 1, 2, 2, 2]

round

pyvqnet.tensor.round(t)

Round QTensor values to the nearest integer.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor

t = tensor.arange(-2.0, 2.0, 0.4)
u = tensor.round(t)
print(u)

# [-2, -2, -1, -1, -0, -0, 0, 1, 1, 2]

sort

pyvqnet.tensor.sort(t, axis: int, descending=False, stable=True)

Sort the QTensor along the given axis.

Parameters:
  • t – input QTensor

  • axis – sort axis

  • descending – if True, sort in descending order

  • stable – whether to use stable sorting

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.random.randint(10, size=24).reshape(3,8).astype(np.float32)
A = QTensor(a)
AA = tensor.sort(A,1,False)
print(AA)

# [
# [0, 1, 2, 4, 6, 7, 8, 8],
# [2, 5, 5, 8, 9, 9, 9, 9],
# [1, 2, 5, 5, 5, 6, 7, 7]
# ]

argsort

pyvqnet.tensor.argsort(t, axis: int, descending=False, stable=True)

Return an array of indices of the same shape as input that index data along the given axis in sorted order.

Parameters:
  • t – input QTensor

  • axis – sort axis

  • descending – if True, sort in descending order

  • stable – whether to use stable sorting

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.random.randint(10, size=24).reshape(3,8).astype(np.float32)
A = QTensor(a)
bb = tensor.argsort(A,1,False)
print(bb)

# [
# [4, 0, 1, 7, 5, 3, 2, 6],
#  [3, 0, 7, 6, 2, 1, 4, 5],
#  [4, 7, 5, 0, 2, 1, 3, 6]
# ]

topK

pyvqnet.tensor.topK(t, k, axis=-1, if_descent=True)

Returns the k largest elements of the input tensor along the given axis.

If if_descent is False, the k smallest elements are returned instead.

Parameters:
  • t – input QTensor

  • k – number of largest (or smallest) elements to return

  • axis – sort axis, default = -1, the last axis

  • if_descent – sort order, defaults to True (descending)

Returns:

A new QTensor

Examples:

from pyvqnet.tensor import tensor, QTensor
x = QTensor([
    24., 13., 15., 4., 3., 8., 11., 3., 6., 15., 24., 13., 15., 3., 3., 8., 7.,
    3., 6., 11.
])
x= x.reshape([2, 5, 1, 2])
x.requires_grad = True
y = tensor.topK(x, 3, 1)
print(y)
# [
# [[[24, 15]],
# [[15, 13]],
# [[11, 8]]],
# [[[24, 13]],
# [[15, 11]],
# [[7, 8]]]
# ]
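
With if_descent=False the same call returns the k smallest elements along the axis instead; a short hedged sketch reusing the tensor above (only the output shape is asserted here):

from pyvqnet.tensor import tensor, QTensor

x = QTensor([
    24., 13., 15., 4., 3., 8., 11., 3., 6., 15., 24., 13., 15., 3., 3., 8., 7.,
    3., 6., 11.
])
x = x.reshape([2, 5, 1, 2])
y_small = tensor.topK(x, 3, 1, if_descent=False)   # 3 smallest entries along axis 1
print(y_small.shape)
# [2, 3, 1, 2]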

argtopK

pyvqnet.tensor.argtopK(t, k, axis=-1, if_descent=True)

Return the index of the k largest elements along the given axis of the input tensor.

If if_descent is False, the indices of the k smallest elements are returned instead.

Parameters:
  • t – input QTensor

  • k – number of largest (or smallest) elements to return

  • axis – sort axis, default = -1, the last axis

  • if_descent – sort order, defaults to True (descending)

Returns:

A new QTensor

Examples:

from pyvqnet.tensor import tensor, QTensor
x = QTensor([
    24., 13., 15., 4., 3., 8., 11., 3., 6., 15., 24., 13., 15., 3., 3., 8., 7.,
    3., 6., 11.
])
x= x.reshape([2, 5, 1, 2])
x.requires_grad = True
y = tensor.argtopK(x, 3, 1)
print(y)
# [
# [[[0, 4]],
# [[1, 0]],
# [[3, 2]]],
# [[[0, 0]],
# [[1, 4]],
# [[3, 2]]]
# ]

add

pyvqnet.tensor.add(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise adds two QTensors, equivalent to t1 + t2.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 2, 3])
t2 = QTensor([4, 5, 6])
x = tensor.add(t1, t2)
print(x)

# [5, 7, 9]

sub

pyvqnet.tensor.sub(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise subtracts two QTensors, equivalent to t1 - t2.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 2, 3])
t2 = QTensor([4, 5, 6])
x = tensor.sub(t1, t2)
print(x)

# [-3, -3, -3]

mul

pyvqnet.tensor.mul(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise multiplies two QTensors, equivalent to t1 * t2.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 2, 3])
t2 = QTensor([4, 5, 6])
x = tensor.mul(t1, t2)
print(x)

# [4, 10, 18]

divide

pyvqnet.tensor.divide(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise divides two QTensors, equivalent to t1 / t2.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 2, 3])
t2 = QTensor([4, 5, 6])
x = tensor.divide(t1, t2)
print(x)

# [0.2500000, 0.4000000, 0.5000000]

sums

pyvqnet.tensor.sums(t: pyvqnet.tensor.QTensor, axis: int | None = None, keepdims=False)

Sums all the elements in the QTensor along the given axis. If axis = None, sums all the elements in the QTensor.

Parameters:
  • t – input QTensor

  • axis – axis along which to sum, defaults to None

  • keepdims – whether the output tensor has dim retained or not, defaults to False

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor(([1, 2, 3], [4, 5, 6]))
x = tensor.sums(t)
print(x)

# [21]
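
When axis is given, only that dimension is reduced, and keepdims=True keeps it with size 1; a minimal sketch on the same input (the sums are simple arithmetic, printed formatting may differ):

from pyvqnet.tensor import tensor, QTensor

t = QTensor(([1, 2, 3], [4, 5, 6]))
x = tensor.sums(t, axis=1)                 # row sums: 1+2+3 and 4+5+6
print(x)
# [6, 15]
y = tensor.sums(t, axis=1, keepdims=True)  # the reduced axis is kept with size 1
print(y.shape)
# [2, 1]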

cumsum

pyvqnet.tensor.cumsum(t, axis=-1)

Return the cumulative sum of input elements in the dimension axis.

Parameters:
  • t – the input QTensor

  • axis – the axis along which to accumulate, defaults to -1 (the last axis)

Returns:

output QTensor.

Example:

from pyvqnet.tensor import tensor, QTensor
t = QTensor(([1, 2, 3], [4, 5, 6]))
x = tensor.cumsum(t,-1)
print(x)
# [
# [1, 3, 6],
# [4, 9, 15]
# ]

mean

pyvqnet.tensor.mean(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False)

Obtain the mean values in the QTensor along the axis.

Parameters:
  • t – the input QTensor.

  • axis – the dimension to reduce.

  • keepdims – whether the output QTensor has dim retained or not, defaults to False.

Returns:

returns the mean value of the input QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([[1, 2, 3], [4, 5, 6.0]])
x = tensor.mean(t, axis=1)
print(x)

# [2., 5.]

median

pyvqnet.tensor.median(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False)

Obtain the median value in the QTensor.

Parameters:
  • t – the input QTensor

  • axis – the axis along which to compute the median, defaults to None

  • keepdims – whether the output QTensor has dim retained or not, defaults to False

Returns:

the median of the values in the input QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1.5219, -1.5212,  0.2202]])
median_a = tensor.median(a)
print(median_a)

# [0.2202000]

b = QTensor([[0.2505, -0.3982, -0.9948,  0.3518, -1.3131],
            [0.3180, -0.6993,  1.0436,  0.0438,  0.2270],
            [-0.2751,  0.7303,  0.2192,  0.3321,  0.2488],
            [1.0778, -1.9510,  0.7048,  0.4742, -0.7125]])
median_b = tensor.median(b,1, False)
print(median_b)

# [-0.3982000, 0.2270000, 0.2488000, 0.4742000]

std

pyvqnet.tensor.std(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False, unbiased=True)

Obtain the standard deviation of the values in the QTensor.

Parameters:
  • t – the input QTensor

  • axis – the axis used to calculate the standard deviation, defaults to None

  • keepdims – whether the output QTensor has dim retained or not, defaults to False

  • unbiased – whether to use Bessel's correction, defaults to True

Returns:

the standard deviation of the values in the input QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[-0.8166, -1.3802, -0.3560]])
std_a = tensor.std(a)
print(std_a)

# [0.5129624]

b = QTensor([[0.2505, -0.3982, -0.9948,  0.3518, -1.3131],
            [0.3180, -0.6993,  1.0436,  0.0438,  0.2270],
            [-0.2751,  0.7303,  0.2192,  0.3321,  0.2488],
            [1.0778, -1.9510,  0.7048,  0.4742, -0.7125]])
std_b = tensor.std(b, 1, False, False)
print(std_b)

# [0.6593542, 0.5583112, 0.3206565, 1.1103367]

var

pyvqnet.tensor.var(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False, unbiased=True)

Obtain the variance in the QTensor.

Parameters:
  • t – the input QTensor.

  • axis – the axis used to calculate the variance, defaults to None.

  • keepdims – whether the output QTensor has dim retained or not, defaults to False.

  • unbiased – whether to use Bessel's correction, defaults to True.

Returns:

the variance of the values in the input QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[-0.8166, -1.3802, -0.3560]])
a_var = tensor.var(a)
print(a_var)

# [0.2631305]

matmul

pyvqnet.tensor.matmul(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Matrix multiplication of two 2d, 3d, or 4d matrices.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = tensor.ones([2,3])
t1.requires_grad = True
t2 = tensor.ones([3,4])
t2.requires_grad = True
t3  = tensor.matmul(t1,t2)
t3.backward(tensor.ones_like(t3))
print(t1.grad)

# [
# [4, 4, 4],
#  [4, 4, 4]
# ]

print(t2.grad)

# [
# [2, 2, 2, 2],
#  [2, 2, 2, 2],
#  [2, 2, 2, 2]
# ]
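
The description also covers 3d and 4d (batched) inputs, which the example does not show. A hedged sketch of a batched product, cross-checked against NumPy via to_numpy (documented earlier in this section):

import numpy as np
from pyvqnet.tensor import tensor

a = tensor.randn([2, 3, 4])
b = tensor.randn([2, 4, 5])
c = tensor.matmul(a, b)    # batched (3d) matrix multiplication
print(c.shape)
# [2, 3, 5]
# cross-check against NumPy on the same data
print(np.allclose(c.to_numpy(), np.matmul(a.to_numpy(), b.to_numpy()), atol=1e-5))
# True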

kron

pyvqnet.tensor.kron(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Computes the Kronecker product of t1 and t2, denoted by \(\otimes\). If t1 is a \((a_0 \times a_1 \times \dots \times a_n)\) tensor and t2 is a \((b_0 \times b_1 \times \dots \times b_n)\) tensor, the result will be a \((a_0*b_0 \times a_1*b_1 \times \dots \times a_n*b_n)\) tensor with the following entries:

\[(\text{input} \otimes \text{other})_{k_0, k_1, \dots, k_n} = \text{input}_{i_0, i_1, \dots, i_n} * \text{other}_{j_0, j_1, \dots, j_n},\]

where \(k_t = i_t * b_t + j_t\) for \(0 \leq t \leq n\). If one tensor has fewer dimensions than the other, it is unsqueezed with leading dimensions of size 1 until both have the same number of dimensions.

Parameters:
  • t1 – The first QTensor.

  • t2 – The second QTensor.

Returns:

Output QTensor .

Example:

from pyvqnet import tensor
a = tensor.arange(1,1+ 24).reshape([2,1,2,3,2])
b = tensor.arange(1,1+ 24).reshape([6,4])

c = tensor.kron(a,b)
print(c)


# [[[[[  1.   2.   3.   4.   2.   4.   6.   8.]
#     [  5.   6.   7.   8.  10.  12.  14.  16.]
#     [  9.  10.  11.  12.  18.  20.  22.  24.]
#     [ 13.  14.  15.  16.  26.  28.  30.  32.]
#     [ 17.  18.  19.  20.  34.  36.  38.  40.]
#     [ 21.  22.  23.  24.  42.  44.  46.  48.]
#     [  3.   6.   9.  12.   4.   8.  12.  16.]
#     [ 15.  18.  21.  24.  20.  24.  28.  32.]
#     [ 27.  30.  33.  36.  36.  40.  44.  48.]
#     [ 39.  42.  45.  48.  52.  56.  60.  64.]
#     [ 51.  54.  57.  60.  68.  72.  76.  80.]
#     [ 63.  66.  69.  72.  84.  88.  92.  96.]
#     [  5.  10.  15.  20.   6.  12.  18.  24.]
#     [ 25.  30.  35.  40.  30.  36.  42.  48.]
#     [ 45.  50.  55.  60.  54.  60.  66.  72.]
#     [ 65.  70.  75.  80.  78.  84.  90.  96.]
#     [ 85.  90.  95. 100. 102. 108. 114. 120.]
#     [105. 110. 115. 120. 126. 132. 138. 144.]]

#    [[  7.  14.  21.  28.   8.  16.  24.  32.]
#     [ 35.  42.  49.  56.  40.  48.  56.  64.]
#     [ 63.  70.  77.  84.  72.  80.  88.  96.]
#     [ 91.  98. 105. 112. 104. 112. 120. 128.]
#     [119. 126. 133. 140. 136. 144. 152. 160.]
#     [147. 154. 161. 168. 168. 176. 184. 192.]
#     [  9.  18.  27.  36.  10.  20.  30.  40.]
#     [ 45.  54.  63.  72.  50.  60.  70.  80.]
#     [ 81.  90.  99. 108.  90. 100. 110. 120.]
#     [117. 126. 135. 144. 130. 140. 150. 160.]
#     [153. 162. 171. 180. 170. 180. 190. 200.]
#     [189. 198. 207. 216. 210. 220. 230. 240.]
#     [ 11.  22.  33.  44.  12.  24.  36.  48.]
#     [ 55.  66.  77.  88.  60.  72.  84.  96.]
#     [ 99. 110. 121. 132. 108. 120. 132. 144.]
#     [143. 154. 165. 176. 156. 168. 180. 192.]
#     [187. 198. 209. 220. 204. 216. 228. 240.]
#     [231. 242. 253. 264. 252. 264. 276. 288.]]]]



#  [[[[ 13.  26.  39.  52.  14.  28.  42.  56.]
#     [ 65.  78.  91. 104.  70.  84.  98. 112.]
#     [117. 130. 143. 156. 126. 140. 154. 168.]
#     [169. 182. 195. 208. 182. 196. 210. 224.]
#     [221. 234. 247. 260. 238. 252. 266. 280.]
#     [273. 286. 299. 312. 294. 308. 322. 336.]
#     [ 15.  30.  45.  60.  16.  32.  48.  64.]
#     [ 75.  90. 105. 120.  80.  96. 112. 128.]
#     [135. 150. 165. 180. 144. 160. 176. 192.]
#     [195. 210. 225. 240. 208. 224. 240. 256.]
#     [255. 270. 285. 300. 272. 288. 304. 320.]
#     [315. 330. 345. 360. 336. 352. 368. 384.]
#     [ 17.  34.  51.  68.  18.  36.  54.  72.]
#     [ 85. 102. 119. 136.  90. 108. 126. 144.]
#     [153. 170. 187. 204. 162. 180. 198. 216.]
#     [221. 238. 255. 272. 234. 252. 270. 288.]
#     [289. 306. 323. 340. 306. 324. 342. 360.]
#     [357. 374. 391. 408. 378. 396. 414. 432.]]

#    [[ 19.  38.  57.  76.  20.  40.  60.  80.]
#     [ 95. 114. 133. 152. 100. 120. 140. 160.]
#     [171. 190. 209. 228. 180. 200. 220. 240.]
#     [247. 266. 285. 304. 260. 280. 300. 320.]
#     [323. 342. 361. 380. 340. 360. 380. 400.]
#     [399. 418. 437. 456. 420. 440. 460. 480.]
#     [ 21.  42.  63.  84.  22.  44.  66.  88.]
#     [105. 126. 147. 168. 110. 132. 154. 176.]
#     [189. 210. 231. 252. 198. 220. 242. 264.]
#     [273. 294. 315. 336. 286. 308. 330. 352.]
#     [357. 378. 399. 420. 374. 396. 418. 440.]
#     [441. 462. 483. 504. 462. 484. 506. 528.]
#     [ 23.  46.  69.  92.  24.  48.  72.  96.]
#     [115. 138. 161. 184. 120. 144. 168. 192.]
#     [207. 230. 253. 276. 216. 240. 264. 288.]
#     [299. 322. 345. 368. 312. 336. 360. 384.]
#     [391. 414. 437. 460. 408. 432. 456. 480.]
#     [483. 506. 529. 552. 504. 528. 552. 576.]]]]]
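
For 2-D inputs the result should agree with numpy.kron; a small hedged cross-check using to_numpy:

import numpy as np
from pyvqnet import tensor

a = tensor.arange(1, 7).reshape([2, 3])
b = tensor.arange(1, 5).reshape([2, 2])
c = tensor.kron(a, b)
print(c.shape)
# [4, 6]
# for 2-D inputs the entries should match NumPy's Kronecker product
print(np.allclose(c.to_numpy(), np.kron(a.to_numpy(), b.to_numpy())))
# True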

einsum

pyvqnet.tensor.einsum(equation, *operands)

Sum the products of the elements of the input operands along the specified dimension using a notation based on the Einstein summation convention.

Note

This function uses opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/) to speed up the computation or reduce memory consumption by optimizing the contraction order. This optimization occurs when there are at least three inputs.

For more complex contractions, opt_einsum can also be imported and used directly on QTensor.

Parameters:
  • equation – the subscript string of the Einstein summation.

  • operands – the tensors on which the Einstein summation is to be computed.

Returns:

The QTensor result.

Example:

from pyvqnet import tensor

vqneta = tensor.randn((3, 5, 4))
vqnetl = tensor.randn((2, 5))
vqnetr = tensor.randn((2, 4))
z = tensor.einsum('bn,anm,bm->ba',  vqnetl, vqneta,vqnetr)
print(z.shape)
#[2, 3]
vqneta = tensor.randn((20,30,40,50))
z = tensor.einsum('...ij->...ji', vqneta)
print(z.shape)
#[20, 30, 50, 40]
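
The same contraction can be checked against numpy.einsum on the underlying data; a minimal hedged sketch:

import numpy as np
from pyvqnet import tensor

vqneta = tensor.randn((3, 5, 4))
vqnetl = tensor.randn((2, 5))
vqnetr = tensor.randn((2, 4))
z = tensor.einsum('bn,anm,bm->ba', vqnetl, vqneta, vqnetr)
ref = np.einsum('bn,anm,bm->ba', vqnetl.to_numpy(), vqneta.to_numpy(), vqnetr.to_numpy())
print(np.allclose(z.to_numpy(), ref, atol=1e-4))
# True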

reciprocal

pyvqnet.tensor.reciprocal(t)

Compute the element-wise reciprocal of the QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.arange(1, 10, 1)
u = tensor.reciprocal(t)
print(u)

#[1, 0.5000000, 0.3333333, 0.2500000, 0.2000000, 0.1666667, 0.1428571, 0.1250000, 0.1111111]

sign

pyvqnet.tensor.sign(t)

Return a new QTensor with the signs of the elements of the input. The sign function returns -1 if t < 0, 0 if t == 0, and 1 if t > 0.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.arange(-5, 5, 1)
u = tensor.sign(t)
print(u)

# [-1, -1, -1, -1, -1, 0, 1, 1, 1, 1]

neg

pyvqnet.tensor.neg(t: pyvqnet.tensor.QTensor)

Unary negation of QTensor elements.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.neg(t)
print(x)

# [-1, -2, -3]

trace

pyvqnet.tensor.trace(t, k: int = 0)

Return the sum of the elements of the diagonal of the input 2-D matrix.

Parameters:
  • t – input 2-D QTensor

  • k – offset (0 for the main diagonal, positive for the nth diagonal above the main one, negative for the nth diagonal below the main one)

Returns:

the sum of the elements of the diagonal of the input 2-D matrix

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.randn([4,4])
for k in range(-3, 4):
    u=tensor.trace(t,k=k)
    print(u)

# 0.07717618346214294
# -1.9287869930267334
# 0.6111435890197754
# 2.8094992637634277
# 0.6388946771621704
# -1.3400784730911255
# 0.26980453729629517

exp

pyvqnet.tensor.exp(t: pyvqnet.tensor.QTensor)

Applies exponential function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.exp(t)
print(x)

# [2.7182817, 7.3890562, 20.0855369]

acos

pyvqnet.tensor.acos(t: pyvqnet.tensor.QTensor)

Compute the element-wise inverse cosine of the QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(36).reshape(2,6,3).astype(np.float32)
a =a/100
A = QTensor(a,requires_grad = True)
y = tensor.acos(A)
print(y)

# [
# [[1.5707964, 1.5607961, 1.5507950],
#  [1.5407919, 1.5307857, 1.5207754],
#  [1.5107603, 1.5007390, 1.4907107],
#  [1.4806744, 1.4706289, 1.4605733],
#  [1.4505064, 1.4404273, 1.4303349],
#  [1.4202280, 1.4101057, 1.3999666]],
# [[1.3898098, 1.3796341, 1.3694384],
#  [1.3592213, 1.3489819, 1.3387187],
#  [1.3284305, 1.3181161, 1.3077742],
#  [1.2974033, 1.2870022, 1.2765695],
#  [1.2661036, 1.2556033, 1.2450669],
#  [1.2344928, 1.2238795, 1.2132252]]
# ]

asin

pyvqnet.tensor.asin(t: pyvqnet.tensor.QTensor)

Compute the element-wise inverse sine of the QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.arange(-1, 1, .5)
u = tensor.asin(t)
print(u)

#[-1.5707964, -0.5235988, 0, 0.5235988]

atan

pyvqnet.tensor.atan(t: pyvqnet.tensor.QTensor)

Compute the element-wise inverse tangent of the QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = tensor.arange(-1, 1, .5)
u = tensor.atan(t)
print(u)

# [-0.7853981, -0.4636476, 0.0000, 0.4636476]

sin

pyvqnet.tensor.sin(t: pyvqnet.tensor.QTensor)

Applies sine function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.sin(t)
print(x)

# [0.8414709, 0.9092974, 0.1411200]

cos

pyvqnet.tensor.cos(t: pyvqnet.tensor.QTensor)

Applies cosine function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.cos(t)
print(x)

# [0.5403022, -0.4161468, -0.9899924]

tan

pyvqnet.tensor.tan(t: pyvqnet.tensor.QTensor)

Applies tangent function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.tan(t)
print(x)

# [1.5574077, -2.1850397, -0.1425465]

tanh

pyvqnet.tensor.tanh(t: pyvqnet.tensor.QTensor)

Applies hyperbolic tangent function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.tanh(t)
print(x)

# [0.7615941, 0.9640275, 0.9950547]

sinh

pyvqnet.tensor.sinh(t: pyvqnet.tensor.QTensor)

Applies hyperbolic sine function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.sinh(t)
print(x)

# [1.1752011, 3.6268603, 10.0178747]

cosh

pyvqnet.tensor.cosh(t: pyvqnet.tensor.QTensor)

Applies hyperbolic cosine function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.cosh(t)
print(x)

# [1.5430806, 3.7621955, 10.0676622]

power

pyvqnet.tensor.power(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Raises first QTensor to the power of second QTensor.

Parameters:
  • t1 – first QTensor

  • t2 – second QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 4, 3])
t2 = QTensor([2, 5, 6])
x = tensor.power(t1, t2)
print(x)

# [1, 1024, 729]

abs

pyvqnet.tensor.abs(t: pyvqnet.tensor.QTensor)

Applies abs function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, -2, 3])
x = tensor.abs(t)
print(x)

# [1, 2, 3]

log

pyvqnet.tensor.log(t: pyvqnet.tensor.QTensor)

Applies log (ln) function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.log(t)
print(x)

# [0, 0.6931471, 1.0986123]

log_softmax

pyvqnet.tensor.log_softmax(t, axis=-1)

Sequentially applies the softmax function and then the log function along the given axis.

Parameters:
  • t – input QTensor.

  • axis – the axis used to calculate softmax, the default is -1.

Returns:

Output QTensor.

Example:

from pyvqnet import tensor
output = tensor.arange(1,13).reshape([3,2,2])
t = tensor.log_softmax(output,1)
print(t)
# [
# [[-2.1269281, -2.1269281],
#  [-0.1269280, -0.1269280]],
# [[-2.1269281, -2.1269281],
#  [-0.1269280, -0.1269280]],
# [[-2.1269281, -2.1269281],
#  [-0.1269280, -0.1269280]]
# ]
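
Equivalently, the result is log(softmax(t)) along the chosen axis; a hedged NumPy cross-check on the same input:

import numpy as np
from pyvqnet import tensor

output = tensor.arange(1, 13).reshape([3, 2, 2])
t = tensor.log_softmax(output, 1)

x = output.to_numpy()
e = np.exp(x)
ref = np.log(e / e.sum(axis=1, keepdims=True))   # softmax followed by log along axis 1
print(np.allclose(t.to_numpy(), ref, atol=1e-5))
# True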

sqrt

pyvqnet.tensor.sqrt(t: pyvqnet.tensor.QTensor)

Applies sqrt function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.sqrt(t)
print(x)

# [1, 1.4142135, 1.7320507]

square

pyvqnet.tensor.square(t: pyvqnet.tensor.QTensor)

Applies square function to all the elements of the input QTensor.

Parameters:

t – input QTensor

Returns:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.square(t)
print(x)
# [1, 4, 9]

eigh

pyvqnet.tensor.eigh(t: QTensor)

Returns the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or real symmetric matrix.

Returns two objects: a 1-D array containing the eigenvalues of t, and a 2-D square matrix of the corresponding eigenvectors (in columns).

Parameters:

t – input QTensor

Returns:

the eigenvalues and eigenvectors of t

Examples:

import numpy as np
import pyvqnet
from pyvqnet import tensor


def generate_random_symmetric_matrix(n):
        A = pyvqnet.tensor.randn((n, n))
        A = A + A.transpose()
        return A

n = 3
symmetric_matrix = generate_random_symmetric_matrix(n)

evs,vecs = pyvqnet.tensor.eigh(symmetric_matrix)
print(evs)
print(vecs)
# [-4.0669565,-1.9191254,-1.3642329]
# <QTensor [3] DEV_CPU kfloat32>

# [[-0.9889652, 0.0325959,-0.1445187],
#  [ 0.0912495, 0.9025176,-0.4208745],
#  [ 0.1167119,-0.4294176,-0.8955328]]
# <QTensor [3, 3] DEV_CPU kfloat32>

frobenius_norm

pyvqnet.tensor.frobenius_norm(t: QTensor, axis: int = None, keepdims=False)

Computes the Frobenius norm of the input QTensor along the axis set by axis. If axis is None, returns the Frobenius norm of all elements.

Parameters:
  • t – input QTensor.

  • axis – the axis used to compute the Frobenius norm, the default is None.

  • keepdims – whether the output tensor preserves the reduced dimension, the default is False.

Returns:

Output a QTensor or the Frobenius norm value.

Example:

from pyvqnet import tensor,QTensor
t = QTensor([[[1., 2., 3.], [4., 5., 6.]], [[7., 8., 9.], [10., 11., 12.]],
            [[13., 14., 15.], [16., 17., 18.]]])
t.requires_grad = True
result = tensor.frobenius_norm(t, -2, False)
print(result)
# [
# [4.1231055, 5.3851647, 6.7082038],
#  [12.2065554, 13.6014709, 15],
#  [20.6155281, 22.0227146, 23.4307499]
# ]

Logic Functions

maximum

pyvqnet.tensor.maximum(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise maximum of two tensors.

参数:
  • t1 – first QTensor

  • t2 – second QTensor

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([6, 4, 3])
t2 = QTensor([2, 5, 7])
x = tensor.maximum(t1, t2)
print(x)

# [6, 5, 7]

minimum

pyvqnet.tensor.minimum(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Element-wise minimum of two tensors.

参数:
  • t1 – first QTensor

  • t2 – second QTensor

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([6, 4, 3])
t2 = QTensor([2, 5, 7])
x = tensor.minimum(t1, t2)
print(x)

# [2, 4, 3]

min

pyvqnet.tensor.min(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False)

Return the minimum elements of the input QTensor along the given axis. If axis is None, return the minimum over all elements.

参数:
  • t – input QTensor

  • axis – axis used for min, defaults to None

  • keepdims – whether the output tensor retains the reduced dimension, defaults to False

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([[1, 2, 3], [4, 5, 6]])
x = tensor.min(t, axis=1, keepdims=True)
print(x)

# [
# [1],
#  [4]
# ]

max

pyvqnet.tensor.max(t: pyvqnet.tensor.QTensor, axis=None, keepdims=False)

Return the maximum elements of the input QTensor along the given axis. If axis is None, return the maximum over all elements.

参数:
  • t – input QTensor

  • axis – axis used for max, defaults to None

  • keepdims – whether the output tensor retains the reduced dimension, defaults to False

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([[1, 2, 3], [4, 5, 6]])
x = tensor.max(t, axis=1, keepdims=True)
print(x)

# [[3],
# [6]]

clip

pyvqnet.tensor.clip(t: pyvqnet.tensor.QTensor, min_val, max_val)

Clips input QTensor to minimum and maximum value.

参数:
  • t – input QTensor

  • min_val – minimum value

  • max_val – maximum value

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([2, 4, 6])
x = tensor.clip(t, 3, 8)
print(x)

# [3, 4, 6]

where

pyvqnet.tensor.where(condition: pyvqnet.tensor.QTensor, t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Return elements chosen from t1 or t2 depending on condition.

参数:
  • condition – condition QTensor; must have data type kbool.

  • t1 – QTensor from which to take elements if condition is met, defaults to None

  • t2 – QTensor from which to take elements if condition is not met, defaults to None

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t1 = QTensor([1, 2, 3])
t2 = QTensor([4, 5, 6])
x = tensor.where(t1 < 2, t1, t2)
print(x)

# [1, 5, 6]

nonzero

pyvqnet.tensor.nonzero(t)

Return a QTensor containing the indices of nonzero elements.

参数:

t – input QTensor

返回:

output QTensor contains indices of nonzero elements.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([[0.6, 0.0, 0.0, 0.0],
             [0.0, 0.4, 0.0, 0.0],
             [0.0, 0.0, 1.2, 0.0],
             [0.0, 0.0, 0.0,-0.4]])
t = tensor.nonzero(t)
print(t)
# [
# [0, 0],
# [1, 1],
# [2, 2],
# [3, 3]
# ]

isfinite

pyvqnet.tensor.isfinite(t)

Test element-wise for finiteness (not infinity and not NaN).

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = QTensor([1, float('inf'), 2, float('-inf'), float('nan')])
flag = tensor.isfinite(t)
print(flag)

#[ True False  True False False]

isinf

pyvqnet.tensor.isinf(t)

Test element-wise for positive or negative infinity.

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = QTensor([1, float('inf'), 2, float('-inf'), float('nan')])
flag = tensor.isinf(t)
print(flag)

# [False  True False  True False]

isnan

pyvqnet.tensor.isnan(t)

Test element-wise for NaN.

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = QTensor([1, float('inf'), 2, float('-inf'), float('nan')])
flag = tensor.isnan(t)
print(flag)

# [False False False False  True]

isneginf

pyvqnet.tensor.isneginf(t)

Test element-wise for negative infinity.

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = QTensor([1, float('inf'), 2, float('-inf'), float('nan')])
flag = tensor.isneginf(t)
print(flag)

# [False False False  True False]

isposinf

pyvqnet.tensor.isposinf(t)

Test element-wise for positive infinity.

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

t = QTensor([1, float('inf'), 2, float('-inf'), float('nan')])
flag = tensor.isposinf(t)
print(flag)

# [False  True False False False]

logical_and

pyvqnet.tensor.logical_and(t1, t2)

Compute the truth value of t1 and t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([0, 1, 10, 0])
b = QTensor([4, 0, 1, 0])
flag = tensor.logical_and(a,b)
print(flag)

# [False False  True False]

logical_or

pyvqnet.tensor.logical_or(t1, t2)

Compute the truth value of t1 or t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([0, 1, 10, 0])
b = QTensor([4, 0, 1, 0])
flag = tensor.logical_or(a,b)
print(flag)

# [ True  True  True False]

logical_not

pyvqnet.tensor.logical_not(t)

Compute the truth value of not t element-wise.

参数:

t – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([0, 1, 10, 0])
flag = tensor.logical_not(a)
print(flag)

# [ True False False  True]

logical_xor

pyvqnet.tensor.logical_xor(t1, t2)

Compute the truth value of t1 xor t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([0, 1, 10, 0])
b = QTensor([4, 0, 1, 0])
flag = tensor.logical_xor(a,b)
print(flag)

# [ True  True False False]

greater

pyvqnet.tensor.greater(t1, t2)

Return the truth value of t1 > t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.greater(a,b)
print(flag)

# [[False  True]
#  [False False]]

greater_equal

pyvqnet.tensor.greater_equal(t1, t2)

Return the truth value of t1 >= t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.greater_equal(a,b)
print(flag)

#[[ True  True]
# [False  True]]

less

pyvqnet.tensor.less(t1, t2)

Return the truth value of t1 < t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.less(a,b)
print(flag)

#[[False False]
# [ True False]]

less_equal

pyvqnet.tensor.less_equal(t1, t2)

Return the truth value of t1 <= t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.less_equal(a,b)
print(flag)

# [[ True False]
#  [ True  True]]

equal

pyvqnet.tensor.equal(t1, t2)

Return the truth value of t1 == t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.equal(a,b)
print(flag)

#[[ True False]
# [False  True]]

not_equal

pyvqnet.tensor.not_equal(t1, t2)

Return the truth value of t1 != t2 element-wise.

参数:
  • t1 – input QTensor

  • t2 – input QTensor

返回:

Output QTensor, which returns True when the corresponding position element meets the condition, otherwise returns False.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

a = QTensor([[1, 2], [3, 4]])
b = QTensor([[1, 1], [4, 4]])
flag = tensor.not_equal(a,b)
print(flag)


#[[False  True]
# [ True False]]

bitwise_and

pyvqnet.tensor.bitwise_and(t1, t2)

Computes the bitwise AND of two QTensor elements.

参数:
  • t1 – Input QTensor t1. Only integers or booleans are valid inputs.

  • t2 – Input QTensor t2. Only integers or booleans are valid inputs.

返回:

result QTensor

Example:

from pyvqnet.tensor import *
import numpy as np
from pyvqnet.dtype import *
powers_of_two = 1 << np.arange(14, dtype=np.int64)[::-1]
samples = tensor.QTensor([23],dtype=kint8)
samples = samples.unsqueeze(-1)
states_sampled_base_ten = samples & tensor.QTensor(powers_of_two,dtype = samples.dtype, device = samples.device)
print(states_sampled_base_ten)
#[[ 0, 0, 0, 0, 0, 0, 0, 0, 0,16, 0, 4, 2, 1]]
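
The example above relies on the overloaded & operator. Assuming it maps to the documented function, an explicit call would look like the following minimal sketch (expected values derived by hand, not library output):

from pyvqnet.tensor import tensor, QTensor
from pyvqnet.dtype import kint64

a = QTensor([12, 5], dtype=kint64)
b = QTensor([10, 3], dtype=kint64)
print(tensor.bitwise_and(a, b))
# expected: [8, 1]   (1100 & 1010 = 1000, 101 & 011 = 001)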

Matrix Operations

select

pyvqnet.tensor.select(t: pyvqnet.tensor.QTensor, index)

Return a slice of the input QTensor selected by the given per-axis index strings. The result holds the same values as the corresponding slicing operation.

参数:
  • t – input QTensor

  • index – a list of strings, one per axis, specifying the slice to take along that dimension

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
t = QTensor(np.arange(1,25).reshape(2,3,4))

indx = [":", "0", ":"]
t.requires_grad = True
t.zero_grad()
ts = tensor.select(t,indx)
print(ts)
# [
# [[1., 2., 3., 4.]],
# [[13., 14., 15., 16.]]
# ]

broadcast

pyvqnet.tensor.broadcast(t1: pyvqnet.tensor.QTensor, t2: pyvqnet.tensor.QTensor)

Subject to certain constraints, the smaller array is "broadcast" across the larger array so that the two have compatible shapes. This interface supports automatic differentiation of the input tensors.

Reference https://numpy.org/doc/stable/user/basics.broadcasting.html

参数:
  • t1 – input QTensor 1

  • t2 – input QTensor 2

Return t11:

t1 with its new broadcast shape.

Return t22:

t2 with its new broadcast shape.

Example:

from pyvqnet.tensor import tensor
t1 = tensor.ones([5, 4])
t2 = tensor.ones([4])

t11, t22 = tensor.broadcast(t1, t2)

print(t11.shape)
print(t22.shape)

t1 = tensor.ones([5, 4])
t2 = tensor.ones([1])

t11, t22 = tensor.broadcast(t1, t2)

print(t11.shape)
print(t22.shape)

t1 = tensor.ones([5, 4])
t2 = tensor.ones([2, 1, 4])

t11, t22 = tensor.broadcast(t1, t2)

print(t11.shape)
print(t22.shape)


# [5, 4]
# [5, 4]
# [5, 4]
# [5, 4]
# [2, 5, 4]
# [2, 5, 4]

concatenate

pyvqnet.tensor.concatenate(args: list, axis=1)

Concatenate the input QTensor along the axis and return a new QTensor.

参数:
  • args – list consist of input QTensors

  • axis – dimension along which to concatenate. Has to be between 0 and the number of dimensions of the concatenated tensors.

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
x = QTensor([[1, 2, 3],[4,5,6]], requires_grad=True)
y = 1-x
x = tensor.concatenate([x,y],1)
print(x)

# [
# [1, 2, 3, 0, -1, -2],
# [4, 5, 6, -3, -4, -5]
# ]

stack

pyvqnet.tensor.stack(QTensors: list, axis)

Join a sequence of tensors along a new axis and return a new QTensor.

参数:
  • QTensors – list contains QTensors

  • axis – dimension to insert. Has to be between 0 and the number of dimensions of stacked tensors.

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape(R, C).astype(np.float32)
t11 = QTensor(a)
t22 = QTensor(a)
t33 = QTensor(a)
rlt1 = tensor.stack([t11,t22,t33],2)
print(rlt1)

# [
# [[0, 0, 0],
#  [1, 1, 1],
#  [2, 2, 2],
#  [3, 3, 3]],
# [[4, 4, 4],
#  [5, 5, 5],
#  [6, 6, 6],
#  [7, 7, 7]],
# [[8, 8, 8],
#  [9, 9, 9],
#  [10, 10, 10],
#  [11, 11, 11]]
# ]

permute

pyvqnet.tensor.permute(t: pyvqnet.tensor.QTensor, dim: list)

Reverse or permute the axes of a tensor. If dim is None, reverse the dimensions.

参数:
  • t – input QTensor

  • dim – the new order of the dimensions (list of integers)

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape([2,2,3]).astype(np.float32)
t = QTensor(a)
tt = tensor.permute(t,[2,0,1])
print(tt)

# [
# [[0, 3],
#  [6, 9]],
# [[1, 4],
#  [7, 10]],
# [[2, 5],
#  [8, 11]]
# ]

transpose

pyvqnet.tensor.transpose(t: pyvqnet.tensor.QTensor, dim: list)

Transpose the axes of a tensor. If dim is None, reverse the dimensions. This function is the same as permute.

参数:
  • t – input QTensor

  • dim – the new order of the dimensions (list of integers)

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape([2,2,3]).astype(np.float32)
t = QTensor(a)
tt = tensor.transpose(t,[2,0,1])
print(tt)

# [
# [[0, 3],
#  [6, 9]],
# [[1, 4],
#  [7, 10]],
# [[2, 5],
#  [8, 11]]
# ]

tile

pyvqnet.tensor.tile(t: pyvqnet.tensor.QTensor, reps: list)

Construct a QTensor by repeating QTensor the number of times given by reps.

If reps has length d, the result QTensor will have dimension of max(d, t.ndim).

If t.ndim < d, t is promoted to d dimensions by prepending new axes, so a shape (3,) tensor is promoted to (1, 3) for 2-D replication, or to (1, 1, 3) for 3-D replication (a sketch of this case follows the example below).

If t.ndim > d, reps is promoted to length t.ndim by prepending 1's.

Thus, for a t of shape (2, 3, 4, 5), a reps of (4, 3) is treated as (1, 1, 4, 3).

参数:
  • t – input QTensor

  • reps – the number of repetitions per dimension.

返回:

a new QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor

import numpy as np
a = np.arange(6).reshape(2,3).astype(np.float32)
A = QTensor(a)
reps = [2,2]
B = tensor.tile(A,reps)
print(B)

# [
# [0, 1, 2, 0, 1, 2],
# [3, 4, 5, 3, 4, 5],
# [0, 1, 2, 0, 1, 2],
# [3, 4, 5, 3, 4, 5]
# ]
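
The example above covers reps with the same length as t.ndim. As a minimal sketch of the promotion rule described earlier (assuming numpy-compatible tiling semantics), a 1-D tensor with a length-2 reps is first treated as shape (1, 3):

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np

a = QTensor(np.arange(3).astype(np.float32))   # shape [3]
b = tensor.tile(a, [2, 2])                     # a is treated as shape [1, 3]
print(b)

# expected, per the promotion rule above:
# [
# [0, 1, 2, 0, 1, 2],
# [0, 1, 2, 0, 1, 2]
# ]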

squeeze

pyvqnet.tensor.squeeze(t: pyvqnet.tensor.QTensor, axis: int = -1)

Remove axes of length one.

参数:
  • t – input QTensor

  • axis – the axis to squeeze; if axis = -1, squeeze all dimensions of size 1.

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(6).reshape(1,6,1).astype(np.float32)
A = QTensor(a)
AA = tensor.squeeze(A,0)
print(AA)

# [
# [0],
# [1],
# [2],
# [3],
# [4],
# [5]
# ]

unsqueeze

pyvqnet.tensor.unsqueeze(t: pyvqnet.tensor.QTensor, axis: int = 0)

Return a new QTensor with a dimension of size one inserted at the specified position.

参数:
  • t – input QTensor

  • axis – the position at which to insert the dimension of size one.

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(24).reshape(2,1,1,4,3).astype(np.float32)
A = QTensor(a)
AA = tensor.unsqueeze(A,1)
print(AA)

# [
# [[[[[0, 1, 2],
#  [3, 4, 5],
#  [6, 7, 8],
#  [9, 10, 11]]]]],
# [[[[[12, 13, 14],
#  [15, 16, 17],
#  [18, 19, 20],
#  [21, 22, 23]]]]]
# ]

moveaxis

pyvqnet.tensor.moveaxis(t, source: int, destination: int)

Move dimensions of t from positions in source to positions in destination.

Other dimensions of t that are not explicitly moved retain their original order and appear at positions not specified in destination.

参数:
  • t – Input QTensor.

  • source – (integer or tuple of integers) The original positions of the dimensions to be moved. These positions must be unique.

  • destination – (integer or tuple of integers) The destination positions for each original dimension. These positions must also be unique.

返回:

New QTensor

Example:

from pyvqnet import QTensor,tensor
a = tensor.arange(0,24).reshape((2,3,4))
b = tensor.moveaxis(a,(1, 2), (0, 1))
print(b.shape)
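# expected: [3, 4, 2]  (axes 1 and 2 of the original [2, 3, 4] shape moved to the front)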

swapaxis

pyvqnet.tensor.swapaxis(t, axis1: int, axis2: int)

Interchange two axes of a tensor. The given dimensions axis1 and axis2 are swapped.

参数:
  • t – input QTensor

  • axis1 – First axis.

  • axis2 – Second axis.

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
a = np.arange(24).reshape(2,3,4).astype(np.float32)
A = QTensor(a)
AA = tensor.swapaxis(A,2,1)
print(AA)

# [
# [[0, 4, 8],
#  [1, 5, 9],
#  [2, 6, 10],
#  [3, 7, 11]],
# [[12, 16, 20],
#  [13, 17, 21],
#  [14, 18, 22],
#  [15, 19, 23]]
# ]

masked_fill

pyvqnet.tensor.masked_fill(t, mask, value)

Where mask == 1, fill with the specified value. The shape of the mask must be broadcastable to the shape of the input QTensor.

参数:
  • t – input QTensor

  • mask – A QTensor

  • value – specified value

返回:

A QTensor

Examples:

from pyvqnet.tensor import tensor
import numpy as np
a = tensor.ones([2, 2, 2, 2])
mask = np.array([0, 0, 1, 1]).reshape([2, 2])  # deterministic mask so the printed output below is reproducible
b = tensor.QTensor(mask==1)
c = tensor.masked_fill(a, b, 13)
print(c)
# [
# [[[1, 1],
#  [13, 13]],
# [[1, 1],
#  [13, 13]]],
# [[[1, 1],
#  [13, 13]],
# [[1, 1],
#  [13, 13]]]
# ]

flatten

pyvqnet.tensor.flatten(t: pyvqnet.tensor.QTensor, start: int = 0, end: int = -1)

Flatten QTensor from dim start to dim end.

参数:
  • t – input QTensor

  • start – start dimension, defaults to 0 (the first dimension).

  • end – end dimension, defaults to -1 (the last dimension).

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
t = QTensor([1, 2, 3])
x = tensor.flatten(t)
print(x)

# [1, 2, 3]

reshape

pyvqnet.tensor.reshape(t: pyvqnet.tensor.QTensor, new_shape)

Change the QTensor's shape and return a new QTensor with that shape.

参数:
  • t – input QTensor.

  • new_shape – new shape

返回:

a new shape QTensor.

Example:

from pyvqnet.tensor import tensor
from pyvqnet.tensor import QTensor
import numpy as np
R, C = 3, 4
a = np.arange(R * C).reshape(R, C).astype(np.float32)
t = QTensor(a)
reshape_t = tensor.reshape(t, [C, R])
print(reshape_t)
# [
# [0, 1, 2],
# [3, 4, 5],
# [6, 7, 8],
# [9, 10, 11]
# ]

flip

pyvqnet.tensor.flip(t, flip_dims)

Reverses the QTensor along the specified axis, returning a new tensor.

参数:
  • t – Input QTensor.

  • flip_dims – The axis or list of axes to flip.

返回:

Output QTensor.

Example:

from pyvqnet import tensor
t = tensor.arange(1, 3 * 2 *2 * 2 + 1).reshape([3, 2, 2, 2])
t.requires_grad = True
y = tensor.flip(t, [0, -1])
print(y)
# [
# [[[18, 17],
#  [20, 19]],
# [[22, 21],
#  [24, 23]]],
# [[[10, 9],
#  [12, 11]],
# [[14, 13],
#  [16, 15]]],
# [[[2, 1],
#  [4, 3]],
# [[6, 5],
#  [8, 7]]]
# ]

gather

pyvqnet.tensor.gather(t, dim, index)

Collect values along the axis specified by ‘dim’.

For 3-D tensors, the output is specified by:

\[\begin{split}out[i][j][k] = t[index[i][j][k]][j][k] , if dim == 0 \\ out[i][j][k] = t[i][index[i][j][k]][k] , if dim == 1 \\ out[i][j][k] = t[i][j][index[i][j][k]] , if dim == 2 \\\end{split}\]
参数:
  • t – Input QTensor.

  • dim – The aggregation axis.

  • index – Index QTensor; must have the same number of dimensions as the input.

返回:

the aggregated result

Example:

from pyvqnet.tensor import gather,QTensor,tensor
import numpy as np
np.random.seed(25)
npx = np.random.randn( 3, 4,6)
npindex = np.array([2,3,1,2,1,2,3,0,2,3,1,2,3,2,0,1]).reshape([2,2,4]).astype(np.int64)

x1 = QTensor(npx)
indices1 =  QTensor(npindex)
x1.requires_grad = True
y1 = gather(x1,1,indices1)
y1.backward(tensor.arange(0,y1.numel()).reshape(y1.shape))

print(y1)
# [
# [[2.1523438, -0.4196777, -2.0527344, -1.2460938],
#  [-0.6201172, -1.3349609, 2.2949219, -0.5913086]],
# [[0.2170410, -0.7055664, 1.6074219, -1.9394531],
#  [0.2430420, -0.6333008, 0.5332031, 0.3881836]]
# ]
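
For intuition, a minimal 2-D sketch of the same indexing rule (for dim == 1 the rule reduces to out[i][j] = t[i][index[i][j]]); the expected values follow directly from that rule, not from recorded library output:

from pyvqnet.tensor import gather, QTensor
import numpy as np

t = QTensor(np.array([[1., 2.], [3., 4.]], dtype=np.float32))
idx = QTensor(np.array([[0, 0], [1, 0]], dtype=np.int64))
print(gather(t, 1, idx))

# expected, per the formula:
# [
# [1, 1],
# [4, 3]
# ]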

scatter

pyvqnet.tensor.scatter(input, dim, index, src)

Writes all values from the tensor src into input at the positions specified by the index tensor.

For 3-D tensors, the output is specified by:

\[\begin{split}input[index[i][j][k]][j][k] = src[i][j][k] , if dim == 0 \\ input[i][index[i][j][k]][k] = src[i][j][k] , if dim == 1 \\ input[i][j][index[i][j][k]] = src[i][j][k] , if dim == 2 \\\end{split}\]
参数:
  • input – Input QTensor.

  • dim – Scatter axis.

  • index – Index QTensor; must have the same number of dimensions as the input.

  • src – The source tensor to scatter.

返回:

output QTensor

Example:

from pyvqnet.tensor import scatter, QTensor
import numpy as np
np.random.seed(25)
npx = np.random.randn(3, 2, 4, 2)
npindex = np.array([2, 3, 1, 2, 1, 2, 3, 0, 2, 3, 1, 2, 3, 2, 0,
                    1]).reshape([2, 2, 4, 1]).astype(np.int64)
x1 = QTensor(npx)
npsrc = QTensor(np.full_like(npindex, 200), dtype=x1.dtype)
npsrc.requires_grad = True
indices1 = QTensor(npindex)
y1 = scatter(x1, 2, indices1, npsrc)
print(y1)

# [[[[  0.2282731   1.0268903]
#    [200.         -0.5911815]
#    [200.         -0.2223257]
#    [200.          1.8379046]]

#   [[200.          0.8685831]
#    [200.         -0.2323119]
#    [200.         -1.3346615]
#    [200.         -1.2460893]]]


#  [[[  1.2022723  -1.0499416]
#    [200.         -0.4196777]
#    [200.         -2.5944874]
#    [200.          0.6808889]]

#   [[200.         -1.9762536]
#    [200.         -0.2908697]
#    [200.          1.9826261]
#    [200.         -1.839905 ]]]


#  [[[  1.6076708   0.3882919]
#    [  0.3997321   0.4054766]
#    [  0.2170018  -0.6334391]
#    [  0.2466215  -1.9395455]]

#   [[  0.1140596  -1.8853414]
#    [  0.2430805  -0.7054807]
#    [  0.3646276  -0.5029522]
#    [ -0.2257515  -0.5655377]]]]
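
For intuition, a minimal 2-D sketch of the same rule (for dim == 1 it reduces to input[i][index[i][j]] = src[i][j]); the expected values follow directly from that rule, not from recorded library output:

from pyvqnet.tensor import scatter, QTensor
import numpy as np

x = QTensor(np.zeros((2, 3), dtype=np.float32))
idx = QTensor(np.array([[0, 2], [1, 0]], dtype=np.int64))
src = QTensor(np.array([[1., 2.], [3., 4.]], dtype=np.float32))
print(scatter(x, 1, idx, src))

# expected, per the formula:
# [
# [1, 0, 2],
# [4, 3, 0]
# ]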

broadcast_to

pyvqnet.tensor.broadcast_to(t, ref)

Subject to certain constraints, the array t is “broadcast” to the reference shape so that they have compatible shapes.

https://numpy.org/doc/stable/user/basics.broadcasting.html

参数:
  • t – input QTensor

  • ref – Reference shape.

返回:

The QTensor of the newly broadcasted t.

Example:

from pyvqnet.tensor.tensor import QTensor
from pyvqnet.tensor import *
ref = [2,3,4]
a = ones([4])
b = tensor.broadcast_to(a,ref)
print(b.shape)
#[2, 3, 4]

dense_to_csr

pyvqnet.tensor.dense_to_csr(t)

Convert dense matrix to CSR format sparse matrix, only supports 2 dimensions.

参数:

t – input dense QTensor

返回:

CSR sparse matrix

Example:

from pyvqnet.tensor import QTensor,dense_to_csr

a = QTensor([[2, 3, 4, 5]])
b = dense_to_csr(a)
print(b.csr_members())
#([0,4], [0,1,2,3], [2,3,4,5])

csr_to_dense

pyvqnet.tensor.csr_to_dense(t)

Convert CSR format sparse matrix to dense matrix, only supports 2 dimensions.

参数:

t – input CSR sparse matrix

返回:

Dense QTensor

Example:

from pyvqnet.tensor import QTensor,dense_to_csr,csr_to_dense

a = QTensor([[2, 3, 4, 5]])
b = dense_to_csr(a)
c = csr_to_dense(b)
print(c)
#[[2,3,4,5]]

Utility Functions

to_tensor

pyvqnet.tensor.to_tensor(x)

Convert the input to a QTensor if it isn't one already.

参数:

x – integer,float or numpy.array

返回:

output QTensor

Example:

from pyvqnet.tensor import tensor
t = tensor.to_tensor(10.0)
print(t)
# [10]

pad_sequence

pyvqnet.tensor.pad_sequence(qtensor_list, batch_first=False, padding_value=0)

Pad a list of variable-length tensors with padding_value. pad_sequence stacks the tensors along a new dimension and pads them to equal length. The input is a list of sequences of size L x *, where L is variable.

参数:
  • qtensor_list – list[QTensor] - list of variable-length sequences.

  • batch_first – ‘bool’ - If true, the output will be batch size x longest sequence length x *, otherwise longest sequence length x batch size x *. Default: False.

  • padding_value – ‘float’ - padding value. Default value: 0.

返回:

If batch_first is True, the tensor size is batch size x longest sequence length x *. Otherwise the size of the tensor is longest sequence length x batch size x *.

Examples:

from pyvqnet.tensor import tensor
a = tensor.ones([4, 2,3])
b = tensor.ones([1, 2,3])
c = tensor.ones([2, 2,3])
a.requires_grad = True
b.requires_grad = True
c.requires_grad = True
y = tensor.pad_sequence([a, b, c], True)

print(y)
# [
# [[[1, 1, 1],
#  [1, 1, 1]],
# [[1, 1, 1],
#  [1, 1, 1]],
# [[1, 1, 1],
#  [1, 1, 1]],
# [[1, 1, 1],
#  [1, 1, 1]]],
# [[[1, 1, 1],
#  [1, 1, 1]],
# [[0, 0, 0],
#  [0, 0, 0]],
# [[0, 0, 0],
#  [0, 0, 0]],
# [[0, 0, 0],
#  [0, 0, 0]]],
# [[[1, 1, 1],
#  [1, 1, 1]],
# [[1, 1, 1],
#  [1, 1, 1]],
# [[0, 0, 0],
#  [0, 0, 0]],
# [[0, 0, 0],
#  [0, 0, 0]]]
# ]

pad_packed_sequence

pyvqnet.tensor.pad_packed_sequence(sequence, batch_first=False, padding_value=0, total_length=None)

Pad a batch of packed variable-length sequences. This is the inverse of pack_pad_sequence. When batch_first is True, it returns a tensor of shape B x T x *, otherwise T x B x *, where T is the longest sequence length and B is the batch size.

参数:
  • sequence – ‘QTensor’ - the data to be processed.

  • batch_first – ‘bool’ - If True, batch will be the first dimension of the input. Default value: False.

  • padding_value – ‘float’ - padding value. Default: 0.

  • total_length – ‘int’ - If not None, the output will be padded to length total_length. Default: None.

返回:

A tuple of tensors containing the padded sequences, and a list of the lengths of each sequence in the batch. Batch elements are restored to their original order.

Examples:

from pyvqnet.tensor import tensor
a = tensor.ones([4, 2,3])
b = tensor.ones([2, 2,3])
c = tensor.ones([1, 2,3])
a.requires_grad = True
b.requires_grad = True
c.requires_grad = True
y = tensor.pad_sequence([a, b, c], True)
seq_len = [4, 2, 1]
data = tensor.pack_pad_sequence(y,
                        seq_len,
                        batch_first=True,
                        enforce_sorted=True)

seq_unpacked, lens_unpacked = tensor.pad_packed_sequence(data, batch_first=True)
print(seq_unpacked)
# [[[[1. 1. 1.]
#    [1. 1. 1.]]

#   [[1. 1. 1.]
#    [1. 1. 1.]]

#   [[1. 1. 1.]
#    [1. 1. 1.]]

#   [[1. 1. 1.]
#    [1. 1. 1.]]]


#  [[[1. 1. 1.]
#    [1. 1. 1.]]

#   [[1. 1. 1.]
#    [1. 1. 1.]]

#   [[0. 0. 0.]
#    [0. 0. 0.]]

#   [[0. 0. 0.]
#    [0. 0. 0.]]]


#  [[[1. 1. 1.]
#    [1. 1. 1.]]

#   [[0. 0. 0.]
#    [0. 0. 0.]]

#   [[0. 0. 0.]
#    [0. 0. 0.]]

#   [[0. 0. 0.]
#    [0. 0. 0.]]]]
print(lens_unpacked)
# [4, 2, 1]

pack_pad_sequence

pyvqnet.tensor.pack_pad_sequence(input, lengths, batch_first=False, enforce_sorted=True)

Pack a Tensor containing variable-length padded sequences. If batch_first is True, input should have shape [batch size, length,*], otherwise shape [length, batch size,*].

For unsorted sequences, set enforce_sorted to False. If enforce_sorted is True, the sequences should be sorted in descending order of length.

参数:
  • input – ‘QTensor’ - variable-length sequence batches for padding.

  • batch_first – ‘bool’ - if True, the input is expected to be B x T x * format, default: False.

  • enforce_sorted – ‘bool’ - if True, the input is expected to contain sequences sorted in descending order of length. If False, the input will be sorted unconditionally. Default: True.

Param lengths:

‘list’ - list of sequence lengths for each batch element.

返回:

A PackedSequence object.

Examples:

from pyvqnet.tensor import tensor
a = tensor.ones([4, 2,3])
c = tensor.ones([1, 2,3])
b = tensor.ones([2, 2,3])
a.requires_grad = True
b.requires_grad = True
c.requires_grad = True
y = tensor.pad_sequence([a, b, c], True)
seq_len = [4, 2, 1]
data = tensor.pack_pad_sequence(y,
                        seq_len,
                        batch_first=True,
                        enforce_sorted=False)
print(data.data)

# [[[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]

#  [[1. 1. 1.]
#   [1. 1. 1.]]]

print(data.batch_sizes)
# [3, 2, 1, 1]

functional_conv2d

pyvqnet.nn.functional.functional_conv2d(x, weight, bias, stride=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1)

Performs a 2D convolution on an input image consisting of multiple input planes.

参数:
  • x – 4D input tensor.

  • weight – 4D kernel tensor.

  • bias – bias tensor.

  • stride – ‘tuple’ - stride, defaults to (1, 1).

  • padding – Padding, controls the amount of padding on the input. This can be a string {‘valid’, ‘same’} or a tuple of integers specifying the amount of implicit padding to apply to the input, defaulting to (0,0).

  • dilation – ‘tuple’ - spacing between kernel elements, defaults to (1, 1).

  • groups – ‘int’ - number of groups, defaults to 1.

返回:

output QTensor

Examples:

from pyvqnet.nn.functional import functional_conv2d
from pyvqnet.tensor import arange,ones
from pyvqnet import kfloat32
from pyvqnet.nn import Module,Parameter


class TM(Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.w = ones([5,4,2,2])
        self.w.requires_grad = True
        self.b = ones([5,])
        self.b.requires_grad = True

    def forward(self,x):
        weight, bias, = self.w, self.b
        return functional_conv2d(x, weight, bias)


x = arange(0,7*4*12*12,dtype=kfloat32).reshape([7,4,12,12])
l = TM()
y = l(x)

y.backward( )
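
# With no padding and unit stride, the spatial output size is (12 - 2) + 1 = 11 per dimension,
# so y is expected to have shape [7, 5, 11, 11] (a derived check, not verified library output):
print(y.shape)
# expected: [7, 5, 11, 11]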

no_grad

pyvqnet.no_grad()

Within this context, forward computation does not record backpropagation nodes, i.e. gradient tracking is disabled.

Example:

import pyvqnet.tensor as tensor
from pyvqnet import no_grad

with no_grad():
    x = tensor.QTensor([1.0, 2.0, 3.0],requires_grad=True)
    y = tensor.tan(x)
    y.backward()
#RuntimeError: output requires_grad is False.