Why Choose PyTorch to Implement a Sound Classification Algorithm
Sound classification is a widely studied task with many applications, such as audio-based monitoring and autonomous-driving systems. The goal is to assign an input audio clip to a specific category. Because audio data differs somewhat from image or text data, it calls for different models and preprocessing. In this article we discuss why we chose PyTorch as the framework for implementing a sound classification algorithm.
Flexibility
PyTorch is a flexible framework that can accommodate many different network architectures. It builds dynamic computation graphs rather than static ones, which lets us restructure and modify a network freely, and it supports training and inference on GPUs. PyTorch also ships a rich set of building blocks for constructing all kinds of neural networks. Sound classification typically relies on recurrent neural networks (RNNs) and convolutional neural networks (CNNs), both of which PyTorch supports directly.
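For illustration only (this sketch is not from the article; the layer sizes, the 64 mel bins, and the 10-class output are placeholder assumptions), a classifier that combines a small CNN front end with a GRU over time frames can be written in a few lines:

import torch
import torch.nn as nn

class AudioClassifier(nn.Module):
    """Illustrative CNN + RNN audio classifier (all sizes are placeholders)."""
    def __init__(self, n_mels=64, hidden_size=128, num_classes=10):
        super().__init__()
        # CNN front end over the (1, n_mels, time) spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),          # pool over frequency only, keep time resolution
        )
        self.rnn = nn.GRU(16 * (n_mels // 2), hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, spec):               # spec: [batch, 1, n_mels, time]
        feats = self.conv(spec)            # [batch, 16, n_mels//2, time]
        b, c, f, t = feats.shape
        feats = feats.permute(0, 3, 1, 2).reshape(b, t, c * f)  # -> [batch, time, features]
        _, h = self.rnn(feats)             # h: [1, batch, hidden]
        return self.fc(h[-1])              # class logits

logits = AudioClassifier()(torch.randn(2, 1, 64, 100))  # 2 clips, 100 frames of fake input
print(logits.shape)                        # torch.Size([2, 10])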
Code Readability
PyTorch code is easy to read and write. Because it uses Python as its host language, it suits anyone already fluent in Python. Its API is also designed so that the purpose of each function is evident, which makes it quick to understand individual components and to compose them into the model we need. Sound classification involves many audio-processing techniques, such as the Fourier transform and time-frequency analysis, and the APIs available in PyTorch make these straightforward to implement.
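As a minimal sketch (not from the article; the 16 kHz sample rate and window sizes are assumptions), a log power spectrogram can be computed with torch.stft in a few lines:

import torch

waveform = torch.randn(16000)                      # 1 s of fake audio at an assumed 16 kHz
window = torch.hann_window(400)
# Short-time Fourier transform: 25 ms windows, 10 ms hop (at 16 kHz).
stft = torch.stft(waveform, n_fft=400, hop_length=160,
                  window=window, return_complex=True)
spectrogram = stft.abs() ** 2                      # power spectrogram
log_spec = torch.log(spectrogram + 1e-6)           # log scale, a common model input
print(log_spec.shape)                              # [freq_bins, frames] = [201, 101]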
Fast Iteration and Debugging
PyTorch makes iteration and debugging fast. Because it uses dynamic computation graphs, a model can be debugged interactively like any ordinary Python program, which makes the process quicker and more intuitive. Its flexibility also lets us build and modify models rapidly, which matters when we are constantly trying out different network architectures and hyperparameter settings.
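A small illustrative example (not from the article): because the forward pass is ordinary Python, you can print intermediate tensors or drop into the debugger right inside it:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 3)

    def forward(self, x):
        h = torch.relu(self.fc(x))
        # The graph is built eagerly, so ordinary Python tools work here:
        print("hidden stats:", h.shape, h.mean().item())
        # import pdb; pdb.set_trace()   # drop into the debugger mid-forward if needed
        return h

Net()(torch.randn(4, 8))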
Community Support
Finally, PyTorch has strong community support. Although it is a relatively young framework, its community is growing rapidly, so there is a wealth of resources, examples, tools, and help available for the problems we run into. Many researchers and engineers use PyTorch for their own work and share their results and experience with the community.
Summary
In this article we discussed why we chose PyTorch to implement a sound classification algorithm: it offers flexibility, readable code, fast iteration and debugging, and strong community support. For sound classification, these strengths let us implement and iterate on different network architectures and hyperparameter settings quickly, so we consider PyTorch a very suitable framework for the task.
[2022-12-17] Neural Networks and Deep Learning, Chapter 5 - Recurrent Neural Networks (Part 1)
Recurrent Neural Networks, Part 1 - RNN Memory Capacity Experiment
Preface
As mentioned in earlier assignments, a recurrent neural network (RNN) is a neural network architecture with memory. Much like a sequential logic circuit in digital electronics, its output depends not only on the current input but also on inputs from previous moments: a neuron can receive information from other neurons as well as from itself, forming a network with loops. This kind of memory is closer to how biological neurons behave and gives better performance in many settings. Recurrent networks are now widely used in speech recognition, language modeling, natural language generation, and many other tasks. In this experiment we study and test the simple recurrent network (SRN).
A quick check: FNN is used here in the sense of a feedback neural network with gated state, although the abbreviation is also (and more commonly) used for feedforward neural networks.
Memory Capacity Experiment for Recurrent Networks
Simple recurrent networks suffer from the long-range dependency problem during training: they struggle to model dependencies between states that are far apart in time. To test the memory capacity of a simple recurrent network, this section sets up a digit-summation task.
The input of the digit-summation task is a sequence of digits: the first two positions hold digits from 0-9, and the remaining positions are mostly 0 (with some random replacements). The prediction target is the sum of the first two digits. The longer the sequences on which the network can still predict this sum accurately, the better its memory. We can therefore build datasets of different lengths and test the long-range dependency ability of the simple recurrent network by checking its performance on each of them. For length 10, for example, an input might look like `3 5 0 0 7 0 0 0 0 0` with target 8.
Dataset Construction
Because the first two digits of the input sequence are in 0-9, the number of base combinations is fixed (10 × 10 = 100), so we can enumerate all combinations of the first two digits and pad the rest of each sequence with zeros up to the fixed length. To add diversity, one randomly chosen zero position in each generated sequence is then replaced with a random digit from 0-9, which multiplies the number of samples.
Dataset construction function
The parameter k specifies how many randomized sequences are generated from each base combination. When generating the dataset for a given length, the training, validation, and test sets are produced at the same time: k = 3 for the training set and k = 1 for the validation and test sets. The implementation is as follows:
import os
import random
import numpy as np

random.seed(0)
np.random.seed(0)

def generate_data(length, k, save_path):
    if length < 3:
        raise ValueError("The length of data should be greater than 2.")
    if k == 0:
        raise ValueError("k should be greater than 0.")
    # Enumerate all 100 combinations of the first two digits; the rest of the sequence is 0.
    base_examples = []
    for n1 in range(0, 10):
        for n2 in range(0, 10):
            seq = [n1, n2] + [0] * (length - 2)
            label = n1 + n2
            base_examples.append((seq, label))
    # For each base example, create k variants by replacing one random zero position
    # with a random digit, which increases sample diversity.
    examples = []
    for base_example in base_examples:
        for _ in range(k):
            idx = np.random.randint(2, length)
            val = np.random.randint(0, 10)
            seq = base_example[0].copy()
            label = base_example[1]
            seq[idx] = val
            examples.append((seq, label))
    # Write one "<space-separated digits>\t<label>" line per example.
    os.makedirs(os.path.dirname(save_path), exist_ok=True)  # make sure datasets/<length>/ exists
    with open(save_path, "w", encoding="utf-8") as f:
        for example in examples:
            seq = [str(e) for e in example[0]]
            label = str(example[1])
            line = " ".join(seq) + "\t" + label + "\n"
            f.write(line)
    print(f"generate data to: {save_path}.")

lengths = [5, 10, 15, 20, 25, 30, 35]
for length in lengths:
    save_path = f"datasets/{length}/train.txt"
    k = 3
    generate_data(length, k, save_path)
    save_path = f"datasets/{length}/eval.txt"
    k = 1
    generate_data(length, k, save_path)
    save_path = f"datasets/{length}/test.txt"
    k = 1
    generate_data(length, k, save_path)
Running the script prints a confirmation line for each file generated under datasets/<length>/.
Dataset Loading
The loading code is as follows:
import os

def load_data(data_path):
    # Each line has the form "<space-separated digits>\t<label>".
    train_examples = []
    train_path = os.path.join(data_path, "train.txt")
    with open(train_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            train_examples.append((seq, label))

    eval_examples = []
    eval_path = os.path.join(data_path, "eval.txt")
    with open(eval_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            eval_examples.append((seq, label))

    test_examples = []
    test_path = os.path.join(data_path, "test.txt")
    with open(test_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            test_examples.append((seq, label))

    return train_examples, eval_examples, test_examples

length = 5
data_path = f"datasets/{length}"
train_examples, eval_examples, test_examples = load_data(data_path)
print("Number of training examples:", len(train_examples))
print("Number of validation examples:", len(eval_examples))
print("Number of test examples:", len(test_examples))
For length 5 this prints 300 training examples and 100 examples each for the validation and test sets (100 base combinations × k).
Building a Dataset Class
To make loading and batching convenient, we subclass torch.utils.data.Dataset:
import torch
from torch.utils.data import Dataset

class DigitSumDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, idx):
        example = self.data[idx]
        seq = torch.tensor(example[0], dtype=torch.int64)
        label = torch.tensor(example[1], dtype=torch.int64)
        return seq, label

    def __len__(self):
        return len(self.data)
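As a quick sanity check (illustrative, not part of the original post), the dataset can be wrapped in a DataLoader and the first batch inspected:

from torch.utils.data import DataLoader

train_set = DigitSumDataset(train_examples)
loader = DataLoader(train_set, batch_size=8, shuffle=True)
seqs, labels = next(iter(loader))
print(seqs.shape, labels.shape)   # e.g. torch.Size([8, 5]) torch.Size([8]) for length-5 data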
Model Construction
The simple recurrent network model (illustrated with a figure in the original post) consists of the following parts:
(1) Embedding layer: vectorizes the input digit sequence, mapping each digit to a vector;
(2) SRN layer: consumes the vector sequence, updating the recurrent unit step by step, and uses the hidden state at the last step as the representation of the whole sequence;
(3) Output layer: a linear layer that produces the classification result.
Embedding Layer
To represent the digits better, each digit is mapped to an embedding vector; each dimension of the vector can describe some property of the digit. Because a vector can carry more information than a raw digit id, using embeddings gives the model more capacity to fit the summation task.
First we build an embedding matrix $E \in \mathbb{R}^{10 \times M}$, where row $i$ is the embedding vector of digit $i$ and $M$ is the embedding dimension. Given a batch of digit sequences $S \in \mathbb{R}^{B \times L}$, where $B$ is the batch size and $L$ is the sequence length, a table lookup maps it to the embedded representation $X \in \mathbb{R}^{B \times L \times M}$.
Alternatively, each digit can be represented as a 10-dimensional one-hot vector, and the embedding can be obtained with a matrix multiplication:
$X = S'E$
where $S' \in \mathbb{R}^{B \times L \times 10}$ is the one-hot representation of $S$.
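The two formulations are equivalent. A quick sanity check (illustrative code, not from the original post; the sizes 8, 4, and 6 are arbitrary):

import torch
import torch.nn.functional as F

E = torch.randn(10, 8)                           # embedding matrix, M = 8
S = torch.randint(0, 10, (4, 6))                 # a batch of 4 sequences of length 6
one_hot = F.one_hot(S, num_classes=10).float()   # S': [4, 6, 10]
print(torch.allclose(one_hot @ E, E[S]))         # table lookup equals one-hot matmul -> True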
The embedding layer can then be implemented as follows:
import torch.nn as nn

class Embedding(nn.Module):
    def __init__(self, num_embeddings, embedding_dim):
        super(Embedding, self).__init__()
        # Embedding matrix of shape [num_embeddings, embedding_dim], Xavier-initialized.
        W_attr = torch.randn([num_embeddings, embedding_dim])
        W_attr = torch.nn.init.xavier_uniform_(torch.as_tensor(W_attr, dtype=torch.float32), gain=1.0)
        self.W = torch.nn.Parameter(W_attr)

    def forward(self, inputs):
        # Table lookup: index the rows of W with the digit ids.
        embs = self.W[inputs]
        return embs

emb_layer = Embedding(10, 5)
inputs = torch.tensor([0, 1, 2, 3])
emb_layer(inputs)
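For reference, torch ships an equivalent built-in layer; the following lines (added here only for comparison, not in the original post) behave like the hand-written lookup, apart from their own default initialization:

emb_builtin = nn.Embedding(10, 5)   # same lookup behaviour as the Embedding class above
print(emb_builtin(inputs).shape)    # torch.Size([4, 5])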
SRN Layer
With the embedding layer in place, we can move on to the core of the model: the SRN layer.
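Concretely, given the input vector $x_t$ at step $t$, the SRN implemented below updates its hidden state as
$h_t = \tanh(x_t W + h_{t-1} U + b)$
where $W \in \mathbb{R}^{D \times H}$, $U \in \mathbb{R}^{H \times H}$, and $b \in \mathbb{R}^{1 \times H}$ are learnable parameters ($D$ is the input size, $H$ the hidden size), and the hidden state of the final step serves as the representation of the whole sequence.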
From-scratch implementation
import torch
import torch.nn as nn

torch.manual_seed(0)

class SRN(nn.Module):
    def __init__(self, input_size, hidden_size, W_attr=None, U_attr=None, b_attr=None):
        super(SRN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        # Use the supplied weights when given; otherwise Xavier-initialize them.
        if W_attr is None:
            W_attr = torch.nn.init.xavier_uniform_(torch.empty(input_size, hidden_size), gain=1.0)
        if U_attr is None:
            U_attr = torch.nn.init.xavier_uniform_(torch.empty(hidden_size, hidden_size), gain=1.0)
        if b_attr is None:
            b_attr = torch.nn.init.xavier_uniform_(torch.empty(1, hidden_size), gain=1.0)
        self.W = torch.nn.Parameter(torch.as_tensor(W_attr, dtype=torch.float32))
        self.U = torch.nn.Parameter(torch.as_tensor(U_attr, dtype=torch.float32))
        self.b = torch.nn.Parameter(torch.as_tensor(b_attr, dtype=torch.float32))

    def init_state(self, batch_size):
        # The initial hidden state is all zeros.
        hidden_state = torch.zeros([batch_size, self.hidden_size], dtype=torch.float32)
        return hidden_state

    def forward(self, inputs, hidden_state=None):
        # inputs: [batch_size, seq_len, input_size]
        batch_size, seq_len, _ = inputs.shape
        if hidden_state is None:
            hidden_state = self.init_state(batch_size)
        # Unroll over time; only the hidden state of the last step is returned.
        for step in range(seq_len):
            step_input = inputs[:, step, :]
            hidden_state = torch.tanh(torch.matmul(step_input, self.W)
                                      + torch.matmul(hidden_state, self.U) + self.b)
        return hidden_state

W_attr = torch.tensor([[0.1, 0.2], [0.1, 0.2]])
U_attr = torch.tensor([[0.0, 0.1], [0.1, 0.0]])
b_attr = torch.tensor([[0.1, 0.1]])
srn = SRN(2, 2, W_attr=W_attr, U_attr=U_attr, b_attr=b_attr)
inputs = torch.tensor([[[1, 0], [0, 2]]], dtype=torch.float32)
hidden_state = srn(inputs)
print("hidden_state", hidden_state)
Running this prints the final hidden state computed from the two-step toy input.
Implementation with torch's built-in operator
The code is as follows:
batch_size, seq_len, input_size = 8, 20, 32
inputs = torch.randn(size=[batch_size, seq_len, input_size])
hidden_size = 32
# batch_first=True makes nn.RNN interpret inputs as [batch, seq, features], like our SRN.
torch_srn = nn.RNN(input_size, hidden_size, batch_first=True)
self_srn = SRN(input_size, hidden_size)
self_hidden_state = self_srn(inputs)
torch_outputs, torch_hidden_state = torch_srn(inputs)
print("self_srn hidden_state: ", self_hidden_state.shape)
print("torch_srn outputs:", torch_outputs.shape)
print("torch_srn hidden_state:", torch_hidden_state.shape)
The printed shapes are [8, 32] for our SRN's hidden state, [8, 20, 32] for the built-in RNN's output sequence, and [1, 8, 32] for its final hidden state.
Comparison
Because our hand-written SRN does not handle multiple layers, it lacks the layer dimension of the torch version, so its hidden-state shape is [8, 32]. The built-in nn.RNN instantiated above is, by default, a single-layer unidirectional RNN, so its final hidden state has shape [1, 8, 32], while its full output sequence has shape [8, 20, 32].
Next we can compare the numerical outputs of the two implementations; when they share the same weights, the results are very close.
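To make this concrete, here is a minimal sketch (not part of the original post) that copies the hand-written SRN's weights into the built-in nn.RNN above and checks that the final hidden states agree:

with torch.no_grad():
    # nn.RNN stores weight_ih_l0 as [hidden_size, input_size], hence the transposes.
    torch_srn.weight_ih_l0.copy_(self_srn.W.T)
    torch_srn.weight_hh_l0.copy_(self_srn.U.T)
    torch_srn.bias_ih_l0.copy_(self_srn.b.flatten())
    torch_srn.bias_hh_l0.zero_()                  # our SRN uses a single bias term
torch_outputs, torch_hidden_state = torch_srn(inputs)
print(torch.allclose(torch_hidden_state[0], self_srn(inputs), atol=1e-5))   # expect: True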
In addition, because its kernels are implemented in C++, the built-in torch operator is much faster than our Python-loop implementation.
Linear Layer
The linear layer is simple; we use the torch.nn.Linear operator directly.
Putting the Model Together
The overall model code is as follows:
class Model_RNN4SeqClass(nn.Module):
    def __init__(self, model, num_digits, input_size, hidden_size, num_classes):
        super(Model_RNN4SeqClass, self).__init__()
        self.rnn_model = model                              # the SRN defined above
        self.num_digits = num_digits
        self.input_size = input_size
        self.embedding = Embedding(num_digits, input_size)  # digit id -> vector
        self.linear = nn.Linear(hidden_size, num_classes)   # final hidden state -> logits

    def forward(self, inputs):
        inputs_emb = self.embedding(inputs)
        hidden_state = self.rnn_model(inputs_emb)
        logits = self.linear(hidden_state)
        return logits

srn = SRN(4, 5)
model = Model_RNN4SeqClass(srn, 10, 4, 5, 19)   # 19 classes: possible sums 0..18
inputs = torch.tensor([[1, 2, 3], [2, 3, 4]])
logits = model(inputs)
print(logits)
The model embeds the digit sequence, obtains the hidden state of the last position from the SRN, and maps it through the linear layer; the test above prints a [2, 19] tensor of logits.
Model Training
Training digit-sum models for each sequence length
The RunnerV3 class built in earlier assignments (imported from notawheel) makes training and evaluation convenient:
import os
import random
import torch
import numpy as np
from notawheel import Accuracy, RunnerV3

num_epochs = 500
lr = 0.001
num_digits = 10
input_size = 32
hidden_size = 32
num_classes = 19
batch_size = 8
save_dir = "./checkpoints"
os.makedirs(save_dir, exist_ok=True)  # make sure the checkpoint directory exists

def train(length):
    print(f"\n====> Training SRN with data of length {length}.")
    # Fix the random seeds so that runs for different lengths are comparable.
    np.random.seed(0)
    random.seed(0)
    torch.manual_seed(0)
    data_path = f"datasets/{length}"
    train_examples, eval_examples, test_examples = load_data(data_path)
    train_set, eval_set, test_set = DigitSumDataset(train_examples), DigitSumDataset(eval_examples), DigitSumDataset(test_examples)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size)
    eval_loader = torch.utils.data.DataLoader(eval_set, batch_size=batch_size)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size)
    base_model = SRN(input_size, hidden_size)
    model = Model_RNN4SeqClass(base_model, num_digits, input_size, hidden_size, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr)
    metric = Accuracy()
    loss_fn = nn.CrossEntropyLoss()
    runner = RunnerV3(model, optimizer, loss_fn, metric)
    model_save_path = os.path.join(save_dir, f"best_srn_model_{length}.pt")
    runner.train(train_loader, eval_loader, num_epochs=num_epochs, eval_steps=100, log_steps=100, save_path=model_save_path)
    return runner
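The evaluation code further below reads trained models from a dictionary srn_runners that the excerpt never builds; presumably the training loop looks something like this (added here so the snippets connect):

# Assumed glue code: train one model per sequence length and keep the runners.
srn_runners = {}
for length in lengths:
    srn_runners[length] = train(length)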
In the original run, training reached an accuracy of about 25%; the post also shows the training-loss curve, which is not reproduced here.
Model Evaluation
The evaluation code is as follows:
from torch.utils.data import DataLoader

srn_eval_scores = []
srn_test_scores = []
for length in lengths:
    print(f"Evaluate SRN with data length {length}.")
    runner = srn_runners[length]
    # Reload the best checkpoint saved during training for this length.
    model_path = os.path.join(save_dir, f"best_srn_model_{length}.pt")
    runner.load_model(model_path)
    data_path = f"./datasets/{length}"
    train_examples, eval_examples, test_examples = load_data(data_path)
    test_set = DigitSumDataset(test_examples)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    score, _ = runner.evaluate(test_loader)
    srn_test_scores.append(score)
    srn_eval_scores.append(max(runner.eval_scores))

for length, eval_score, test_score in zip(lengths, srn_eval_scores, srn_test_scores):
    print(f"[SRN] length:{length}, eval_score: {eval_score}, test_score: {test_score: .5f}")
For each length, the best validation score and the test score are printed, and the original post plots these accuracies against sequence length.
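The plotting code itself is not part of the excerpt; a minimal matplotlib sketch that could produce such a figure might look like this:

import matplotlib.pyplot as plt

plt.plot(lengths, srn_eval_scores, marker="o", label="dev accuracy")
plt.plot(lengths, srn_test_scores, marker="s", label="test accuracy")
plt.xlabel("sequence length")
plt.ylabel("accuracy")
plt.legend()
plt.savefig("srn_accuracy_vs_length.png")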
Closing Remarks
Through this experiment we became familiar with the structure of a simple recurrent neural network. By implementing an SRN and running the digit-summation experiments, we tested the memory capacity of recurrent networks, assembling the full model from an embedding layer, an SRN layer, and a linear output layer. Although the results are not great, the basic approach is clear.
Original: https://blog.csdn.net/LupnisJ/article/details/127930021
Author: 三工修
Title: [2022-12-17] Neural Networks and Deep Learning, Chapter 5 - Recurrent Neural Networks (Part 1)