I. Overall Architecture
AlexNet (2012) is a landmark network.
The network's highlights are:
(1) It was the first to use GPUs to accelerate network training.
(2) It uses the ReLU activation function instead of the traditional Sigmoid and Tanh activations. (Sigmoid is comparatively awkward to differentiate, and when the network gets deep it suffers from vanishing gradients.)
(3) It uses Local Response Normalization (LRN).
(4) It applies Dropout to the first two fully connected layers, randomly deactivating neurons to reduce overfitting.
Dropout, briefly: with dropout, each layer randomly deactivates some of its neurons during training, so fewer parameters are effectively trained at each step, which reduces overfitting.
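A minimal numpy sketch of (inverted) dropout; the function and variable names here are illustrative, not from any library:

import numpy as np

def dropout(a, keep_prob=0.5):
    # Zero each activation with probability 1 - keep_prob, then rescale
    # the survivors so the expected activation is unchanged.
    mask = np.random.rand(*a.shape) < keep_prob
    return a * mask / keep_prob

a = np.ones((2, 4096))       # stand-in for FC activations
print(dropout(a).mean())     # close to 1.0 in expectation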
II. Architecture in Detail
1. Conv1
2. Maxpool1
3. Conv2
4. Maxpool2
5. Conv3
6. Conv4
7. Conv5
8. Maxpool3
9. FC1
Description: this layer uses 4096 neurons, fully connected to the 256 feature maps of size 6×6. Each 6×6 feature map is reduced by convolution to a single feature value; each of the 4096 neurons takes the feature values produced from (some of) the 256 feature maps, multiplies them by the corresponding weights, and adds a bias. Dropout is then applied, randomly dropping some of the 4096 nodes (i.e., zeroing their values), which yields the new set of 4096 neurons.
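As a quick sanity check on the size of this layer, its parameter count works out as follows:

weights = 6 * 6 * 256 * 4096   # one weight per (input value, neuron) pair
biases = 4096                  # one bias per neuron
print(weights + biases)        # 37752832: about 37.7M parameters in FC1 alone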
10. FC2
Similar to FC1.
11. FC3
FC3 uses 1000 neurons, fully connected to the 4096 neurons of FC7; a softmax over these 1000 outputs yields 1000 float values, which are the predicted class probabilities we see.
When training the model, the predictions are compared against the labels to measure the error and obtain the residual; by the chain rule, the residual is propagated backward through partial derivatives and the weights are updated, much like the idea behind a BP (backpropagation) network, so the weights and biases are adjusted layer by layer. A sketch of such a training step follows.
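A hedged TF1-style sketch of one training step, using softmax cross-entropy on the fc8 logits from the alexnet() function defined in the next section (the placeholder names are illustrative assumptions, not from the original code):

import tensorflow as tf  # TensorFlow 1.x assumed

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
keep_prob = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32, [None, 1000])      # one-hot ground truth
logits = alexnet(x, keep_prob, num_classes=1000)       # fc8, defined below
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)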
12. Summary
III. TensorFlow Implementation
import tensorflow as tf

def alexnet(x, keep_prob, num_classes):
    # conv1: 96 kernels of 11x11, stride 4
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 96], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(x, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[96], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)

    # lrn1: local response normalization over neighboring channels
    with tf.name_scope('lrn1') as scope:
        lrn1 = tf.nn.local_response_normalization(conv1,
                                                  alpha=1e-4,
                                                  beta=0.75,
                                                  depth_radius=2,
                                                  bias=2.0)

    # pool1: overlapping 3x3 max pooling, stride 2
    with tf.name_scope('pool1') as scope:
        pool1 = tf.nn.max_pool(lrn1,
                               ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1],
                               padding='VALID')

    # conv2: grouped convolution in two halves, mirroring the original two-GPU split
    with tf.name_scope('conv2') as scope:
        pool1_groups = tf.split(axis=3, value=pool1, num_or_size_splits=2)
        kernel = tf.Variable(tf.truncated_normal([5, 5, 48, 256], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        kernel_groups = tf.split(axis=3, value=kernel, num_or_size_splits=2)
        conv_up = tf.nn.conv2d(pool1_groups[0], kernel_groups[0], [1, 1, 1, 1], padding='SAME')
        conv_down = tf.nn.conv2d(pool1_groups[1], kernel_groups[1], [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        biases_groups = tf.split(axis=0, value=biases, num_or_size_splits=2)
        bias_up = tf.nn.bias_add(conv_up, biases_groups[0])
        bias_down = tf.nn.bias_add(conv_down, biases_groups[1])
        bias = tf.concat(axis=3, values=[bias_up, bias_down])
        conv2 = tf.nn.relu(bias, name=scope)

    with tf.name_scope('lrn2') as scope:
        lrn2 = tf.nn.local_response_normalization(conv2,
                                                  alpha=1e-4,
                                                  beta=0.75,
                                                  depth_radius=2,
                                                  bias=2.0)

    with tf.name_scope('pool2') as scope:
        pool2 = tf.nn.max_pool(lrn2,
                               ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1],
                               padding='VALID')

    # conv3: the only conv layer without grouping
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 384],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)

    # conv4: grouped, 384 -> 384 channels
    with tf.name_scope('conv4') as scope:
        conv3_groups = tf.split(axis=3, value=conv3, num_or_size_splits=2)
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        kernel_groups = tf.split(axis=3, value=kernel, num_or_size_splits=2)
        conv_up = tf.nn.conv2d(conv3_groups[0], kernel_groups[0], [1, 1, 1, 1], padding='SAME')
        conv_down = tf.nn.conv2d(conv3_groups[1], kernel_groups[1], [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        biases_groups = tf.split(axis=0, value=biases, num_or_size_splits=2)
        bias_up = tf.nn.bias_add(conv_up, biases_groups[0])
        bias_down = tf.nn.bias_add(conv_down, biases_groups[1])
        bias = tf.concat(axis=3, values=[bias_up, bias_down])
        conv4 = tf.nn.relu(bias, name=scope)

    # conv5: grouped, 384 -> 256 channels
    with tf.name_scope('conv5') as scope:
        conv4_groups = tf.split(axis=3, value=conv4, num_or_size_splits=2)
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 256],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        kernel_groups = tf.split(axis=3, value=kernel, num_or_size_splits=2)
        conv_up = tf.nn.conv2d(conv4_groups[0], kernel_groups[0], [1, 1, 1, 1], padding='SAME')
        conv_down = tf.nn.conv2d(conv4_groups[1], kernel_groups[1], [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        biases_groups = tf.split(axis=0, value=biases, num_or_size_splits=2)
        bias_up = tf.nn.bias_add(conv_up, biases_groups[0])
        bias_down = tf.nn.bias_add(conv_down, biases_groups[1])
        bias = tf.concat(axis=3, values=[bias_up, bias_down])
        conv5 = tf.nn.relu(bias, name=scope)

    with tf.name_scope('pool5') as scope:
        pool5 = tf.nn.max_pool(conv5,
                               ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1],
                               padding='VALID')

    # flatten the 6x6x256 maps into a vector for the fully connected layers
    with tf.name_scope('flattened6') as scope:
        flattened = tf.reshape(pool5, shape=[-1, 6 * 6 * 256])

    # fc6: 6*6*256 -> 4096, with ReLU and dropout
    with tf.name_scope('fc6') as scope:
        weights = tf.Variable(tf.truncated_normal([6 * 6 * 256, 4096],
                                                  dtype=tf.float32,
                                                  stddev=1e-1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.xw_plus_b(flattened, weights, biases)
        fc6 = tf.nn.relu(bias)

    with tf.name_scope('dropout6') as scope:
        dropout6 = tf.nn.dropout(fc6, keep_prob)

    # fc7: 4096 -> 4096, with ReLU and dropout
    with tf.name_scope('fc7') as scope:
        weights = tf.Variable(tf.truncated_normal([4096, 4096],
                                                  dtype=tf.float32,
                                                  stddev=1e-1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.xw_plus_b(dropout6, weights, biases)
        fc7 = tf.nn.relu(bias)

    with tf.name_scope('dropout7') as scope:
        dropout7 = tf.nn.dropout(fc7, keep_prob)

    # fc8: 4096 -> num_classes logits (no activation; softmax is applied outside)
    with tf.name_scope('fc8') as scope:
        weights = tf.Variable(tf.truncated_normal([4096, num_classes],
                                                  dtype=tf.float32,
                                                  stddev=1e-1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[num_classes], dtype=tf.float32),
                             trainable=True, name='biases')
        fc8 = tf.nn.xw_plus_b(dropout7, weights, biases)

    return fc8
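A minimal inference sketch (TensorFlow 1.x assumed; with a 224x224 input, the SAME-padded conv1 and the VALID pools land exactly on the 6x6x256 flatten above):

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
keep_prob = tf.placeholder(tf.float32)      # use 1.0 at inference time
logits = alexnet(x, keep_prob, num_classes=1000)
probs = tf.nn.softmax(logits)               # predicted class probabilities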
Keras implementation (v1 is built with the Keras functional API; v2 is built by subclassing, similar to PyTorch):
from tensorflow.keras import layers, models, Model, Sequential

def AlexNet_v1(im_height=224, im_width=224, num_classes=1000):
    input_image = layers.Input(shape=(im_height, im_width, 3), dtype="float32")
    x = layers.ZeroPadding2D(((1, 2), (1, 2)))(input_image)
    x = layers.Conv2D(48, kernel_size=11, strides=4, activation="relu")(x)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)
    x = layers.Conv2D(128, kernel_size=5, padding="same", activation="relu")(x)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)
    x = layers.Conv2D(192, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Conv2D(192, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(2048, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(2048, activation="relu")(x)
    x = layers.Dense(num_classes)(x)
    predict = layers.Softmax()(x)
    model = models.Model(inputs=input_image, outputs=predict)
    return model
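For example (num_classes=5 is an arbitrary choice for illustration):

model = AlexNet_v1(im_height=224, im_width=224, num_classes=5)
model.summary()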
class AlexNet_v2(Model):
    def __init__(self, num_classes=1000):
        super(AlexNet_v2, self).__init__()
        self.features = Sequential([
            layers.ZeroPadding2D(((1, 2), (1, 2))),
            layers.Conv2D(48, kernel_size=11, strides=4, activation="relu"),
            layers.MaxPool2D(pool_size=3, strides=2),
            layers.Conv2D(128, kernel_size=5, padding="same", activation="relu"),
            layers.MaxPool2D(pool_size=3, strides=2),
            layers.Conv2D(192, kernel_size=3, padding="same", activation="relu"),
            layers.Conv2D(192, kernel_size=3, padding="same", activation="relu"),
            layers.Conv2D(128, kernel_size=3, padding="same", activation="relu"),
            layers.MaxPool2D(pool_size=3, strides=2)])
        self.flatten = layers.Flatten()
        self.classifier = Sequential([
            layers.Dropout(0.2),
            layers.Dense(1024, activation="relu"),
            layers.Dropout(0.2),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes),
            layers.Softmax()
        ])

    def call(self, inputs, **kwargs):
        x = self.features(inputs)
        x = self.flatten(x)
        x = self.classifier(x)
        return x
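Because a subclassed model has no fixed input shape until it is called, build it explicitly before inspecting it (a sketch; the first entry of the shape is the batch dimension):

model = AlexNet_v2(num_classes=5)
model.build((None, 224, 224, 3))   # create the weights
model.summary()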
PyTorch implementations for reference:
https://github.com/dansuh17/alexnet-pytorch
https://github.com/sloth2012/AlexNet
Original: https://blog.csdn.net/weixin_39589455/article/details/122362687
Author: 别出BUG求求了
Title: AlexNet explained, with a TensorFlow implementation of AlexNet; what AlexNet is and what it can do; an AlexNet tutorial
Related reading
Title: Practical examples of common Pandas methods for cumulative, year-over-year, and month-over-month statistics
Statistical reports commonly track the year-to-date total, the same period last year (cumulative), the current period (e.g., the current month), and the previous month, and then analyze year-over-year and month-over-month changes. Using a monthly report of that kind as the sample, this article works through the statistics with Python Pandas.
Where:
- Year-to-date (YTD) total: the sum from January of the current year through the cutoff month.
- Same period last year (cumulative): the sum from January of the previous year through the month matching the current YTD cutoff.
- Year-over-year (YoY) growth rate = (current period - same period last year) / same period last year * 100%
- Month-over-month (MoM) growth rate = (current period - previous period) / previous period * 100%
Note: here the current period means the current month's figure, and the previous period means the previous month's figure.
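For instance, a current-period value of 120 against a same-period value of 80 gives YoY = (120 - 80) / 80 = 50%; against a previous-period value of 100, MoM = (120 - 100) / 100 = 20%.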
Sample data:
Note: for ease of demonstration, this case uses only 2 years of data, with 5 months per year.
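A hypothetical stand-in for the sample data, with the column names the code below expects (years, yearmonth, churncount, newcount; the figures are made up):

import pandas as pd

pd.DataFrame({
    'years': [2020] * 5 + [2021] * 5,
    'yearmonth': [202001, 202002, 202003, 202004, 202005,
                  202101, 202102, 202103, 202104, 202105],
    'churncount': [10, 12, 8, 15, 9, 11, 14, 7, 16, 10],
    'newcount': [30, 28, 35, 32, 31, 33, 29, 36, 34, 30],
}).to_csv('data2021.csv', index=False)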
1. Year-to-date total
Accumulating statistics by year and by month is quite common in statistical analysis; for the data, it means accumulating values row by row according to a rule. Pandas' cumsum() function can produce cumulative totals along a chosen time dimension.
import pandas as pd
df = pd.read_csv('data2021.csv')
cum_columns_name = ['cum_churncount','cum_newcount']
df[cum_columns_name] = df[['years','churncount','newcount']].groupby(['years']).cumsum()
Note: grouping by 'years' restricts the accumulation to the annual time dimension, so the running total restarts each year.
The computation adds the two cumulative columns, cum_churncount and cum_newcount, to the DataFrame.
2. Same-period (last year) cumulative total
For the same-period cumulative value, we directly take the prior year's cumulative figure for the matching month. The pandas DataFrame.shift() function moves data by a specified number of rows, as the small illustration below shows.
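A quick illustration of shift() with a negative offset (values move up; vacated rows become NaN):

import pandas as pd

s = pd.Series([10, 20, 30])
print(s.shift(-1))   # 20.0, 30.0, NaN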
Continuing from the previous step, we fetch the same-period data: shift 'yearmonth' up five rows (one year, since the sample has five months per year) to obtain a new DataFrame, then join the two tables on 'yearmonth' (a left join, with the original table on the left and the shifted table on the right) to attach the same-period values.
cum_columns_dict = {'cum_churncount':'cum_same_period_churncount',
'cum_newcount':'cum_same_period_newcount'}
df_cum_same_period = df[['cum_churncount','cum_newcount','yearmonth']].copy()
df_cum_same_period = df_cum_same_period.rename(columns=cum_columns_dict)
df_cum_same_period.loc[:,'yearmonth'] = df_cum_same_period['yearmonth'].shift(-5)
df = pd.merge(left=df,right=df_cum_same_period,on='yearmonth',how='left')
3. Previous month (actual)
To get the previous month's figures, use pandas DataFrame.shift() to move the data by the required number of rows.
Continuing as before, we fetch the previous-period data (the principle is the same as for the same-period join, so the details are omitted).
last_month_columns_dict = {'churncount': 'last_month_churncount',
                           'newcount': 'last_month_newcount'}
df_last_month = df[['churncount','newcount','yearmonth']].copy()
df_last_month = df_last_month.rename(columns=last_month_columns_dict)
df_last_month.loc[:,'yearmonth'] = df_last_month['yearmonth'].shift(-1)
df = pd.merge(left=df,right=df_last_month,on='yearmonth',how='left')
4. Year-over-year growth rate
Computing YoY involves division, so rows where the divisor is zero must be excluded.
df.fillna(0,inplace=True)
df.loc[df['cum_same_period_churncount']!=0,'cum_churncount_rat'] = (df['cum_churncount']-df['cum_same_period_churncount'])/df['cum_same_period_churncount']
df.loc[df['cum_same_period_newcount']!=0,'cum_newcount_rat'] = (df['cum_newcount']-df['cum_same_period_newcount'])/df['cum_same_period_newcount']
df[['yearmonth','cum_churncount','cum_newcount','cum_same_period_churncount','cum_same_period_newcount','cum_churncount_rat','cum_newcount_rat']]
5. Month-over-month growth rate
df.loc[df['last_month_churncount']!=0,'churncount_rat'] = (df['churncount']-df['last_month_churncount'])/df['last_month_churncount']
df.loc[df['last_month_newcount']!=0,'newcount_rat'] = (df['newcount']-df['last_month_newcount'])/df['last_month_newcount']
df[['yearmonth','churncount','newcount','last_month_churncount','last_month_newcount','churncount_rat','newcount_rat']]
6. Summary
Pandas offers many methods for this kind of statistical computation. The techniques used here are the cumulative cumsum() function, the data-shifting shift() function, the merge() function for joining tables, and conditional assignment via loc.
Original: https://blog.csdn.net/xiaoyw/article/details/122979421
Author: 肖永威
Title: Practical examples of common Pandas methods for cumulative, year-over-year, and month-over-month statistics
