[Python] sklearn Module - Machine Learning in Python Primer (Machine Learning with Python Cookbook) - 04 - Handling Numerical Data


This book is like a reference manual or dictionary: it explains very clearly how to call specific Python APIs and the scenarios in which to use them. Even though it is a reference book, working through it once should give a deeper understanding of the Python libraries commonly used in machine learning and more fluency in applying them.

What follows is a hands-on run of the book's code. The comments state what each statement does (written before the statement) and record each print's output (written after the print). It does not strictly follow the book: the order is adjusted slightly to how the code actually runs, and some of my own understanding is added.

If you copy it into your own environment and run it yourself, I believe your understanding will be deeper and clearer.


Each code block in this post represents a complete run and can be copied and executed directly.


Covered here: mainly the sklearn module, with some applications to processing numerical features.

04-1 Feature Scaling

from sklearn import preprocessing
import numpy as np

# Create the feature
feature = np.array([[-500.5], [-100.1], [0], [100.1], [900.9]])
print(feature)
# [[-500.5]
#  [-100.1]
#  [   0. ]
#  [ 100.1]
#  [ 900.9]]

# -- Create a scaler: min-max scaling maps the feature's minimum and maximum to 0 and 1 respectively
minmax_scale = preprocessing.MinMaxScaler(feature_range = (0, 1))
# Rescale the feature
scaled_feature = minmax_scale.fit_transform(feature)
print(scaled_feature)
# [[0.        ]
#  [0.28571429]
#  [0.35714286]
#  [0.42857143]
#  [1.        ]]
# Print the mean and standard deviation
print(scaled_feature.mean())
print(scaled_feature.std())
# 0.41428571428571426
# 0.32701494692170274

# -- Create a scaler: standardization transforms to mean 0 and standard deviation 1
scaler = preprocessing.StandardScaler()
# Standardize the feature
scaled_feature = scaler.fit_transform(feature)
print(scaled_feature)
# [[-1.26687088]
#  [-0.39316683]
#  [-0.17474081]
#  [ 0.0436852 ]
#  [ 1.79109332]]
# Print the mean and standard deviation
print(scaled_feature.mean())
print(scaled_feature.std())
# 0.0
# 1.0

# -- Create a scaler for data containing outliers
scaler = preprocessing.RobustScaler()
# Scale the feature
scaled_feature = scaler.fit_transform(feature)
print(scaled_feature)
# [[-2.5]
#  [-0.5]
#  [ 0. ]
#  [ 0.5]
#  [ 4.5]]
# Print the mean and standard deviation
print(scaled_feature.mean())
print(scaled_feature.std())
# 0.4
# 2.2891046284519194
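
As a cross-check (my own addition, not from the book), each of the three scalers is a simple closed-form transformation that can be reproduced with plain NumPy:

import numpy as np

feature = np.array([[-500.5], [-100.1], [0], [100.1], [900.9]])

# MinMaxScaler: (x - min) / (max - min)
print((feature - feature.min()) / (feature.max() - feature.min()))
# StandardScaler: (x - mean) / std, with the population std (ddof = 0)
print((feature - feature.mean()) / feature.std())
# RobustScaler: (x - median) / IQR, where IQR = Q3 - Q1
q1, median, q3 = np.percentile(feature, [25, 50, 75])
print((feature - median) / (q3 - q1))
# Each print should match the corresponding scaler output above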

04-2 Normalizing Observations

The difference from feature scaling: feature scaling is computed per feature (column) across all observations, whereas normalization here is computed per observation, i.e. per sample (row).

from sklearn.preprocessing import Normalizer
import numpy as np

# Create a feature matrix
feature = np.array([[0.5, 0.5], [1.1, 3.4], [1.5, 20.2], [1.63, 34.4], [10.9, 3.3]])
print(feature)
# [[ 0.5   0.5 ]
#  [ 1.1   3.4 ]
#  [ 1.5  20.2 ]
#  [ 1.63 34.4 ]
#  [10.9   3.3 ]]

# Create a normalizer using the L2 norm
normalizer = Normalizer(norm = 'l2')
# Transform the feature matrix
print(normalizer.transform(feature))
# [[0.70710678 0.70710678]
#  [0.30782029 0.95144452]
#  [0.07405353 0.99725427]
#  [0.04733062 0.99887928]
#  [0.95709822 0.28976368]]

# Create a normalizer using the L1 norm
normalizer = Normalizer(norm = 'l1')
# Transform the feature matrix
print(normalizer.transform(feature))
# [[0.5        0.5       ]
#  [0.24444444 0.75555556]
#  [0.06912442 0.93087558]
#  [0.04524008 0.95475992]
#  [0.76760563 0.23239437]]

# Create a normalizer using the max norm
normalizer = Normalizer(norm = 'max')
# Transform the feature matrix
print(normalizer.transform(feature))
# [[1.         1.        ]
#  [0.32352941 1.        ]
#  [0.07425743 1.        ]
#  [0.04738372 1.        ]
#  [1.         0.30275229]]
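
As a cross-check (my own addition, not from the book), Normalizer's output can be reproduced by dividing each row by its own norm:

import numpy as np

feature = np.array([[0.5, 0.5], [1.1, 3.4], [1.5, 20.2], [1.63, 34.4], [10.9, 3.3]])

# L2: each row divided by its Euclidean length
print(feature / np.linalg.norm(feature, axis = 1, keepdims = True))
# L1: each row divided by the sum of its absolute values
print(feature / np.abs(feature).sum(axis = 1, keepdims = True))
# max: each row divided by its largest absolute entry
print(feature / np.abs(feature).max(axis = 1, keepdims = True))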

04-3 Polynomial Features and Interaction Features

  • Create polynomial features, to handle nonlinear relationships between features and the target
  • Create interaction features, to handle targets that depend on several features jointly
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

# Create a feature matrix
features = np.array([[2, 3], [2, 3], [2, 3]])
print(features)
# [[2 3]
#  [2 3]
#  [2 3]]

# Create a PolynomialFeatures object
polynomial_interaction = PolynomialFeatures(degree = 2, include_bias = False)
# -- Create polynomial features, for nonlinear relationships between features and the target; degree is the highest power
# x1, x2, x1^2, x1*x2, x2^2
print(polynomial_interaction.fit_transform(features))
# [[2. 3. 4. 6. 9.]
#  [2. 3. 4. 6. 9.]
#  [2. 3. 4. 6. 9.]]
polynomial_interaction = PolynomialFeatures(degree = 3, include_bias = False)
# degree = 3 generates terms up to the third power: all combinations of total degree <= 3
print(polynomial_interaction.fit_transform(features))
# [[ 2.  3.  4.  6.  9.  8. 12. 18. 27.]
#  [ 2.  3.  4.  6.  9.  8. 12. 18. 27.]
#  [ 2.  3.  4.  6.  9.  8. 12. 18. 27.]]

interaction = PolynomialFeatures(degree = 2, interaction_only = True, include_bias = False)
# -- Create interaction features, for targets determined jointly by several features; degree is the highest order
# x1, x2, x1*x2
print(interaction.fit_transform(features))
# [[2. 3. 6.]
#  [2. 3. 6.]
#  [2. 3. 6.]]
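
To see which output column corresponds to which term, newer sklearn versions (1.0+; this assumes such a version is installed) expose get_feature_names_out(). A small sketch of my own, not from the book:

from sklearn.preprocessing import PolynomialFeatures
import numpy as np

features = np.array([[2, 3], [2, 3], [2, 3]])
poly = PolynomialFeatures(degree = 2, include_bias = False)
poly.fit(features)
# Maps each output column to its term:
# ['x0' 'x1' 'x0^2' 'x0 x1' 'x1^2']
print(poly.get_feature_names_out())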

04-4 Custom Feature Transformations

Sometimes features need to be transformed according to your own requirements, for example taking the logarithm of a feature. This can be done in two ways: with the function transformer FunctionTransformer(), or with the apply() method in pandas.

from sklearn.preprocessing import FunctionTransformer
import numpy as np

# Create a feature matrix
features = np.array([[2, 3], [2, 3], [2, 3]])
print(features)
# [[2 3]
#  [2 3]
#  [2 3]]

# Define a custom function
def add_ten(x):
    return x + 10

# Create the transformer
ten_transformer = FunctionTransformer(add_ten)
print(ten_transformer.transform(features))
# [[12 13]
#  [12 13]
#  [12 13]]

# The same transformation can also be done with pandas
import pandas as pd

df = pd.DataFrame(features, columns = ['feature_1', 'feature_2'])
print(df.apply(add_ten))
#    feature_1  feature_2
# 0         12         13
# 1         12         13
# 2         12         13
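
The opening paragraph mentions taking logarithms as a typical custom transformation; here is a minimal sketch of that case (my own addition, not from the book), using np.log1p so that zero values stay defined:

from sklearn.preprocessing import FunctionTransformer
import numpy as np

features = np.array([[2, 3], [2, 3], [2, 3]])
# log1p(x) = log(1 + x)
log_transformer = FunctionTransformer(np.log1p)
print(log_transformer.transform(features))
# [[1.09861229 1.38629436]
#  [1.09861229 1.38629436]
#  [1.09861229 1.38629436]]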

04-5 Outliers

from sklearn.covariance import EllipticEnvelope
from sklearn.datasets import make_blobs
import numpy as np

# Create a simulated clustering dataset
features,_ = make_blobs(n_samples = 10, n_features = 2, centers = 1, random_state = 1)
print(features)
# [[-1.83198811  3.52863145]
#  [-2.76017908  5.55121358]
#  [-1.61734616  4.98930508]
#  [-0.52579046  3.3065986 ]
#  [ 0.08525186  3.64528297]
#  [-0.79415228  2.10495117]
#  [-1.34052081  4.15711949]
#  [-1.98197711  4.02243551]
#  [-2.18773166  3.33352125]
#  [-0.19745197  2.34634916]]

# Replace two values with extreme ones
features[0,1] = 10000
features[1,1] = 10000
print(features)
# [[-1.83198811e+00  1.00000000e+04]
#  [-2.76017908e+00  1.00000000e+04]
#  [-1.61734616e+00  4.98930508e+00]
#  [-5.25790464e-01  3.30659860e+00]
#  [ 8.52518583e-02  3.64528297e+00]
#  [-7.94152277e-01  2.10495117e+00]
#  [-1.34052081e+00  4.15711949e+00]
#  [-1.98197711e+00  4.02243551e+00]
#  [-2.18773166e+00  3.33352125e+00]
#  [-1.97451969e-01  2.34634916e+00]]

# ---- Method 1: EllipticEnvelope()
# Create an outlier detector; contamination is the expected proportion of outliers
outlier_detector = EllipticEnvelope(contamination = .1)
# Fit the detector
outlier_detector.fit(features)
# Predict outliers (-1 = outlier, 1 = inlier)
print(outlier_detector.predict(features))
# [-1  1  1  1  1  1  1  1  1  1]
# Change the contamination
outlier_detector = EllipticEnvelope(contamination = .3)
# Fit the detector
outlier_detector.fit(features)
# Predict outliers
print(outlier_detector.predict(features))
# [-1 -1  1  1 -1  1  1  1  1  1]
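
# (Addition, not from the book) predict() only returns 1/-1 labels; for a
# continuous ranking, EllipticEnvelope also provides decision_function(),
# where more negative scores indicate more anomalous observations
print(outlier_detector.decision_function(features))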

# ---- Method 2: interquartile range (IQR) detection
# You can also check a single feature for outliers using the IQR
# IQR = difference between the first and third quartiles
# Outliers are commonly defined as values more than 1.5 IQRs below the first quartile or above the third quartile
feature = features[:,1]
print(feature)
# [1.00000000e+04 1.00000000e+04 4.98930508e+00 3.30659860e+00
#  3.64528297e+00 2.10495117e+00 4.15711949e+00 4.02243551e+00
#  3.33352125e+00 2.34634916e+00]

# Define a function that uses the IQR rule to return the indices of outliers
def indices_of_outliers(x):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower_bound = q1 - (iqr * 1.5)
    upper_bound = q3 + (iqr * 1.5)
    return np.where((x > upper_bound) | (x < lower_bound))

# Identify the outlier indices
print(indices_of_outliers(feature))
# (array([0, 1]),)
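
# (Addition, not from the book) The returned indices can then be used to
# drop the outliers from the feature, e.g. with np.delete
print(np.delete(feature, indices_of_outliers(feature)[0]))
# [4.98930508 3.3065986  3.64528297 2.10495117 4.15711949 4.02243551
#  3.33352125 2.34634916]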

# ---- Handling outliers
# ----- Method 1: use RobustScaler() to scale features that contain outliers
from sklearn import preprocessing
scaler = preprocessing.RobustScaler()
scaled_feature = scaler.fit_transform(features)
print(scaled_feature)
# [[-2.61212566e-01  6.80970487e+03]
#  [-9.47948061e-01  6.80970487e+03]
#  [-1.02406616e-01  7.87126291e-01]
#  [ 7.05196630e-01 -3.59186642e-01]
#  [ 1.15728512e+00 -1.28464128e-01]
#  [ 5.06645267e-01 -1.17778692e+00]
#  [ 1.02406616e-01  2.20215119e-01]
#  [-3.72184092e-01  1.28464128e-01]
#  [-5.24414566e-01 -3.40846083e-01]
#  [ 9.48122608e-01 -1.01333897e+00]]

# ----- Method 2: analyze how the outliers arose and handle them case by case
import pandas as pd

# Create a data frame
houses = pd.DataFrame()
houses['Price'] = [534433, 392333, 293222, 4322032]
houses['Bathrooms'] = [2, 3.5, 2, 116] # number of bathrooms; 116 looks suspicious
houses['Square_Feet'] = [1500, 2500, 1500, 48000]
print(houses)
#      Price  Bathrooms  Square_Feet
# 0   534433        2.0         1500
# 1   392333        3.5         2500
# 2   293222        2.0         1500
# 3  4322032      116.0        48000

# Observations can be filtered directly using a known condition
print(houses[houses['Bathrooms'] < 20])
#     Price  Bathrooms  Square_Feet
# 0  534433        2.0         1500
# 1  392333        3.5         2500
# 2  293222        2.0         1500

# Or mark them as outliers and keep the flag as a feature of the dataset
houses['Outlier'] = np.where(houses['Bathrooms'] < 20, 0, 1)
print(houses)
#      Price  Bathrooms  Square_Feet  Outlier
# 0   534433        2.0         1500        0
# 1   392333        3.5         2500        0
# 2   293222        2.0         1500        0
# 3  4322032      116.0        48000        1

# Transform the outlier-bearing feature to reduce the outliers' influence
# Take the logarithm of the feature
houses['log_of_square_feet'] = [np.log(x) for x in houses['Square_Feet']]
print(houses)
#      Price  Bathrooms  Square_Feet  Outlier  log_of_square_feet
# 0   534433        2.0         1500        0            7.313220
# 1   392333        3.5         2500        0            7.824046
# 2   293222        2.0         1500        0            7.313220
# 3  4322032      116.0        48000        1           10.778956

04-6 Discretization and Grouping

from sklearn.preprocessing import Binarizer
import numpy as np

age = np.array([[6], [12], [20], [36], [65]])

# -- Method 1: two intervals, binarization
# Create a binarizer (threshold is keyword-only in current sklearn)
binarizer = Binarizer(threshold = 18)
# Binarize the feature
print(binarizer.fit_transform(age))
# [[0]
#  [0]
#  [1]
#  [1]
#  [1]]

# -- Method 2: multiple intervals, discretization
# Discretize the feature; bins is the list of interval edges; a value falling in the i-th interval (0..n) is mapped to i
print(np.digitize(age, bins = [18]))
# [[0]
#  [0]
#  [1]
#  [1]
#  [1]]
print(np.digitize(age, bins = [20, 30, 64]))
# [[0]
#  [0]
#  [1]
#  [2]
#  [3]]
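
# (Addition, not in the book excerpt) sklearn also offers KBinsDiscretizer,
# which learns the bin edges from the data instead of taking them as given.
from sklearn.preprocessing import KBinsDiscretizer
# strategy = 'uniform' splits the observed range [6, 65] into equal-width bins
discretizer = KBinsDiscretizer(n_bins = 3, encode = 'ordinal', strategy = 'uniform')
print(discretizer.fit_transform(age))
# [[0.]
#  [0.]
#  [0.]
#  [1.]
#  [2.]]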

# -- Method 3: no explicit rule, group by clustering
import pandas as pd
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Create a simulated feature matrix
features, _ = make_blobs(n_samples = 50, n_features = 2, centers = 3, random_state = 1)
print(features[:5])
# [[-9.87755355 -3.33614544]
#  [-7.28721033 -8.35398617]
#  [-6.94306091 -7.0237442 ]
#  [-7.44016713 -8.79195851]
#  [-6.64138783 -8.07588804]]
# Create a data frame
dataframe = pd.DataFrame(features, columns = ['feature_1', 'feature_2'])
print(dataframe.head(5))
#    feature_1  feature_2
# 0  -9.877554  -3.336145
# 1  -7.287210  -8.353986
# 2  -6.943061  -7.023744
# 3  -7.440167  -8.791959
# 4  -6.641388  -8.075888

# Create a k-means clusterer
clusterer = KMeans(n_clusters = 3, random_state = 0)
# Fit the clusterer to the features
clusterer.fit(features)
# Predict each observation's cluster
dataframe['group'] = clusterer.predict(features)
print(dataframe.head(5))
#    feature_1  feature_2  group
# 0  -9.877554  -3.336145      0
# 1  -7.287210  -8.353986      2
# 2  -6.943061  -7.023744      2
# 3  -7.440167  -8.791959      2
# 4  -6.641388  -8.075888      2

04-7 Handling Missing Values

import numpy as np

# Create a feature matrix
features = np.array([[1.1, 11.1], [2.2, 22.2], [3.3, 33.3], [4.4, 44.4], [np.nan, 55]])
print(features)
# [[ 1.1 11.1]
#  [ 2.2 22.2]
#  [ 3.3 33.3]
#  [ 4.4 44.4]
#  [ nan 55. ]]

# -- Method 1: keep only observations without missing values (~ negates the boolean mask)
print(features[~np.isnan(features).any(axis = 1)])
# [[ 1.1 11.1]
#  [ 2.2 22.2]
#  [ 3.3 33.3]
#  [ 4.4 44.4]]

# -- Method 2: pandas DataFrame.dropna()
import pandas as pd
dataframe = pd.DataFrame(features, columns = ['feature_1', 'feature_2'])
# Drop observations that contain missing values
print(dataframe.dropna())
#    feature_1  feature_2
# 0        1.1       11.1
# 1        2.2       22.2
# 2        3.3       33.3
# 3        4.4       44.4

# -- Filling in missing values
# --- Method 1: the fancyimpute module
from fancyimpute import KNN
# Imputation algorithm: k-nearest neighbors; samples are weighted by the mean squared difference over the features both rows have observed, and the weighted result fills the missing value
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_blobs

# Create a simulated feature matrix
features, _ = make_blobs(n_samples = 1000, n_features = 2, random_state = 1)
print(features[:5])
# [[-3.05837272  4.48825769]
#  [-8.60973869 -3.72714879]
#  [ 1.37129721  5.23107449]
#  [-9.33917563 -2.9544469 ]
#  [-8.63895561 -8.05263469]]

# Standardize the features
scaler = StandardScaler()
standardized_features = scaler.fit_transform(features)
print(standardized_features[:5])
# [[ 0.87301861  1.31426523]
#  [-0.67073178 -0.22369263]
#  [ 2.1048424   1.45332359]
#  [-0.87357709 -0.07903966]
#  [-0.67885655 -1.03344137]]

# Replace one entry with a missing value
true_value = standardized_features[0,0]
standardized_features[0,0] = np.nan
print(standardized_features[:5])
# [[        nan  1.31426523]
#  [-0.67073178 -0.22369263]
#  [ 2.1048424   1.45332359]
#  [-0.87357709 -0.07903966]
#  [-0.67885655 -1.03344137]]

# Impute the missing value in the feature matrix
features_knn_imputed = KNN(k = 5, verbose = 0).fit_transform(standardized_features)
# Compare the true value with the imputed value
print('True:', true_value)
print('Imputed:', features_knn_imputed[0,0])
# True: 0.8730186113995938
# Imputed: 1.0955332713113226

# --- Method 2: sklearn's SimpleImputer
# Fill with the feature's mean, median, or most frequent value; generally worse than KNN imputation
from sklearn.impute import SimpleImputer

# Create the imputer
mean_imputer = SimpleImputer(strategy = 'mean')
# Impute the missing values
features_mean_imputed = mean_imputer.fit_transform(standardized_features)
# Compare the true value with the imputed value
print('True:', true_value)
print('Imputed:', features_mean_imputed[0,0])
# True: 0.8730186113995938
# Imputed: -0.000874 (approximately: the column mean computed with the missing entry excluded)

# If you adopt an imputation strategy, it is best to add a binary feature indicating whether each observation was imputed; missingness itself can carry information
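
A minimal sketch of that advice (my own addition, not from the book): SimpleImputer's add_indicator option appends exactly such a binary column.

# add_indicator = True appends a 0/1 column for each feature that had
# missing values during fit (here only the first feature)
indicator_imputer = SimpleImputer(strategy = 'mean', add_indicator = True)
features_with_flag = indicator_imputer.fit_transform(standardized_features)
print(features_with_flag[:2])
# Row 0 carries the imputed value and indicator 1; row 1 gets indicator 0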

Original: https://www.cnblogs.com/camilia/p/16700449.html
Author: CAMILIA
Title: [Python] sklearn Module - Machine Learning in Python Primer (Machine Learning with Python Cookbook) - 04 - Handling Numerical Data


