WeChat official account: 尤而小屋
Author: Peter
Editor: Peter

Hello everyone, I'm Peter~

Today I'm bringing you a statistical analysis of a tumor dataset from Kaggle. It's a good quick start for beginners, and covers:

  • Frequency counts with histograms
  • Locating outliers with the interquartile (IQR) method
  • Descriptive statistics
  • Analysis based on the cumulative distribution function (CDF)
  • Pairwise variable analysis
  • Correlation analysis…

Kaggle in Practice: Statistical Analysis of Tumor Data

This is the 21st article in the Kaggle-in-practice series; for the others, see the related posts on the official account:

Kaggle in Practice: Statistical Analysis of Tumor Data

Dataset

The dataset is available at: www.kaggle.com/code/kannca…

The original data comes from the UCI repository: archive.ics.uci.edu/ml/datasets…


Import Libraries

In [1]:

import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use("ggplot")
import warnings
warnings.filterwarnings("ignore")

In [2]:

(The data-loading cell appears only as a screenshot in the original; it reads the downloaded CSV into `df`, e.g. via `pd.read_csv`.)

Basic Information

In [3]:

df.shape

Out[3]:

(569, 33)

In [4]:

df.isnull().sum()

Out[4]:

id                           0
diagnosis                    0
radius_mean                  0
texture_mean                 0
perimeter_mean               0
area_mean                    0
smoothness_mean              0
compactness_mean             0
concavity_mean               0
concave points_mean          0
symmetry_mean                0
fractal_dimension_mean       0
radius_se                    0
texture_se                   0
perimeter_se                 0
area_se                      0
smoothness_se                0
compactness_se               0
concavity_se                 0
concave points_se            0
symmetry_se                  0
fractal_dimension_se         0
radius_worst                 0
texture_worst                0
perimeter_worst              0
area_worst                   0
smoothness_worst             0
compactness_worst            0
concavity_worst              0
concave points_worst         0
symmetry_worst               0
fractal_dimension_worst      0
Unnamed: 32                569
dtype: int64

Drop the two columns that are useless for the analysis:

In [5]:

df.drop(["Unnamed: 32", "id"],axis=1,inplace=True)

The remaining columns:

In [6]:

columns = df.columns
columns

Out[6]:

Index(['diagnosis', 'radius_mean', 'texture_mean', 'perimeter_mean',       'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean',       'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean',       'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se',       'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se',       'fractal_dimension_se', 'radius_worst', 'texture_worst',       'perimeter_worst', 'area_worst', 'smoothness_worst',       'compactness_worst', 'concavity_worst', 'concave points_worst',       'symmetry_worst', 'fractal_dimension_worst'],
      dtype='object')

Analysis 1: Histogram

A histogram counts the frequency of values falling into each bin.

In [7]:

# radius_mean: mean radius
m = plt.hist(df[df["diagnosis"] == "M"].radius_mean,
             bins=30,
             fc=(1,0,0,0.5),
             label="Malignant"  # malignant tumors
            )
b = plt.hist(df[df["diagnosis"] == "B"].radius_mean,
             bins=30,
             fc=(0,1,0,0.5), 
             label="Benign"  # benign tumors
            )
plt.legend()
plt.xlabel("Radius Mean Values")
plt.ylabel("Frequency")
plt.title("Histogram of Radius Mean for Benign and Malignant Tumors")
plt.show()

(Figure: overlaid histograms of radius_mean for malignant and benign tumors)

Takeaways:

  1. The mean radius of malignant tumors is mostly larger than that of benign tumors.
  2. The distribution for benign tumors (green) is roughly bell-shaped, consistent with a normal distribution.

Analysis 2: Outlier Detection

Outliers are identified from the quartiles of the data (the 1.5 × IQR rule).

In [8]:

data_b = df[df["diagnosis"] == "B"]  # benign tumors
data_m = df[df["diagnosis"] == "M"]  # malignant tumors
desc = data_b.radius_mean.describe()
q1 = desc["25%"]  # label-based access is more robust than positional desc[4]
q3 = desc["75%"]
iqr = q3 - q1
lower = q1 - 1.5*iqr
upper = q3 + 1.5*iqr
# normal range
print("Normal range: ({0}, {1})".format(round(lower,4), round(upper,4)))
Normal range: (7.645, 16.805)

In [9]:

# outliers
print("Outliers:", data_b[(data_b.radius_mean < lower) | (data_b.radius_mean > upper)].radius_mean.values)
Outliers: [ 6.981 16.84  17.85 ]
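The quartile logic above generalizes to any numeric column; here is a small reusable sketch (the helper name `iqr_outliers` is my own, not from the original notebook):

```python
import pandas as pd

def iqr_outliers(s: pd.Series, k: float = 1.5):
    """Return the (lower, upper) fences and the values of s outside them."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (lower, upper), s[(s < lower) | (s > upper)]

# Tiny demo: 100 sits far outside the spread of the other values.
s = pd.Series([10, 11, 12, 12, 13, 14, 100])
(lo, hi), out = iqr_outliers(s)
print((lo, hi), out.tolist())  # → (8.5, 16.5) [100]
```

Applied to the notebook's data, `iqr_outliers(data_b.radius_mean)` reproduces the fences and outliers printed above.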

Analysis 3: Locating Outliers with Box Plots

A box plot gives a direct visual read on the outliers in the data.

In [10]:

# with Plotly
fig = px.box(df, 
             x="diagnosis",
             y="radius_mean",
             color="diagnosis")
fig.show()

(Figure: Plotly box plots of radius_mean by diagnosis)

# with seaborn
melted_df = pd.melt(df, 
                    id_vars = "diagnosis",
                    value_vars = ['radius_mean', 'texture_mean'])
plt.figure(figsize=(15,10))
sns.boxplot(x="variable",
            y="value", 
            hue="diagnosis",
            data=melted_df
           )
plt.show()

(Figure: seaborn box plots of radius_mean and texture_mean by diagnosis)

Analysis 4: Descriptive Statistics with describe

Descriptive statistics for the benign-tumor data data_b:

(Screenshot: output of data_b.describe())

# for tumor radius: radius_mean
print("mean: ",data_b.radius_mean.mean())
print("variance: ",data_b.radius_mean.var())
print("standard deviation (std): ",data_b.radius_mean.std())
print("describe method: ",data_b.radius_mean.describe())
# ----------------
mean:  12.14652380952381
variance:  3.170221722043872
standard deviation (std):  1.7805116461410389
describe method:  count    357.000000
mean      12.146524
std        1.780512
min        6.981000
25%       11.080000
50%       12.200000
75%       13.370000
max       17.850000
Name: radius_mean, dtype: float64

Analysis 5: CDF Analysis (Cumulative Distribution Function)

CDF stands for cumulative distribution function: the probability that the variable takes a value less than or equal to x, i.e. P(X <= x).

In [15]:

plt.hist(data_b.radius_mean,
        bins=50,
        fc=(0,1,0,0.5),
        label="Benign",
        density=True,  # `normed` was removed in Matplotlib 3.x
        cumulative=True
        )
data_sorted = np.sort(data_b.radius_mean)
y = np.arange(1, len(data_sorted) + 1) / float(len(data_sorted))
plt.title("CDF of Benign Tumor Radius Mean")
plt.plot(data_sorted, y, color="blue")
plt.show()

(Figure: cumulative histogram and empirical CDF of benign-tumor radius_mean)
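The empirical CDF drawn above can also be queried numerically: its value at x is just the fraction of observations at or below x. A sketch on toy numbers (the radii below are made up for illustration):

```python
import numpy as np

def ecdf(data, x):
    """Empirical P(X <= x): the fraction of observations at or below x."""
    return np.mean(np.asarray(data) <= x)

radii = [8.0, 10.5, 11.0, 12.2, 12.9, 13.4, 14.1, 17.8]
print(ecdf(radii, 12.2))  # → 0.5 (4 of the 8 values are <= 12.2)
```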

Analysis 6: Effect Size

Effect size describes the magnitude of the difference between two groups: the larger the value, the more pronounced the difference.

A common rule of thumb:

  • < 0.2: small effect
  • [0.2, 0.8]: medium effect
  • > 0.8: large effect

Here we analyze how much the radius_mean values of benign and malignant tumors differ.

In [16]:

diff = data_m.radius_mean.mean() - data_b.radius_mean.mean()
var_b = data_b.radius_mean.var()
var_m = data_m.radius_mean.var()
# pooled variance, weighted by group size
var = (len(data_b) * var_b + len(data_m) * var_m) / float(len(data_b) + len(data_m))
effect_size = diff / np.sqrt(var)
print("Effect Size: ", effect_size)
Effect Size:  2.2048585165041428

Clearly, there is a large effect between the two groups, which matches the earlier conclusion: the mean radii of benign and malignant tumors differ substantially.
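The computation in In [16] is essentially Cohen's d: the difference of the group means over a pooled standard deviation. A reusable sketch (the function name `cohens_d` and the synthetic groups are mine, not from the original notebook):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: difference of means over the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # pooled variance, weighted by group size (as in the notebook cell above)
    pooled = (len(a) * a.var(ddof=1) + len(b) * b.var(ddof=1)) / (len(a) + len(b))
    return (a.mean() - b.mean()) / np.sqrt(pooled)

# Two clearly separated synthetic groups give a large effect size (> 0.8).
rng = np.random.default_rng(0)
benign_like = rng.normal(12.1, 1.8, size=357)
malignant_like = rng.normal(17.5, 3.2, size=212)
print(round(cohens_d(malignant_like, benign_like), 2))
```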

Analysis 7: Pairwise Relationships

Two variables

Shown with a scatter plot combined with histograms.

In [17]:

plt.figure(figsize=(15, 10))
sns.jointplot(x="radius_mean",
              y="area_mean",
              data=df,
              kind="reg")
plt.show()

(Figure: joint plot of radius_mean vs area_mean with a regression fit)

We can see that these two features are positively correlated.

Multiple variables

In [18]:

sns.set(style="white")
df1 = df.loc[:,["radius_mean","area_mean","fractal_dimension_se"]]
g = sns.PairGrid(df1,diag_sharey = False,)
g.map_lower(sns.kdeplot,cmap="Blues_d")
g.map_upper(plt.scatter)
g.map_diag(sns.kdeplot,lw =3)
plt.show()

(Figure: PairGrid of radius_mean, area_mean, and fractal_dimension_se)

Analysis 8: Correlation Heatmap

In [19]:

corr = df.corr(numeric_only=True)  # correlation matrix; skips the non-numeric diagnosis column
f, ax = plt.subplots(figsize=(18,8))
sns.heatmap(corr,
            annot=True,  
            linewidths=0.5,
            fmt=".1f",
            ax=ax
           )
# rotate the tick labels
plt.xticks(rotation=90)
plt.yticks(rotation=0)
# title
plt.title('Correlation Map')
# save the figure
plt.savefig('graph.png')
plt.show()

(Figure: correlation heatmap of all numeric features)

Analysis 9: Covariance

Covariance measures how two variables vary together:

  • if they move in the same direction, the covariance is positive
  • if they are orthogonal (uncorrelated), the covariance is zero
  • if they move in opposite directions, the covariance is negative

In [20]:

# covariance matrix
np.cov(df.radius_mean, df.area_mean)

Out[20]:

array([[1.24189201e+01, 1.22448341e+03],
       [1.22448341e+03, 1.23843554e+05]])

In [21]:

# covariance of the two variables
df.radius_mean.cov(df.area_mean)

Out[21]:

1224.483409346457

In [22]:

# covariance of the two variables
df.radius_mean.cov(df.fractal_dimension_se)

Out[22]:

-0.0003976248576440629
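Both `np.cov` and `Series.cov` implement the sample covariance cov(X, Y) = Σ(xᵢ − x̄)(yᵢ − ȳ) / (n − 1); a quick check on toy numbers confirms they agree with the definition:

```python
import numpy as np
import pandas as pd

x = pd.Series([1.0, 2.0, 3.0, 4.0])
y = pd.Series([2.0, 4.0, 6.0, 8.0])

# Manual sample covariance (ddof=1, matching the pandas/NumPy defaults).
manual = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)
print(manual, x.cov(y), np.cov(x, y)[0, 1])  # all three equal 10/3
```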

Analysis 10: Pearson Correlation

Given two arrays A and B, the Pearson correlation coefficient is defined as:

Pearson = cov(A, B) / (std(A) * std(B))

In [23]:

p1 = df.loc[:,["area_mean","radius_mean"]].corr(method= "pearson")
p2 = df.radius_mean.cov(df.area_mean)/(df.radius_mean.std()*df.area_mean.std())
print('Pearson Correlation Metric: \n',p1)
Pearson Correlation Metric: 
              area_mean  radius_mean
area_mean     1.000000     0.987357
radius_mean   0.987357     1.000000

In [24]:

print('Pearson Correlation Value: \n', p2)
Pearson Correlation Value: 
 0.9873571700566132

Analysis 11: Spearman's Rank Correlation

Spearman's rank correlation measures correlation on the ranks of the data.

The Pearson coefficient assumes the relationship between the variables is linear and that they are roughly normally distributed.

When the data contain outliers, or the distributions are not normal, the Pearson coefficient is best avoided.

Here we use Spearman's rank correlation instead.

In [25]:

df_rank = df.rank()
spearman_corr = df_rank.loc[:,["area_mean","radius_mean"]].corr(method="spearman")
spearman_corr  # Spearman correlation matrix

Out[25]:

             area_mean  radius_mean
area_mean     1.000000     0.999602
radius_mean   0.999602     1.000000

Comparing the Pearson and Spearman coefficients:

  1. On this data, the Spearman correlation is slightly larger than the Pearson coefficient.
  2. When the data contain extreme outliers, the Spearman coefficient is more robust.
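Point 2 is easy to demonstrate on synthetic data: a single extreme outlier drags the Pearson coefficient down sharply, while Spearman, which only sees ranks, barely moves (a sketch; the numbers are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = x + rng.normal(scale=0.1, size=100)  # nearly perfectly correlated

# Inject one extreme, contrarian outlier.
x[0], y[0] = 10.0, -10.0

pearson = stats.pearsonr(x, y)[0]
spearman = stats.spearmanr(x, y)[0]
print(round(pearson, 3), round(spearman, 3))  # Pearson collapses; Spearman stays near 1
```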

Getting the Data

Follow the official account 尤而小屋 and reply 肿瘤 (tumor) to get the dataset used in this article, for learning purposes only.