【GPUSOROBAN】An Introduction to "Image Data Augmentation" Techniques, a Handy Tool for Building Deep-Learning Facial Keypoint Detection Models

Introduction

*This article walks through a deep-learning example run in Jupyter Lab on a GPUSOROBAN instance. If you are using GPUSOROBAN, please make sure beforehand that you can create an instance and launch Jupyter Lab.

This article introduces techniques used for facial image recognition. If you are interested in the details of the training itself, see the reference articles listed at the bottom.

When performing facial image recognition with deep learning, a common approach is to train a model on the positions of characteristic facial points (keypoints) so that it can locate those keypoints at inference time.

When building such a model, a few pitfalls deserve attention: a careless setup easily overfits and drags out training. Often there is too little image data for adequate training, and the data that has been collected frequently has some of its values missing.

Below, we first show how to identify and handle images with missing data, and then how to augment a limited image dataset to increase the amount of training data.

Environment

Recommended minimum GPU memory: 40 GB

TensorFlow version: 2.4.1

*Model training time: approx. 12 min; fitting the model to the full dataset: approx. 12 min; total run time: approx. 24 min

Execution environment: a GPUSOROBAN nvd4-1dl instance

Downloading the dataset and code

Start a GPUSOROBAN instance and download the dataset and source code used in this article from GitHub.

If git is not installed, install it with the following commands:

sudo apt -y update
sudo apt -y install git  

Next, download the required files with the following command:

git clone https://github.com/highreso/Data_Augmentation.git

Open Data_Augmentation_for_Facial_Keypoint_Detection.ipynb in the resulting Data_Augmentation folder and run the cells in order, checking the contents of each.

Setting up the conda virtual environment

Here we used a virtual environment named tensorflow24_py36; select it as the conda environment in Jupyter Lab's kernel selector. If the environment is not available yet, it can be created roughly as sketched below.
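
The following commands are a sketch based on this article's stated versions (the exact setup on a GPUSOROBAN image may differ):

conda create -n tensorflow24_py36 python=3.6
conda activate tensorflow24_py36
pip install tensorflow==2.4.1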

Once the kernel is ready, install the required packages as follows.

Installing the required modules

In [1]:

!pip install -U pandas==1.1.5
!pip install -U matplotlib==3.3.4
!pip install -U opencv-python==4.4.0.46
!pip install -U tqdm==4.64.0

import sys
sys.path.append('/home/user/.local/lib/python3.6/site-packages')

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
from math import sin, cos, pi
import cv2
from tqdm.notebook import tqdm
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential, Model, load_model
from keras.layers import Activation, Convolution2D, MaxPooling2D, BatchNormalization, Flatten, Dense, Dropout, Conv2D,MaxPool2D, ZeroPadding2D
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint
from keras.optimizers import Adam
import keras; print(keras.__version__)

2.4.3

画像データ拡張方法の設定

In [2]:

horizontal_flip = False
rotation_augmentation = True
brightness_augmentation = True
shift_augmentation = True
random_noise_augmentation = True
include_unclean_data = True    # Whether to include samples with missing keypoint values; note the missing values are filled later with pandas 'ffill'
sample_image_index = 20    # Index of the sample training image used to visualize the various augmentations
rotation_angles = [12]    # Rotation angles in degrees (both clockwise and counter-clockwise rotations are applied)
pixel_shifts = [12]    # Horizontal/vertical shift amounts in pixels (shifts toward all four corners are applied)
NUM_EPOCHS = 80
BATCH_SIZE = 64

Load the training data, test data, and ID lookup table into pandas DataFrames.

In [3]:

%%time
train_file = 'data/training.csv'
test_file = 'data/test.csv'
idlookup_file = 'data/IdLookupTable.csv'
train_data = pd.read_csv(train_file)
test_data = pd.read_csv(test_file)
idlookup_data = pd.read_csv(idlookup_file)

CPU times: user 1.9 s, sys: 180 ms, total: 2.08 s
Wall time: 2.07 s

A helper function that plots facial keypoints on an image.

In [4]:

def plot_sample(image, keypoint, axis, title):
    image = image.reshape(96,96)
    axis.imshow(image, cmap='gray')
    axis.scatter(keypoint[0::2], keypoint[1::2], marker='x', s=20)
    plt.title(title)

Use .head() to inspect the first five rows of the data.

In [5]:

train_data.head().T

Out[5]:

                                 0        1        2        3        4
left_eye_center_x          66.0336  64.3329  65.0571  65.2257  66.7253
left_eye_center_y          39.0023  34.9701  34.9096  37.2618  39.6213
right_eye_center_x         30.227   29.9493  30.9038  32.0231  32.2448
right_eye_center_y         36.4217  33.4487  34.9096  37.2618  38.042
left_eye_inner_corner_x    59.5821  58.8562  59.412   60.0033  58.5659
left_eye_inner_corner_y    39.6474  35.2743  36.321   39.1272  39.6213
left_eye_outer_corner_x    73.1303  70.7227  70.9844  72.3147  72.5159
left_eye_outer_corner_y    39.97    36.1872  36.321   38.381   39.8845
right_eye_inner_corner_x   36.3566  36.0347  37.6781  37.6186  36.9824
right_eye_inner_corner_y   37.3894  34.3615  36.321   38.7541  39.0949
right_eye_outer_corner_x   23.4529  24.4725  24.9764  25.3073  22.5061
right_eye_outer_corner_y   37.3894  33.1444  36.6032  38.0079  38.3052
left_eyebrow_inner_end_x   56.9533  53.9874  55.7425  56.4338  57.2496
left_eyebrow_inner_end_y   29.0336  28.2759  27.5709  30.9299  30.6722
left_eyebrow_outer_end_x   80.2271  78.6342  78.8874  77.9103  77.7629
left_eyebrow_outer_end_y   32.2281  30.4059  32.6516  31.6657  31.7372
right_eyebrow_inner_end_x  40.2276  42.7289  42.1939  41.6715  38.0354
right_eyebrow_inner_end_y  29.0023  26.146   28.1355  31.05    30.9354
right_eyebrow_outer_end_x  16.3564  16.8654  16.7912  20.458   15.9259
right_eyebrow_outer_end_y  29.6475  27.0589  32.0871  29.9093  30.6722
nose_tip_x                 44.4206  48.2063  47.5573  51.8851  43.2995
nose_tip_y                 57.0668  55.6609  53.5389  54.1665  64.8895
mouth_left_corner_x        61.1953  56.4214  60.8229  65.5989  60.6714
mouth_left_corner_y        79.9702  76.352   73.0143  72.7037  77.5232
mouth_right_corner_x       28.6145  35.1224  33.7263  37.2455  31.1918
mouth_right_corner_y       77.389   76.0477  72.732   74.1955  76.9973
mouth_center_top_lip_x     43.3126  46.6846  47.2749  50.3032  44.9627
mouth_center_top_lip_y     72.9355  70.2666  70.1918  70.0917  73.7074
mouth_center_bottom_lip_x  43.1307  45.4679  47.2749  51.5612  44.2271
mouth_center_bottom_lip_y  84.4858  85.4802  78.6594  78.2684  86.8712
Image                      238 236 237 238 240 240 239 241 241 243 240 23…  219 215 204 196 204 211 212 200 180 168 178 19…  144 142 159 180 188 188 184 180 167 132 84 59 …  193 192 193 194 194 194 193 192 168 111 50 12 …  147 148 160 196 215 214 216 217 219 220 206 18…

In [6]:

test_data.head()

Out[6]:

   ImageId                                             Image
0        1  182 183 182 182 180 180 176 169 156 137 124 10…
1        2  76 87 81 72 65 59 64 76 69 42 31 38 49 58 58 4…
2        3  177 176 174 170 169 169 168 166 166 166 161 14…
3        4  176 174 174 175 174 174 176 176 175 171 165 15…
4        5  50 47 44 101 144 149 120 58 48 42 35 35 37 39 …

In [7]:

idlookup_data.head().T

Out[7]:

                             0                  1                   2                   3                        4
RowId                        1                  2                   3                   4                        5
ImageId                      1                  1                   1                   1                        1
FeatureName  left_eye_center_x  left_eye_center_y  right_eye_center_x  right_eye_center_y  left_eye_inner_corner_x
Location                   NaN                NaN                 NaN                 NaN                      NaN

Next, check whether any images are missing pixel data.

In [8]:

print("トレインデータの長さ: {}".format(len(train_data)))
print("画素値が欠損している画像の数: {}".format(len(train_data) - int(train_data.Image.apply(lambda x: len(x.split())).value_counts().values)))

Length of train data: 7049
Number of images with missing pixel values: 0
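
The int(...) in the cell above relies on every image having the same pixel count; a slightly more robust check (a sketch, not part of the notebook) counts the rows whose pixel string does not contain the expected 96 × 96 = 9216 values:

n_bad = (train_data.Image.str.split().str.len() != 96*96).sum()
print(n_bad)    # 0 for this dataset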

Next, find the columns that contain null (NaN) keypoint values and count them.

In [9]:

train_data.isnull().sum()

Out[9]:

left_eye_center_x              10
left_eye_center_y              10
right_eye_center_x             13
right_eye_center_y             13
left_eye_inner_corner_x      4778
left_eye_inner_corner_y      4778
left_eye_outer_corner_x      4782
left_eye_outer_corner_y      4782
right_eye_inner_corner_x     4781
right_eye_inner_corner_y     4781
right_eye_outer_corner_x     4781
right_eye_outer_corner_y     4781
left_eyebrow_inner_end_x     4779
left_eyebrow_inner_end_y     4779
left_eyebrow_outer_end_x     4824
left_eyebrow_outer_end_y     4824
right_eyebrow_inner_end_x    4779
right_eyebrow_inner_end_y    4779
right_eyebrow_outer_end_x    4813
right_eyebrow_outer_end_y    4813
nose_tip_x                      0
nose_tip_y                      0
mouth_left_corner_x          4780
mouth_left_corner_y          4780
mouth_right_corner_x         4779
mouth_right_corner_y         4779
mouth_center_top_lip_x       4774
mouth_center_top_lip_y       4774
mouth_center_bottom_lip_x      33
mouth_center_bottom_lip_y      33
Image                           0
dtype: int64

In [10]:

%%time
clean_train_data = train_data.dropna()
print("clean_train_data 形状: {}".format(np.shape(clean_train_data)))
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
unclean_train_data = train_data.fillna(method = 'ffill')
print("unclean_train_data 形状: {}\n".format(np.shape(unclean_train_data)))

clean_train_data shape: (2140, 31)
unclean_train_data shape: (7049, 31)
CPU times: user 7.05 ms, sys: 184 µs, total: 7.23 ms
Wall time: 6.58 ms

This shows that roughly 70% of the samples have at least one missing keypoint: only 2,140 of the 7,049 rows are complete.
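
As a quick sanity check (not in the original notebook), that fraction can be computed directly:

print(1 - len(clean_train_data) / len(train_data))    # ≈ 0.696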

Next, we turn the clean subset and the unclean (ffill-ed) subset into image and keypoint arrays.

In [11]:

%%time
def load_images(image_data):
    images = []
    for idx, sample in image_data.iterrows():
        image = np.array(sample['Image'].split(' '), dtype=int)
        image = np.reshape(image, (96,96,1))
        images.append(image)
    images = np.array(images)/255.
    return images
def load_keypoints(keypoint_data):
    keypoint_data = keypoint_data.drop('Image',axis = 1)
    keypoint_features = []
    for idx, sample_keypoints in keypoint_data.iterrows():
        keypoint_features.append(sample_keypoints)
    keypoint_features = np.array(keypoint_features, dtype = 'float')
    return keypoint_features
clean_train_images = load_images(clean_train_data)
print("Shape of clean_train_images: {}".format(np.shape(clean_train_images)))
clean_train_keypoints = load_keypoints(clean_train_data)
print("Shape of clean_train_keypoints: {}".format(np.shape(clean_train_keypoints)))
test_images = load_images(test_data)
print("Shape of test_images: {}".format(np.shape(test_images)))
train_images = clean_train_images
train_keypoints = clean_train_keypoints
fig, axis = plt.subplots()
plot_sample(clean_train_images[sample_image_index], clean_train_keypoints[sample_image_index], axis, "Sample image & keypoints")
if include_unclean_data:
    unclean_train_images = load_images(unclean_train_data)
    print("Shape of unclean_train_images: {}".format(np.shape(unclean_train_images)))
    unclean_train_keypoints = load_keypoints(unclean_train_data)
    print("Shape of unclean_train_keypoints: {}\n".format(np.shape(unclean_train_keypoints)))
    train_images = np.concatenate((train_images, unclean_train_images))
    train_keypoints = np.concatenate((train_keypoints, unclean_train_keypoints))

Shape of clean_train_images: (2140, 96, 96, 1)
Shape of clean_train_keypoints: (2140, 30)
Shape of test_images: (1783, 96, 96, 1)
Shape of unclean_train_images: (7049, 96, 96, 1)
Shape of unclean_train_keypoints: (7049, 30)
CPU times: user 15.8 s, sys: 820 ms, total: 16.6 s
Wall time: 16.6 s

A function that mirror-flips (horizontally flips) the images. With horizontal_flip = False in the configuration above, this augmentation is skipped in this run.

In [12]:

def left_right_flip(images, keypoints):
    flipped_keypoints = []
    flipped_images = np.flip(images, axis=2)   # Flip column-wise (axis=2)
    for idx, sample_keypoints in enumerate(keypoints):
        flipped_keypoints.append([96.-coor if idx%2==0 else coor for idx,coor in enumerate(sample_keypoints)])    # Subtract only X co-ordinates of keypoints from 96 for horizontal flipping
    return flipped_images, flipped_keypoints
if horizontal_flip:
    flipped_train_images, flipped_train_keypoints = left_right_flip(clean_train_images, clean_train_keypoints)
    print("Shape of flipped_train_images: {}".format(np.shape(flipped_train_images)))
    print("Shape of flipped_train_keypoints: {}".format(np.shape(flipped_train_keypoints)))
    train_images = np.concatenate((train_images, flipped_train_images))
    train_keypoints = np.concatenate((train_keypoints, flipped_train_keypoints))
    fig, axis = plt.subplots()
    plot_sample(flipped_train_images[sample_image_index], flipped_train_keypoints[sample_image_index], axis, "Horizontally Flipped") 
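
As a quick check of the keypoint arithmetic (a minimal sketch, not part of the notebook): flipping an image and its keypoints twice should reproduce the originals, since each x coordinate is mapped to 96 − x on every flip.

imgs2, kps2 = left_right_flip(*left_right_flip(clean_train_images[:1], clean_train_keypoints[:1]))
print(np.allclose(imgs2, clean_train_images[:1]), np.allclose(kps2, clean_train_keypoints[:1]))    # True True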

A function for rotating the images and their keypoints.

In [13]:

def rotate_augmentation(images, keypoints):
    rotated_images = []
    rotated_keypoints = []
    print("Augmenting for angles (in degrees): ")
    for angle in rotation_angles:    # Augment over the list of rotation angles
        for angle in [angle,-angle]:
            print(f'{angle}', end='  ')
            M = cv2.getRotationMatrix2D((48,48), angle, 1.0)
            angle_rad = -angle*pi/180.     # Convert the angle from degrees to radians (the negative sign is needed because cv2's image rotation and the conventional mathematical rotation run in opposite directions)
            # For train_images
            for image in images:
                rotated_image = cv2.warpAffine(image, M, (96,96), flags=cv2.INTER_CUBIC)
                rotated_images.append(rotated_image)
            # For train_keypoints
            for keypoint in keypoints:
                rotated_keypoint = keypoint - 48.    # Subtract the image half-size so the rotation is about the centre
                for idx in range(0,len(rotated_keypoint),2):
                    # https://in.mathworks.com/matlabcentral/answers/93554-how-can-i-rotate-a-set-of-points-in-a-plane-by-a-certain-angle-about-an-arbitrary-point
                    # Keep the original (x, y) in temporaries; updating in place would feed the already-rotated x into the y equation
                    x, y = rotated_keypoint[idx], rotated_keypoint[idx+1]
                    rotated_keypoint[idx] = x*cos(angle_rad) - y*sin(angle_rad)
                    rotated_keypoint[idx+1] = x*sin(angle_rad) + y*cos(angle_rad)
                rotated_keypoint += 48.   # Add back the offset subtracted above
                rotated_keypoints.append(rotated_keypoint)

    return np.reshape(rotated_images,(-1,96,96,1)), rotated_keypoints
if rotation_augmentation:
    rotated_train_images, rotated_train_keypoints = rotate_augmentation(clean_train_images, clean_train_keypoints)
    print("\nShape of rotated_train_images: {}".format(np.shape(rotated_train_images)))
    print("Shape of rotated_train_keypoints: {}\n".format(np.shape(rotated_train_keypoints)))
    train_images = np.concatenate((train_images, rotated_train_images))
    train_keypoints = np.concatenate((train_keypoints, rotated_train_keypoints))
    fig, axis = plt.subplots()
    plot_sample(rotated_train_images[sample_image_index], rotated_train_keypoints[sample_image_index], axis, "Rotation Augmentation")

Augmenting for angles (in degrees):
12 -12
Shape of rotated_train_images: (4280, 96, 96, 1)
Shape of rotated_train_keypoints: (4280, 30)
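
For reference, each keypoint (x, y) is rotated about the image centre (48, 48) by the standard 2-D rotation

x' = (x − 48)·cos θ − (y − 48)·sin θ + 48
y' = (x − 48)·sin θ + (y − 48)·cos θ + 48

which is why the code above stores the original x and y in temporaries before assigning: updating x in place first would feed the already-rotated x into the second equation.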

A function for altering image brightness.

In [14]:

def alter_brightness(images, keypoints):
    altered_brightness_images = []
    inc_brightness_images = np.clip(images*1.2, 0.0, 1.0)    # Increase brightness by a factor of 1.2 and clip values to the [0, 1] range
    dec_brightness_images = np.clip(images*0.6, 0.0, 1.0)    # Decrease brightness by a factor of 0.6 and clip values to the [0, 1] range
    altered_brightness_images.extend(inc_brightness_images)
    altered_brightness_images.extend(dec_brightness_images)
    return altered_brightness_images, np.concatenate((keypoints, keypoints))
if brightness_augmentation:
    altered_brightness_train_images, altered_brightness_train_keypoints = alter_brightness(clean_train_images, clean_train_keypoints)
    print(f"Shape of altered_brightness_train_images: {np.shape(altered_brightness_train_images)}")
    print(f"Shape of altered_brightness_train_keypoints: {np.shape(altered_brightness_train_keypoints)}")
    train_images = np.concatenate((train_images, altered_brightness_train_images))
    train_keypoints = np.concatenate((train_keypoints, altered_brightness_train_keypoints))
    fig, axis = plt.subplots()
    plot_sample(altered_brightness_train_images[sample_image_index], altered_brightness_train_keypoints[sample_image_index], axis, "Increased Brightness") 
    fig, axis = plt.subplots()
    plot_sample(altered_brightness_train_images[len(altered_brightness_train_images)//2+sample_image_index], altered_brightness_train_keypoints[len(altered_brightness_train_images)//2+sample_image_index], axis, "Decreased Brightness") 

Shape of altered_brightness_train_images: (4280, 96, 96, 1)
Shape of altered_brightness_train_keypoints: (4280, 30)

A function for horizontal and vertical shifts. Shifted samples whose keypoints fall outside the 96×96 frame are discarded, which is why fewer than 2140 × 4 = 8560 shifted images remain below.

In [15]:

def shift_images(images, keypoints):
    shifted_images = []
    shifted_keypoints = []
    for shift in pixel_shifts:    # Augmenting over several pixel shift values
        for (shift_x,shift_y) in [(-shift,-shift),(-shift,shift),(shift,-shift),(shift,shift)]:
            M = np.float32([[1,0,shift_x],[0,1,shift_y]])
            for image, keypoint in zip(images, keypoints):
                shifted_image = cv2.warpAffine(image, M, (96,96), flags=cv2.INTER_CUBIC)
                shifted_keypoint = np.array([(point+shift_x) if idx%2==0 else (point+shift_y) for idx, point in enumerate(keypoint)])
                if np.all(0.0<shifted_keypoint) and np.all(shifted_keypoint<96.0):
                    shifted_images.append(shifted_image.reshape(96,96,1))
                    shifted_keypoints.append(shifted_keypoint)
    shifted_keypoints = np.clip(shifted_keypoints,0.0,96.0)
    return shifted_images, shifted_keypoints
if shift_augmentation:
    shifted_train_images, shifted_train_keypoints = shift_images(clean_train_images, clean_train_keypoints)
    print(f"Shape of shifted_train_images: {np.shape(shifted_train_images)}")
    print(f"Shape of shifted_train_keypoints: {np.shape(shifted_train_keypoints)}")
    train_images = np.concatenate((train_images, shifted_train_images))
    train_keypoints = np.concatenate((train_keypoints, shifted_train_keypoints))
    fig, axis = plt.subplots()
    plot_sample(shifted_train_images[sample_image_index], shifted_train_keypoints[sample_image_index], axis, "Shift Augmentation")

Shape of shifted_train_images: (6350, 96, 96, 1)
Shape of shifted_train_keypoints: (6350, 30)

A function for adding random noise.

In [16]:

def add_noise(images):
    noisy_images = []
    for image in images:
        noisy_image = cv2.add(image, 0.008*np.random.randn(96,96,1))    # Add small random Gaussian noise to the input image
        noisy_images.append(noisy_image.reshape(96,96,1))
    return noisy_images
if random_noise_augmentation:
    noisy_train_images = add_noise(clean_train_images)
    print(f"Shape of noisy_train_images: {np.shape(noisy_train_images)}")
    train_images = np.concatenate((train_images, noisy_train_images))
    train_keypoints = np.concatenate((train_keypoints, clean_train_keypoints))
    fig, axis = plt.subplots()
    plot_sample(noisy_train_images[sample_image_index], clean_train_keypoints[sample_image_index], axis, "Random Noise Augmentation")

Shape of noisy_train_images: (2140, 96, 96, 1)

Visualizing the keypoints on the corresponding training images

In [17]:

print("最終的なtrain_imagesの形状: {}".format(np.shape(train_images)))
print("最終的なtrain_keypointsの形状: {}".format(np.shape(train_keypoints)))
print("\n クリーントレイン・データ ")
fig = plt.figure(figsize=(20,8))
for i in range(10):
    axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
    plot_sample(clean_train_images[i], clean_train_keypoints[i], axis, "")
plt.show()
if include_unclean_data:
    print("アンクリーンなトレインデータ ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(unclean_train_images[i], unclean_train_keypoints[i], axis, "")
    plt.show()
if horizontal_flip:
    print("ホリゾンタル フリップ オーギュメンテーション: ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(flipped_train_images[i], flipped_train_keypoints[i], axis, "")
    plt.show()
if rotation_augmentation:
    print("ローテーション・オーギュメンテーション: ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(rotated_train_images[i], rotated_train_keypoints[i], axis, "")
    plt.show()

if brightness_augmentation:
    print("輝度補強: ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(altered_brightness_train_images[i], altered_brightness_train_keypoints[i], axis, "")
    plt.show()
if shift_augmentation:
    print("シフト補強: ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(shifted_train_images[i], shifted_train_keypoints[i], axis, "")
    plt.show()

if random_noise_augmentation:
    print("ランダムノイズの補強: ")
    fig = plt.figure(figsize=(20,8))
    for i in range(10):
        axis = fig.add_subplot(2, 5, i+1, xticks=[], yticks=[])
        plot_sample(noisy_train_images[i], clean_train_keypoints[i], axis, "")
    plt.show()

Final shape of train_images: (26239, 96, 96, 1)
Final shape of train_keypoints: (26239, 30)

(The total adds up: 2140 clean + 7049 ffill-ed + 4280 rotated + 4280 brightness-altered + 6350 shifted + 2140 noisy = 26239 images.)

Clean Train Data:

Unclean Train Data:

Rotation Augmentation:

Brightness Augmentation:

Shift Augmentation:

Random Noise Augmentation:

Building the model. The final Dense(30) layer outputs the (x, y) coordinates of the 15 keypoints.

In [18]:

model = Sequential()
# Input dimensions: (None, 96, 96, 1)
model.add(Convolution2D(32, (3,3), padding='same', use_bias=False, input_shape=(96,96,1)))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 96, 96, 32)
model.add(Convolution2D(32, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
# Input dimensions: (None, 48, 48, 32)
model.add(Convolution2D(64, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 48, 48, 64)
model.add(Convolution2D(64, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
# Input dimensions: (None, 24, 24, 64)
model.add(Convolution2D(96, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 24, 24, 96)
model.add(Convolution2D(96, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
# Input dimensions: (None, 12, 12, 96)
model.add(Convolution2D(128, (3,3),padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 12, 12, 128)
model.add(Convolution2D(128, (3,3),padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
# Input dimensions: (None, 6, 6, 128)
model.add(Convolution2D(256, (3,3),padding='same',use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 6, 6, 256)
model.add(Convolution2D(256, (3,3),padding='same',use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
# Input dimensions: (None, 3, 3, 256)
model.add(Convolution2D(512, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 3, 3, 512)
model.add(Convolution2D(512, (3,3), padding='same', use_bias=False))
model.add(LeakyReLU(alpha = 0.1))
model.add(BatchNormalization())
# Input dimensions: (None, 3, 3, 512)
model.add(Flatten())
model.add(Dense(512,activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(30))
model.summary()

Model: "sequential"

Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 96, 96, 32) 288

leakyrelu (LeakyReLU) (None, 96, 96, 32) 0

batchnormalization (BatchNo (None, 96, 96, 32) 128

conv2d1 (Conv2D) (None, 96, 96, 32) 9216

leaky_relu1 (LeakyReLU) (None, 96, 96, 32) 0

batchnormalization1 (Batch (None, 96, 96, 32) 128

maxpooling2d (MaxPooling2D) (None, 48, 48, 32) 0

conv2d2 (Conv2D) (None, 48, 48, 64) 18432

leakyrelu2 (LeakyReLU) (None, 48, 48, 64) 0

batchnormalization2 (Batch (None, 48, 48, 64) 256

conv2d3 (Conv2D) (None, 48, 48, 64) 36864

leakyrelu3 (LeakyReLU) (None, 48, 48, 64) 0

batchnormalization3 (Batch (None, 48, 48, 64) 256

maxpooling2d1 (MaxPooling2 (None, 24, 24, 64) 0

conv2d4 (Conv2D) (None, 24, 24, 96) 55296

leakyrelu4 (LeakyReLU) (None, 24, 24, 96) 0

batchnormalization4 (Batch (None, 24, 24, 96) 384

conv2d5 (Conv2D) (None, 24, 24, 96) 82944

leakyrelu5 (LeakyReLU) (None, 24, 24, 96) 0

batchnormalization5 (Batch (None, 24, 24, 96) 384

maxpooling2d2 (MaxPooling2 (None, 12, 12, 96) 0

conv2d6 (Conv2D) (None, 12, 12, 128) 110592

leakyrelu6 (LeakyReLU) (None, 12, 12, 128) 0

batchnormalization6 (Batch (None, 12, 12, 128) 512

conv2d7 (Conv2D) (None, 12, 12, 128) 147456

leakyrelu7 (LeakyReLU) (None, 12, 12, 128) 0

batchnormalization7 (Batch (None, 12, 12, 128) 512

maxpooling2d3 (MaxPooling2 (None, 6, 6, 128) 0

conv2d8 (Conv2D) (None, 6, 6, 256) 294912

leakyrelu8 (LeakyReLU) (None, 6, 6, 256) 0

batchnormalization8 (Batch (None, 6, 6, 256) 1024

conv2d9 (Conv2D) (None, 6, 6, 256) 589824

leakyrelu9 (LeakyReLU) (None, 6, 6, 256) 0

batchnormalization9 (Batch (None, 6, 6, 256) 1024

maxpooling2d4 (MaxPooling2 (None, 3, 3, 256) 0

conv2d10 (Conv2D) (None, 3, 3, 512) 1179648

leakyrelu10 (LeakyReLU) (None, 3, 3, 512) 0

batchnormalization10 (Batc (None, 3, 3, 512) 2048

conv2d11 (Conv2D) (None, 3, 3, 512) 2359296

leakyrelu11 (LeakyReLU) (None, 3, 3, 512) 0

batchnormalization11 (Batc (None, 3, 3, 512) 2048

flatten (Flatten) (None, 4608) 0

dense (Dense) (None, 512) 2359808

dropout (Dropout) (None, 512) 0

dense1 (Dense) (None, 30) 15390
=================================================================
Total params: 7,268,670
Trainable params: 7,264,318
Non-trainable params: 4,352

Compiling and training the model

In [19]:

%%time
# Load a previously trained model if one exists
if os.path.exists('best_model.hdf5'):
    model = load_model('best_model.hdf5')
# Define the required callbacks
checkpointer = ModelCheckpoint(filepath = 'best_model.hdf5', monitor='val_mae', verbose=1, save_best_only=True, mode='min')
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae', 'acc'])
history = model.fit(train_images, train_keypoints, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, validation_split=0.05, callbacks=[checkpointer])

Epoch 1/80
390/390 [==============================] – 17s 22ms/step – loss: 1.2687 – mae: 0.8489 – acc: 0.9261 – val_loss: 0.2112 – val_mae: 0.3256 – val_acc: 0.9505
Epoch 00001: val_mae improved from inf to 0.32560, saving model to best_model.hdf5
Epoch 2/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2590 – mae: 0.8412 – acc: 0.9288 – val_loss: 0.1967 – val_mae: 0.3169 – val_acc: 0.9512
Epoch 00002: val_mae improved from 0.32560 to 0.31686, saving model to best_model.hdf5
Epoch 3/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2581 – mae: 0.8459 – acc: 0.9258 – val_loss: 0.2040 – val_mae: 0.3227 – val_acc: 0.9497
Epoch 00003: val_mae did not improve from 0.31686
Epoch 4/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2578 – mae: 0.8378 – acc: 0.9292 – val_loss: 0.2460 – val_mae: 0.3715 – val_acc: 0.9535
Epoch 00004: val_mae did not improve from 0.31686
Epoch 5/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2553 – mae: 0.8460 – acc: 0.9287 – val_loss: 0.1839 – val_mae: 0.3023 – val_acc: 0.9573
Epoch 00005: val_mae improved from 0.31686 to 0.30228, saving model to best_model.hdf5
Epoch 6/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2357 – mae: 0.8359 – acc: 0.9278 – val_loss: 0.1911 – val_mae: 0.3089 – val_acc: 0.9566
Epoch 00006: val_mae did not improve from 0.30228
Epoch 7/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2242 – mae: 0.8319 – acc: 0.9240 – val_loss: 0.4524 – val_mae: 0.5374 – val_acc: 0.9482
Epoch 00007: val_mae did not improve from 0.30228
Epoch 8/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2410 – mae: 0.8458 – acc: 0.9288 – val_loss: 0.2670 – val_mae: 0.3835 – val_acc: 0.9520
Epoch 00008: val_mae did not improve from 0.30228
Epoch 9/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2293 – mae: 0.8397 – acc: 0.9295 – val_loss: 0.3407 – val_mae: 0.4508 – val_acc: 0.9497
Epoch 00009: val_mae did not improve from 0.30228
Epoch 10/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2669 – mae: 0.8494 – acc: 0.9263 – val_loss: 0.1990 – val_mae: 0.3179 – val_acc: 0.9459
Epoch 00010: val_mae did not improve from 0.30228
Epoch 11/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2297 – mae: 0.8351 – acc: 0.9325 – val_loss: 0.2137 – val_mae: 0.3349 – val_acc: 0.9512
Epoch 00011: val_mae did not improve from 0.30228
Epoch 12/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2370 – mae: 0.8351 – acc: 0.9272 – val_loss: 0.2002 – val_mae: 0.3165 – val_acc: 0.9505
Epoch 00012: val_mae did not improve from 0.30228
Epoch 13/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2574 – mae: 0.8342 – acc: 0.9261 – val_loss: 0.2315 – val_mae: 0.3559 – val_acc: 0.9581
Epoch 00013: val_mae did not improve from 0.30228
Epoch 14/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2439 – mae: 0.8390 – acc: 0.9275 – val_loss: 0.1979 – val_mae: 0.3145 – val_acc: 0.9573
Epoch 00014: val_mae did not improve from 0.30228
Epoch 15/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2405 – mae: 0.8367 – acc: 0.9312 – val_loss: 0.3189 – val_mae: 0.4282 – val_acc: 0.9527
Epoch 00015: val_mae did not improve from 0.30228
Epoch 16/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2473 – mae: 0.8370 – acc: 0.9256 – val_loss: 0.2798 – val_mae: 0.3983 – val_acc: 0.9543
Epoch 00016: val_mae did not improve from 0.30228
Epoch 17/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2243 – mae: 0.8352 – acc: 0.9283 – val_loss: 0.2235 – val_mae: 0.3361 – val_acc: 0.9543
Epoch 00017: val_mae did not improve from 0.30228
Epoch 18/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2528 – mae: 0.8352 – acc: 0.9316 – val_loss: 0.2056 – val_mae: 0.3251 – val_acc: 0.9527
Epoch 00018: val_mae did not improve from 0.30228
Epoch 19/80
390/390 [==============================] – 8s 20ms/step – loss: 1.2199 – mae: 0.8305 – acc: 0.9266 – val_loss: 0.2282 – val_mae: 0.3369 – val_acc: 0.9566
Epoch 00019: val_mae did not improve from 0.30228
Epoch 20/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2512 – mae: 0.8384 – acc: 0.9312 – val_loss: 0.2212 – val_mae: 0.3366 – val_acc: 0.9634
Epoch 00020: val_mae did not improve from 0.30228
Epoch 21/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2350 – mae: 0.8345 – acc: 0.9247 – val_loss: 0.1981 – val_mae: 0.3117 – val_acc: 0.9520
Epoch 00021: val_mae did not improve from 0.30228
Epoch 22/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2485 – mae: 0.8376 – acc: 0.9315 – val_loss: 0.2794 – val_mae: 0.3964 – val_acc: 0.9573
Epoch 00022: val_mae did not improve from 0.30228
Epoch 23/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1980 – mae: 0.8280 – acc: 0.9246 – val_loss: 0.2172 – val_mae: 0.3288 – val_acc: 0.9543
Epoch 00023: val_mae did not improve from 0.30228
Epoch 24/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2082 – mae: 0.8282 – acc: 0.9321 – val_loss: 0.2367 – val_mae: 0.3548 – val_acc: 0.9527
Epoch 00024: val_mae did not improve from 0.30228
Epoch 25/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2166 – mae: 0.8266 – acc: 0.9273 – val_loss: 0.2190 – val_mae: 0.3376 – val_acc: 0.9497
Epoch 00025: val_mae did not improve from 0.30228
Epoch 26/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2835 – mae: 0.8464 – acc: 0.9271 – val_loss: 0.1933 – val_mae: 0.3112 – val_acc: 0.9474
Epoch 00026: val_mae did not improve from 0.30228
Epoch 27/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2202 – mae: 0.8287 – acc: 0.9289 – val_loss: 0.2088 – val_mae: 0.3249 – val_acc: 0.9550
Epoch 00027: val_mae did not improve from 0.30228
Epoch 28/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2131 – mae: 0.8313 – acc: 0.9299 – val_loss: 0.2757 – val_mae: 0.3892 – val_acc: 0.9543
Epoch 00028: val_mae did not improve from 0.30228
Epoch 29/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2368 – mae: 0.8303 – acc: 0.9260 – val_loss: 0.1895 – val_mae: 0.3039 – val_acc: 0.9428
Epoch 00029: val_mae did not improve from 0.30228
Epoch 30/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2472 – mae: 0.8360 – acc: 0.9290 – val_loss: 0.2617 – val_mae: 0.3896 – val_acc: 0.9527
Epoch 00030: val_mae did not improve from 0.30228
Epoch 31/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2615 – mae: 0.8391 – acc: 0.9316 – val_loss: 0.1934 – val_mae: 0.3203 – val_acc: 0.9527
Epoch 00031: val_mae did not improve from 0.30228
Epoch 32/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1934 – mae: 0.8226 – acc: 0.9280 – val_loss: 0.1969 – val_mae: 0.3210 – val_acc: 0.9505
Epoch 00032: val_mae did not improve from 0.30228
Epoch 33/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2224 – mae: 0.8294 – acc: 0.9327 – val_loss: 0.2527 – val_mae: 0.3721 – val_acc: 0.9527
Epoch 00033: val_mae did not improve from 0.30228
Epoch 34/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1843 – mae: 0.8212 – acc: 0.9297 – val_loss: 0.5109 – val_mae: 0.5746 – val_acc: 0.9581
Epoch 00034: val_mae did not improve from 0.30228
Epoch 35/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2014 – mae: 0.8240 – acc: 0.9312 – val_loss: 0.4305 – val_mae: 0.5179 – val_acc: 0.9566
Epoch 00035: val_mae did not improve from 0.30228
Epoch 36/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1885 – mae: 0.8247 – acc: 0.9300 – val_loss: 0.2108 – val_mae: 0.3303 – val_acc: 0.9604
Epoch 00036: val_mae did not improve from 0.30228
Epoch 37/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2226 – mae: 0.8280 – acc: 0.9280 – val_loss: 0.2608 – val_mae: 0.3730 – val_acc: 0.9566
Epoch 00037: val_mae did not improve from 0.30228
Epoch 38/80
390/390 [==============================] – 8s 22ms/step – loss: 1.2362 – mae: 0.8281 – acc: 0.9293 – val_loss: 0.1968 – val_mae: 0.3134 – val_acc: 0.9535
Epoch 00038: val_mae did not improve from 0.30228
Epoch 39/80
390/390 [==============================] – 9s 22ms/step – loss: 1.1855 – mae: 0.8224 – acc: 0.9292 – val_loss: 0.2335 – val_mae: 0.3528 – val_acc: 0.9581
Epoch 00039: val_mae did not improve from 0.30228
Epoch 40/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2147 – mae: 0.8332 – acc: 0.9287 – val_loss: 0.1986 – val_mae: 0.3129 – val_acc: 0.9550
Epoch 00040: val_mae did not improve from 0.30228
Epoch 41/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1674 – mae: 0.8147 – acc: 0.9299 – val_loss: 0.2478 – val_mae: 0.3606 – val_acc: 0.9512
Epoch 00041: val_mae did not improve from 0.30228
Epoch 42/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2444 – mae: 0.8311 – acc: 0.9274 – val_loss: 0.1790 – val_mae: 0.2886 – val_acc: 0.9573
Epoch 00042: val_mae improved from 0.30228 to 0.28864, saving model to best_model.hdf5
Epoch 43/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1871 – mae: 0.8233 – acc: 0.9274 – val_loss: 0.2364 – val_mae: 0.3502 – val_acc: 0.9543
Epoch 00043: val_mae did not improve from 0.28864
Epoch 44/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1843 – mae: 0.8182 – acc: 0.9311 – val_loss: 0.1817 – val_mae: 0.2896 – val_acc: 0.9566
Epoch 00044: val_mae did not improve from 0.28864
Epoch 45/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2045 – mae: 0.8217 – acc: 0.9322 – val_loss: 0.3202 – val_mae: 0.4295 – val_acc: 0.9466
Epoch 00045: val_mae did not improve from 0.28864
Epoch 46/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1781 – mae: 0.8157 – acc: 0.9264 – val_loss: 0.2208 – val_mae: 0.3374 – val_acc: 0.9558
Epoch 00046: val_mae did not improve from 0.28864
Epoch 47/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1757 – mae: 0.8162 – acc: 0.9288 – val_loss: 0.1945 – val_mae: 0.3057 – val_acc: 0.9428
Epoch 00047: val_mae did not improve from 0.28864
Epoch 48/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1877 – mae: 0.8164 – acc: 0.9271 – val_loss: 0.4179 – val_mae: 0.4970 – val_acc: 0.9497
Epoch 00048: val_mae did not improve from 0.28864
Epoch 49/80
390/390 [==============================] – 8s 22ms/step – loss: 1.2007 – mae: 0.8266 – acc: 0.9294 – val_loss: 0.1975 – val_mae: 0.3077 – val_acc: 0.9558
Epoch 00049: val_mae did not improve from 0.28864
Epoch 50/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1670 – mae: 0.8193 – acc: 0.9287 – val_loss: 0.2732 – val_mae: 0.3866 – val_acc: 0.9527
Epoch 00050: val_mae did not improve from 0.28864
Epoch 51/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1882 – mae: 0.8152 – acc: 0.9258 – val_loss: 0.1769 – val_mae: 0.2903 – val_acc: 0.9558
Epoch 00051: val_mae did not improve from 0.28864
Epoch 52/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2047 – mae: 0.8252 – acc: 0.9292 – val_loss: 0.2303 – val_mae: 0.3348 – val_acc: 0.9505
Epoch 00052: val_mae did not improve from 0.28864
Epoch 53/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1654 – mae: 0.8111 – acc: 0.9319 – val_loss: 0.2143 – val_mae: 0.3272 – val_acc: 0.9558
Epoch 00053: val_mae did not improve from 0.28864
Epoch 54/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1687 – mae: 0.8125 – acc: 0.9264 – val_loss: 0.2385 – val_mae: 0.3556 – val_acc: 0.9520
Epoch 00054: val_mae did not improve from 0.28864
Epoch 55/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1410 – mae: 0.8091 – acc: 0.9296 – val_loss: 0.2006 – val_mae: 0.3186 – val_acc: 0.9512
Epoch 00055: val_mae did not improve from 0.28864
Epoch 56/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1873 – mae: 0.8161 – acc: 0.9305 – val_loss: 0.2027 – val_mae: 0.3141 – val_acc: 0.9604
Epoch 00056: val_mae did not improve from 0.28864
Epoch 57/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1561 – mae: 0.8091 – acc: 0.9299 – val_loss: 0.2843 – val_mae: 0.3970 – val_acc: 0.9596
Epoch 00057: val_mae did not improve from 0.28864
Epoch 58/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1862 – mae: 0.8203 – acc: 0.9267 – val_loss: 0.1787 – val_mae: 0.2943 – val_acc: 0.9634
Epoch 00058: val_mae did not improve from 0.28864
Epoch 59/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1647 – mae: 0.8147 – acc: 0.9305 – val_loss: 0.1799 – val_mae: 0.2920 – val_acc: 0.9642
Epoch 00059: val_mae did not improve from 0.28864
Epoch 60/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1694 – mae: 0.8167 – acc: 0.9296 – val_loss: 0.2037 – val_mae: 0.3117 – val_acc: 0.9596
Epoch 00060: val_mae did not improve from 0.28864
Epoch 61/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1964 – mae: 0.8139 – acc: 0.9270 – val_loss: 0.2930 – val_mae: 0.4172 – val_acc: 0.9604
Epoch 00061: val_mae did not improve from 0.28864
Epoch 62/80
390/390 [==============================] – 9s 22ms/step – loss: 1.1472 – mae: 0.8079 – acc: 0.9308 – val_loss: 0.4413 – val_mae: 0.5314 – val_acc: 0.9611
Epoch 00062: val_mae did not improve from 0.28864
Epoch 63/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1963 – mae: 0.8136 – acc: 0.9313 – val_loss: 0.2041 – val_mae: 0.3179 – val_acc: 0.9535
Epoch 00063: val_mae did not improve from 0.28864
Epoch 64/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1574 – mae: 0.8116 – acc: 0.9310 – val_loss: 0.3462 – val_mae: 0.4525 – val_acc: 0.9588
Epoch 00064: val_mae did not improve from 0.28864
Epoch 65/80
390/390 [==============================] – 8s 21ms/step – loss: 1.2000 – mae: 0.8119 – acc: 0.9294 – val_loss: 0.1856 – val_mae: 0.3026 – val_acc: 0.9482
Epoch 00065: val_mae did not improve from 0.28864
Epoch 66/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1441 – mae: 0.8068 – acc: 0.9317 – val_loss: 0.2221 – val_mae: 0.3373 – val_acc: 0.9451
Epoch 00066: val_mae did not improve from 0.28864
Epoch 67/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1804 – mae: 0.8112 – acc: 0.9302 – val_loss: 0.2182 – val_mae: 0.3324 – val_acc: 0.9573
Epoch 00067: val_mae did not improve from 0.28864
Epoch 68/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1440 – mae: 0.8086 – acc: 0.9284 – val_loss: 0.1991 – val_mae: 0.2984 – val_acc: 0.9482
Epoch 00068: val_mae did not improve from 0.28864
Epoch 69/80
390/390 [==============================] – 8s 22ms/step – loss: 1.2539 – mae: 0.8214 – acc: 0.9315 – val_loss: 0.1898 – val_mae: 0.3032 – val_acc: 0.9627
Epoch 00069: val_mae did not improve from 0.28864
Epoch 70/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1562 – mae: 0.8054 – acc: 0.9284 – val_loss: 0.1889 – val_mae: 0.2982 – val_acc: 0.9558
Epoch 00070: val_mae did not improve from 0.28864
Epoch 71/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1632 – mae: 0.8026 – acc: 0.9288 – val_loss: 0.2146 – val_mae: 0.3264 – val_acc: 0.9390
Epoch 00071: val_mae did not improve from 0.28864
Epoch 72/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1247 – mae: 0.8011 – acc: 0.9303 – val_loss: 0.2015 – val_mae: 0.3166 – val_acc: 0.9428
Epoch 00072: val_mae did not improve from 0.28864
Epoch 73/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1228 – mae: 0.8007 – acc: 0.9260 – val_loss: 0.2335 – val_mae: 0.3550 – val_acc: 0.9558
Epoch 00073: val_mae did not improve from 0.28864
Epoch 74/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1387 – mae: 0.8022 – acc: 0.9312 – val_loss: 0.2064 – val_mae: 0.3156 – val_acc: 0.9413
Epoch 00074: val_mae did not improve from 0.28864
Epoch 75/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1265 – mae: 0.7984 – acc: 0.9289 – val_loss: 0.2746 – val_mae: 0.3883 – val_acc: 0.9619
Epoch 00075: val_mae did not improve from 0.28864
Epoch 76/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1504 – mae: 0.8032 – acc: 0.9305 – val_loss: 0.2451 – val_mae: 0.3562 – val_acc: 0.9497
Epoch 00076: val_mae did not improve from 0.28864
Epoch 77/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1514 – mae: 0.8051 – acc: 0.9271 – val_loss: 0.2822 – val_mae: 0.3910 – val_acc: 0.9649
Epoch 00077: val_mae did not improve from 0.28864
Epoch 78/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1611 – mae: 0.8042 – acc: 0.9283 – val_loss: 0.1963 – val_mae: 0.3109 – val_acc: 0.9581
Epoch 00078: val_mae did not improve from 0.28864
Epoch 79/80
390/390 [==============================] – 8s 22ms/step – loss: 1.1139 – mae: 0.7968 – acc: 0.9320 – val_loss: 0.4809 – val_mae: 0.5521 – val_acc: 0.9581
Epoch 00079: val_mae did not improve from 0.28864
Epoch 80/80
390/390 [==============================] – 8s 21ms/step – loss: 1.1430 – mae: 0.8053 – acc: 0.9320 – val_loss: 0.1713 – val_mae: 0.2798 – val_acc: 0.9588
Epoch 00080: val_mae improved from 0.28864 to 0.27985, saving model to best_model.hdf5
CPU times: user 11min 42s, sys: 1min 14s, total: 12min 56s
Wall time: 11min 9s


In [20]:

# Plot the mean-absolute-error history
try:
    plt.plot(history.history['mae'])
    plt.plot(history.history['val_mae'])
    plt.title('Mean Absolute Error vs Epoch')
    plt.ylabel('Mean Absolute Error')
    plt.xlabel('Epochs')
    plt.legend(['train', 'validation'], loc='upper right')
    plt.show()
    # Plot the accuracy history
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Accuracy vs Epoch')
    plt.ylabel('Accuracy')
    plt.xlabel('Epochs')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
    # Plot the loss history
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Loss vs Epoch')
    plt.ylabel('Loss')
    plt.xlabel('Epochs')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
except KeyError:
    print("One of the metrics used for plotting is missing! Check the metrics argument of model.compile().")

Fitting the model to the full dataset

In [21]:

%%time
# The ModelCheckpoint callback is changed to monitor the training MAE (rather than the validation MAE), since the model is now fitted on the full dataset with no validation split.
checkpointer = ModelCheckpoint(filepath = 'best_model.hdf5', monitor='mae', verbose=1, save_best_only=True, mode='min')
model.fit(train_images, train_keypoints, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, callbacks=[checkpointer])

Epoch 1/80
410/410 [==============================] – 9s 22ms/step – loss: 1.1263 – mae: 0.7971 – acc: 0.9294
Epoch 00001: mae improved from inf to 0.79713, saving model to best_model.hdf5
Epoch 2/80
410/410 [==============================] – 8s 21ms/step – loss: 1.1233 – mae: 0.7962 – acc: 0.9312
Epoch 00002: mae improved from 0.79713 to 0.79625, saving model to best_model.hdf5
Epoch 3/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1200 – mae: 0.7960 – acc: 0.9295
Epoch 00003: mae improved from 0.79625 to 0.79598, saving model to best_model.hdf5
Epoch 4/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1435 – mae: 0.8001 – acc: 0.9298
Epoch 00004: mae did not improve from 0.79598
Epoch 5/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1273 – mae: 0.7964 – acc: 0.9282
Epoch 00005: mae did not improve from 0.79598
Epoch 6/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1214 – mae: 0.7948 – acc: 0.9313
Epoch 00006: mae improved from 0.79598 to 0.79478, saving model to best_model.hdf5
Epoch 7/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1377 – mae: 0.7987 – acc: 0.9293
Epoch 00007: mae did not improve from 0.79478
Epoch 8/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1162 – mae: 0.7928 – acc: 0.9314
Epoch 00008: mae improved from 0.79478 to 0.79284, saving model to best_model.hdf5
Epoch 9/80
410/410 [==============================] – 8s 21ms/step – loss: 1.1249 – mae: 0.7938 – acc: 0.9295
Epoch 00009: mae did not improve from 0.79284
Epoch 10/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1067 – mae: 0.7885 – acc: 0.9321
Epoch 00010: mae improved from 0.79284 to 0.78855, saving model to best_model.hdf5
Epoch 11/80
410/410 [==============================] – 8s 21ms/step – loss: 1.1122 – mae: 0.7907 – acc: 0.9309
Epoch 00011: mae did not improve from 0.78855
Epoch 12/80
410/410 [==============================] – 8s 21ms/step – loss: 1.1314 – mae: 0.7985 – acc: 0.9302
Epoch 00012: mae did not improve from 0.78855
Epoch 13/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1091 – mae: 0.7899 – acc: 0.9314
Epoch 00013: mae did not improve from 0.78855
Epoch 14/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1140 – mae: 0.7926 – acc: 0.9282
Epoch 00014: mae did not improve from 0.78855
Epoch 15/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1183 – mae: 0.7918 – acc: 0.9303
Epoch 00015: mae did not improve from 0.78855
Epoch 16/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1081 – mae: 0.7884 – acc: 0.9283
Epoch 00016: mae improved from 0.78855 to 0.78844, saving model to best_model.hdf5
Epoch 17/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1023 – mae: 0.7879 – acc: 0.9288
Epoch 00017: mae improved from 0.78844 to 0.78788, saving model to best_model.hdf5
Epoch 18/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0883 – mae: 0.7827 – acc: 0.9297
Epoch 00018: mae improved from 0.78788 to 0.78266, saving model to best_model.hdf5
Epoch 19/80
410/410 [==============================] – 8s 21ms/step – loss: 1.1113 – mae: 0.7888 – acc: 0.9316
Epoch 00019: mae did not improve from 0.78266
Epoch 20/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0968 – mae: 0.7853 – acc: 0.9324
Epoch 00020: mae did not improve from 0.78266
Epoch 21/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0986 – mae: 0.7849 – acc: 0.9300
Epoch 00021: mae did not improve from 0.78266
Epoch 22/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0946 – mae: 0.7849 – acc: 0.9289
Epoch 00022: mae did not improve from 0.78266
Epoch 23/80
410/410 [==============================] – 9s 21ms/step – loss: 1.1036 – mae: 0.7868 – acc: 0.9323
Epoch 00023: mae did not improve from 0.78266
Epoch 24/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0969 – mae: 0.7864 – acc: 0.9305
Epoch 00024: mae did not improve from 0.78266
Epoch 25/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0962 – mae: 0.7854 – acc: 0.9301
Epoch 00025: mae did not improve from 0.78266
Epoch 26/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0999 – mae: 0.7857 – acc: 0.9303
Epoch 00026: mae did not improve from 0.78266
Epoch 27/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0795 – mae: 0.7792 – acc: 0.9325
Epoch 00027: mae improved from 0.78266 to 0.77925, saving model to best_model.hdf5
Epoch 28/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0794 – mae: 0.7800 – acc: 0.9336
Epoch 00028: mae did not improve from 0.77925
Epoch 29/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0711 – mae: 0.7754 – acc: 0.9327
Epoch 00029: mae improved from 0.77925 to 0.77543, saving model to best_model.hdf5
Epoch 30/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0857 – mae: 0.7825 – acc: 0.9299
Epoch 00030: mae did not improve from 0.77543
Epoch 31/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0912 – mae: 0.7819 – acc: 0.9327
Epoch 00031: mae did not improve from 0.77543
Epoch 32/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0900 – mae: 0.7829 – acc: 0.9315
Epoch 00032: mae did not improve from 0.77543
Epoch 33/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0599 – mae: 0.7708 – acc: 0.9312
Epoch 00033: mae improved from 0.77543 to 0.77079, saving model to best_model.hdf5
Epoch 34/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0747 – mae: 0.7742 – acc: 0.9324
Epoch 00034: mae did not improve from 0.77079
Epoch 35/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0706 – mae: 0.7754 – acc: 0.9302
Epoch 00035: mae did not improve from 0.77079
Epoch 36/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0733 – mae: 0.7779 – acc: 0.9331
Epoch 00036: mae did not improve from 0.77079
Epoch 37/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0622 – mae: 0.7728 – acc: 0.9310
Epoch 00037: mae did not improve from 0.77079
Epoch 38/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0577 – mae: 0.7713 – acc: 0.9308
Epoch 00038: mae did not improve from 0.77079
Epoch 39/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0461 – mae: 0.7664 – acc: 0.9322
Epoch 00039: mae improved from 0.77079 to 0.76639, saving model to best_model.hdf5
Epoch 40/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0627 – mae: 0.7747 – acc: 0.9328
Epoch 00040: mae did not improve from 0.76639
Epoch 41/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0627 – mae: 0.7737 – acc: 0.9304
Epoch 00041: mae did not improve from 0.76639
Epoch 42/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0514 – mae: 0.7684 – acc: 0.9310
Epoch 00042: mae did not improve from 0.76639
Epoch 43/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0448 – mae: 0.7663 – acc: 0.9320
Epoch 00043: mae improved from 0.76639 to 0.76632, saving model to best_model.hdf5
Epoch 44/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0515 – mae: 0.7692 – acc: 0.9316
Epoch 00044: mae did not improve from 0.76632
Epoch 45/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0504 – mae: 0.7658 – acc: 0.9335
Epoch 00045: mae improved from 0.76632 to 0.76577, saving model to best_model.hdf5
Epoch 46/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0562 – mae: 0.7681 – acc: 0.9326
Epoch 00046: mae did not improve from 0.76577
Epoch 47/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0360 – mae: 0.7626 – acc: 0.9332
Epoch 00047: mae improved from 0.76577 to 0.76260, saving model to best_model.hdf5
Epoch 48/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0563 – mae: 0.7677 – acc: 0.9313
Epoch 00048: mae did not improve from 0.76260
Epoch 49/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0305 – mae: 0.7619 – acc: 0.9317
Epoch 00049: mae improved from 0.76260 to 0.76190, saving model to best_model.hdf5
Epoch 50/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0535 – mae: 0.7686 – acc: 0.9337
Epoch 00050: mae did not improve from 0.76190
Epoch 51/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0140 – mae: 0.7549 – acc: 0.9315
Epoch 00051: mae improved from 0.76190 to 0.75490, saving model to best_model.hdf5
Epoch 52/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0414 – mae: 0.7646 – acc: 0.9331
Epoch 00052: mae did not improve from 0.75490
Epoch 53/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0211 – mae: 0.7562 – acc: 0.9318
Epoch 00053: mae did not improve from 0.75490
Epoch 54/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0336 – mae: 0.7620 – acc: 0.9328
Epoch 00054: mae did not improve from 0.75490
Epoch 55/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0235 – mae: 0.7589 – acc: 0.9343
Epoch 00055: mae did not improve from 0.75490
Epoch 56/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0349 – mae: 0.7618 – acc: 0.9329
Epoch 00056: mae did not improve from 0.75490
Epoch 57/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0300 – mae: 0.7599 – acc: 0.9319
Epoch 00057: mae did not improve from 0.75490
Epoch 58/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0178 – mae: 0.7569 – acc: 0.9340
Epoch 00058: mae did not improve from 0.75490
Epoch 59/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0074 – mae: 0.7530 – acc: 0.9324
Epoch 00059: mae improved from 0.75490 to 0.75300, saving model to best_model.hdf5
Epoch 60/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0217 – mae: 0.7557 – acc: 0.9310
Epoch 00060: mae did not improve from 0.75300
Epoch 61/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0356 – mae: 0.7613 – acc: 0.9328
Epoch 00061: mae did not improve from 0.75300
Epoch 62/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0084 – mae: 0.7520 – acc: 0.9318
Epoch 00062: mae improved from 0.75300 to 0.75201, saving model to best_model.hdf5
Epoch 63/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0124 – mae: 0.7522 – acc: 0.9337
Epoch 00063: mae did not improve from 0.75201
Epoch 64/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0105 – mae: 0.7524 – acc: 0.9307
Epoch 00064: mae did not improve from 0.75201
Epoch 65/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0036 – mae: 0.7507 – acc: 0.9314
Epoch 00065: mae improved from 0.75201 to 0.75065, saving model to best_model.hdf5
Epoch 66/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0068 – mae: 0.7513 – acc: 0.9333
Epoch 00066: mae did not improve from 0.75065
Epoch 67/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0120 – mae: 0.7545 – acc: 0.9325
Epoch 00067: mae did not improve from 0.75065
Epoch 68/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0078 – mae: 0.7524 – acc: 0.9346
Epoch 00068: mae did not improve from 0.75065
Epoch 69/80
410/410 [==============================] – 9s 21ms/step – loss: 1.0056 – mae: 0.7496 – acc: 0.9351
Epoch 00069: mae improved from 0.75065 to 0.74965, saving model to best_model.hdf5
Epoch 70/80
410/410 [==============================] – 8s 21ms/step – loss: 0.9982 – mae: 0.7488 – acc: 0.9320
Epoch 00070: mae improved from 0.74965 to 0.74883, saving model to best_model.hdf5
Epoch 71/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0015 – mae: 0.7503 – acc: 0.9323
Epoch 00071: mae did not improve from 0.74883
Epoch 72/80
410/410 [==============================] – 9s 21ms/step – loss: 0.9943 – mae: 0.7482 – acc: 0.9332
Epoch 00072: mae improved from 0.74883 to 0.74820, saving model to best_model.hdf5
Epoch 73/80
410/410 [==============================] – 8s 21ms/step – loss: 0.9926 – mae: 0.7468 – acc: 0.9327
Epoch 00073: mae improved from 0.74820 to 0.74678, saving model to best_model.hdf5
Epoch 74/80
410/410 [==============================] – 8s 21ms/step – loss: 1.0037 – mae: 0.7500 – acc: 0.9326
Epoch 00074: mae did not improve from 0.74678
Epoch 75/80
410/410 [==============================] – 9s 21ms/step – loss: 0.9889 – mae: 0.7456 – acc: 0.9323
Epoch 00075: mae improved from 0.74678 to 0.74558, saving model to best_model.hdf5
Epoch 76/80
410/410 [==============================] – 8s 21ms/step – loss: 0.9955 – mae: 0.7485 – acc: 0.9339
Epoch 00076: mae did not improve from 0.74558
Epoch 77/80
410/410 [==============================] – 9s 21ms/step – loss: 0.9763 – mae: 0.7421 – acc: 0.9331
Epoch 00077: mae improved from 0.74558 to 0.74215, saving model to best_model.hdf5
Epoch 78/80
410/410 [==============================] – 8s 21ms/step – loss: 0.9925 – mae: 0.7470 – acc: 0.9338
Epoch 00078: mae did not improve from 0.74215
Epoch 79/80
410/410 [==============================] – 9s 21ms/step – loss: 0.9929 – mae: 0.7460 – acc: 0.9311
Epoch 00079: mae did not improve from 0.74215
Epoch 80/80
410/410 [==============================] – 9s 21ms/step – loss: 0.9796 – mae: 0.7419 – acc: 0.9339
Epoch 00080: mae improved from 0.74215 to 0.74187, saving model to best_model.hdf5
CPU times: user 11min 59s, sys: 1min 22s, total: 13min 22s
Wall time: 11min 30s


Out[21]:

<tensorflow.python.keras.callbacks.History at 0x7f16b90998d0>

Predictions on the test set

In [22]:

%%time

model = load_model('best_model.hdf5')
test_preds = model.predict(test_images)

CPU times: user 994 ms, sys: 117 ms, total: 1.11 s
Wall time: 1.07 s

Visualizing the predictions

In [23]:

fig = plt.figure(figsize=(20,16))
for i in range(20):
    axis = fig.add_subplot(4, 5, i+1, xticks=[], yticks=[])
    plot_sample(test_images[i], test_preds[i], axis, "")
plt.show()
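
The IdLookupTable loaded at the beginning specifies which keypoint of which test image is required. Although the notebook stops at visualization, a Kaggle-style submission file could be assembled along these lines (a sketch, not part of the original notebook; the output file name 'submission.csv' is arbitrary):

keypoint_cols = list(train_data.columns[:-1])    # the 30 keypoint columns; 'Image' is the last column
image_ids = idlookup_data['ImageId'] - 1         # ImageId is 1-based
locations = [test_preds[img][keypoint_cols.index(feat)]
             for img, feat in zip(image_ids, idlookup_data['FeatureName'])]
submission = pd.DataFrame({'RowId': idlookup_data['RowId'],
                           'Location': np.clip(locations, 0.0, 96.0)})
submission.to_csv('submission.csv', index=False)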

References:

Reference 1: Data Augmentation for Facial Keypoint Detection

Reference 2: Implementing Kaggle Facial Keypoints Detection in Keras
