{ "cells": [ { "cell_type": "markdown", "id": "7915f17e", "metadata": {}, "source": [ "#### Инициализация Keras\n", "\n", "Для ускорения обучения на GPU следует настраивать backend под конкретную ОС и модель GPU.\n", "\n", "Для ускорения pytorch на Windows и свежей карте от NVidia следует установить вместо обычного pytorch:\n", "```\n", "torch = { version = \"^2.7.0+cu128\", source = \"pytorch-cuda128\" }\n", "torchaudio = { version = \"^2.7.0+cu128\", source = \"pytorch-cuda128\" }\n", "torchvision = { version = \"^0.22.0+cu128\", source = \"pytorch-cuda128\" }\n", "```\n", "\n", "Обязательно следует включить репозиторий\n", "```\n", "[[tool.poetry.source]]\n", "name = \"pytorch-cuda128\"\n", "url = \"https://download.pytorch.org/whl/cu128\"\n", "priority = \"explicit\"\n", "```\n", "\n", "Для macOS можно использовать jax 0.5.0 (обязательно такая версия) + jax-metal 0.1.1" ] }, { "cell_type": "code", "execution_count": 1, "id": "560de685", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3.9.2\n" ] } ], "source": [ "import os\n", "\n", "os.environ[\"KERAS_BACKEND\"] = \"torch\"\n", "import keras\n", "\n", "print(keras.__version__)" ] }, { "cell_type": "markdown", "id": "27d07c7a", "metadata": {}, "source": [ "#### Загрузка набора данных для задачи классификации\n", "\n", "В данном примере используется фрагмент набора данных Cats and Dogs Classification Dataset\n", "\n", "В наборе данных два класса (всего 24 998 изображений): кошки (12 499 изображения) и собаки (12 499 изображения)\n", "\n", "Ссылка: https://www.kaggle.com/datasets/bhavikjikadara/dog-and-cat-classification-dataset" ] }, { "cell_type": "code", "execution_count": 2, "id": "24dd788e", "metadata": {}, "outputs": [], "source": [ "import kagglehub\n", "import os\n", "\n", "path = kagglehub.dataset_download(\"bhavikjikadara/dog-and-cat-classification-dataset\")\n", "path = os.path.join(path, \"PetImages\")" ] }, { "cell_type": "markdown", "id": "85652835", "metadata": {}, "source": [ "#### Формирование выборок\n", "\n", "Для формирования выборок используется устаревший (deprecated) класс ImageDataGenerator\n", "\n", "Вместо него рекомендуется использовать image_dataset_from_directory (https://keras.io/api/data_loading/image/)\n", "\n", "Для использования image_dataset_from_directory требуется tensorflow\n", "\n", "ImageDataGenerator формирует две выборки: обучающую и валидационную (80 на 20).\n", "\n", "В каждой выборке изображения масштабируются до размера 224 на 224 пиксела с RGB пространством.\n", "\n", "Изображения подгружаются с диска в процессе обучения и валидации модели." 
] }, { "cell_type": "code", "execution_count": 3, "id": "f68de944", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Found 20000 images belonging to 2 classes.\n", "Found 4998 images belonging to 2 classes.\n" ] }, { "data": { "text/plain": [ "{'Cat': 0, 'Dog': 1}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from keras.src.legacy.preprocessing.image import ImageDataGenerator\n", "\n", "batch_size = 64\n", "\n", "data_loader = ImageDataGenerator(validation_split=0.2)\n", "\n", "train = data_loader.flow_from_directory(\n", "    directory=path,\n", "    target_size=(224, 224),\n", "    color_mode=\"rgb\",\n", "    class_mode=\"binary\",\n", "    batch_size=batch_size,\n", "    shuffle=True,\n", "    seed=9,\n", "    subset=\"training\",\n", ")\n", "\n", "valid = data_loader.flow_from_directory(\n", "    directory=path,\n", "    target_size=(224, 224),\n", "    color_mode=\"rgb\",\n", "    class_mode=\"binary\",\n", "    batch_size=batch_size,\n", "    shuffle=True,\n", "    seed=9,\n", "    subset=\"validation\",\n", ")\n", "\n", "train.class_indices" ] }, { "cell_type": "markdown", "id": "bfb9434d", "metadata": {}, "source": [ "### AlexNet architecture\n", "\n", "The AlexNet model is described in the deep learning lecture." ] }, { "cell_type": "markdown", "id": "3250d20b", "metadata": {}, "source": [ "#### Designing the AlexNet architecture" ] }, { "cell_type": "code", "execution_count": 4, "id": "904b01b0", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
Model: \"sequential\"\n",
"
\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n", "┃ Layer (type) ┃ Output Shape ┃ Param # ┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n", "│ conv2d (Conv2D) │ (None, 54, 54, 96) │ 34,944 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d (MaxPooling2D) │ (None, 26, 26, 96) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization │ (None, 26, 26, 96) │ 384 │\n", "│ (BatchNormalization) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_1 (Conv2D) │ (None, 22, 22, 256) │ 614,656 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d_1 (MaxPooling2D) │ (None, 10, 10, 256) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_1 │ (None, 10, 10, 256) │ 1,024 │\n", "│ (BatchNormalization) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_2 (Conv2D) │ (None, 8, 8, 256) │ 590,080 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_3 (Conv2D) │ (None, 6, 6, 384) │ 885,120 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_4 (Conv2D) │ (None, 4, 4, 384) │ 1,327,488 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d_2 (MaxPooling2D) │ (None, 1, 1, 384) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_2 │ (None, 1, 1, 384) │ 1,536 │\n", "│ (BatchNormalization) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ flatten (Flatten) │ (None, 384) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense (Dense) │ (None, 4096) │ 1,576,960 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dropout (Dropout) │ (None, 4096) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense_1 (Dense) │ (None, 4096) │ 16,781,312 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dropout_1 (Dropout) │ (None, 4096) │ 0 │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense_2 (Dense) │ (None, 1) │ 4,097 │\n", "└─────────────────────────────────┴────────────────────────┴───────────────┘\n", "\n" ], "text/plain": [ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n", "│ conv2d (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m54\u001b[0m, \u001b[38;5;34m54\u001b[0m, \u001b[38;5;34m96\u001b[0m) │ \u001b[38;5;34m34,944\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m96\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", 
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m96\u001b[0m) │ \u001b[38;5;34m384\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_1 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m22\u001b[0m, \u001b[38;5;34m22\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m614,656\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d_1 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m10\u001b[0m, \u001b[38;5;34m10\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_1 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m10\u001b[0m, \u001b[38;5;34m10\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m1,024\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_2 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m8\u001b[0m, \u001b[38;5;34m8\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m590,080\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_3 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m6\u001b[0m, \u001b[38;5;34m6\u001b[0m, \u001b[38;5;34m384\u001b[0m) │ \u001b[38;5;34m885,120\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_4 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m384\u001b[0m) │ \u001b[38;5;34m1,327,488\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ max_pooling2d_2 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m384\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_2 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m384\u001b[0m) │ \u001b[38;5;34m1,536\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ flatten (\u001b[38;5;33mFlatten\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m384\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4096\u001b[0m) │ \u001b[38;5;34m1,576,960\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dropout (\u001b[38;5;33mDropout\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4096\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense_1 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4096\u001b[0m) │ 
\u001b[38;5;34m16,781,312\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dropout_1 (\u001b[38;5;33mDropout\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4096\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ dense_2 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m4,097\u001b[0m │\n", "└─────────────────────────────────┴────────────────────────┴───────────────┘\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Total params: 21,817,601 (83.23 MB)\n", "\n" ], "text/plain": [ "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m21,817,601\u001b[0m (83.23 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Trainable params: 21,816,129 (83.22 MB)\n", "\n" ], "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m21,816,129\u001b[0m (83.22 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Non-trainable params: 1,472 (5.75 KB)\n", "\n" ], "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m1,472\u001b[0m (5.75 KB)\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from keras.api.models import Sequential\n", "from keras.api.layers import InputLayer, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization\n", "\n", "alexnet_model = Sequential()\n", "\n", "# Input layer\n", "alexnet_model.add(InputLayer(shape=(224, 224, 3)))\n", "\n", "# First hidden layer\n", "alexnet_model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation=\"relu\"))\n", "alexnet_model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))\n", "alexnet_model.add(BatchNormalization())\n", "\n", "# Second hidden layer\n", "alexnet_model.add(Conv2D(256, kernel_size=(5, 5), activation=\"relu\"))\n", "alexnet_model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))\n", "alexnet_model.add(BatchNormalization())\n", "\n", "# Third hidden layer\n", "alexnet_model.add(Conv2D(256, kernel_size=(3, 3), activation=\"relu\"))\n", "\n", "# Fourth hidden layer\n", "alexnet_model.add(Conv2D(384, kernel_size=(3, 3), activation=\"relu\"))\n", "\n", "# Fifth hidden layer\n", "alexnet_model.add(Conv2D(384, kernel_size=(3, 3), activation=\"relu\"))\n", "alexnet_model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))\n", "alexnet_model.add(BatchNormalization())\n", "\n", "# Sixth hidden layer\n", "alexnet_model.add(Flatten())\n", "alexnet_model.add(Dense(4096, activation=\"tanh\"))\n", "alexnet_model.add(Dropout(0.5))\n", "\n", "# Seventh hidden layer\n", "alexnet_model.add(Dense(4096, activation=\"tanh\"))\n", "alexnet_model.add(Dropout(0.5))\n", "\n", "# Output layer\n", "alexnet_model.add(Dense(1, activation=\"sigmoid\"))\n", "\n", "alexnet_model.summary()" ] }, { "cell_type": "markdown", "id": "49fceead", "metadata": {}, "source": [ "#### Training the deep model" ] }, { "cell_type": "code", "execution_count": 5, "id": "fe650631", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "d:\\Projects\\Python\\mai\\.venv\\Lib\\site-packages\\keras\\src\\trainers\\data_adapters\\py_dataset_adapter.py:121: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`.
Do not pass these arguments to `fit()`, as they will be ignored.\n", " self._warn_if_super_not_called()\n", "d:\\Projects\\Python\\mai\\.venv\\Lib\\site-packages\\PIL\\TiffImagePlugin.py:900: UserWarning: Truncated File Read\n", " warnings.warn(str(msg))\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m93s\u001b[0m 295ms/step - accuracy: 0.5094 - loss: 1.5595 - val_accuracy: 0.5290 - val_loss: 0.7144\n", "Epoch 2/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m92s\u001b[0m 294ms/step - accuracy: 0.5305 - loss: 0.7776 - val_accuracy: 0.5314 - val_loss: 0.7015\n", "Epoch 3/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m92s\u001b[0m 294ms/step - accuracy: 0.5392 - loss: 0.7418 - val_accuracy: 0.5136 - val_loss: 0.7653\n", "Epoch 4/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m90s\u001b[0m 288ms/step - accuracy: 0.5461 - loss: 0.7339 - val_accuracy: 0.5676 - val_loss: 0.6940\n", "Epoch 5/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m97s\u001b[0m 310ms/step - accuracy: 0.5631 - loss: 0.7349 - val_accuracy: 0.4854 - val_loss: 0.7876\n", "Epoch 6/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m97s\u001b[0m 309ms/step - accuracy: 0.5519 - loss: 0.7588 - val_accuracy: 0.5784 - val_loss: 0.7633\n", "Epoch 7/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m92s\u001b[0m 293ms/step - accuracy: 0.5918 - loss: 0.6969 - val_accuracy: 0.5990 - val_loss: 0.6865\n", "Epoch 8/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.6017 - loss: 0.6950 - val_accuracy: 0.5470 - val_loss: 0.7832\n", "Epoch 9/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 280ms/step - accuracy: 0.5869 - loss: 0.7124 - val_accuracy: 0.5500 - val_loss: 0.7952\n", "Epoch 10/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m86s\u001b[0m 276ms/step - accuracy: 0.5894 - loss: 0.7112 - val_accuracy: 0.6182 - val_loss: 0.7114\n", "Epoch 11/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m93s\u001b[0m 296ms/step - accuracy: 0.5923 - loss: 0.7114 - val_accuracy: 0.5674 - val_loss: 0.7310\n", "Epoch 12/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m97s\u001b[0m 310ms/step - accuracy: 0.6261 - loss: 0.6881 - val_accuracy: 0.5842 - val_loss: 0.7458\n", "Epoch 13/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m98s\u001b[0m 312ms/step - accuracy: 0.6293 - loss: 0.6767 - val_accuracy: 0.5020 - val_loss: 0.9032\n", "Epoch 14/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m95s\u001b[0m 305ms/step - accuracy: 0.6181 - loss: 0.6952 - val_accuracy: 0.6417 - val_loss: 0.6388\n", "Epoch 15/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m93s\u001b[0m 299ms/step - accuracy: 0.6178 - loss: 0.6890 - val_accuracy: 0.5462 - val_loss: 
0.7037\n", "Epoch 16/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m99s\u001b[0m 315ms/step - accuracy: 0.5895 - loss: 0.7066 - val_accuracy: 0.6188 - val_loss: 0.7259\n", "Epoch 17/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m102s\u001b[0m 325ms/step - accuracy: 0.6549 - loss: 0.6548 - val_accuracy: 0.5402 - val_loss: 0.8502\n", "Epoch 18/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m99s\u001b[0m 315ms/step - accuracy: 0.6935 - loss: 0.6069 - val_accuracy: 0.5174 - val_loss: 1.1147\n", "Epoch 19/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m89s\u001b[0m 286ms/step - accuracy: 0.7211 - loss: 0.5727 - val_accuracy: 0.7047 - val_loss: 0.6050\n", "Epoch 20/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m86s\u001b[0m 276ms/step - accuracy: 0.7356 - loss: 0.5669 - val_accuracy: 0.6951 - val_loss: 0.5860\n", "Epoch 21/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m88s\u001b[0m 281ms/step - accuracy: 0.7518 - loss: 0.5439 - val_accuracy: 0.7709 - val_loss: 0.4893\n", "Epoch 22/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.7632 - loss: 0.5239 - val_accuracy: 0.7467 - val_loss: 0.5864\n", "Epoch 23/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.7727 - loss: 0.5033 - val_accuracy: 0.7751 - val_loss: 0.4713\n", "Epoch 24/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.7841 - loss: 0.4682 - val_accuracy: 0.7643 - val_loss: 0.5510\n", "Epoch 25/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m86s\u001b[0m 274ms/step - accuracy: 0.7872 - loss: 0.4699 - val_accuracy: 0.5776 - val_loss: 1.0140\n", "Epoch 26/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m85s\u001b[0m 273ms/step - accuracy: 0.7962 - loss: 0.4578 - val_accuracy: 0.6791 - val_loss: 0.6313\n", "Epoch 27/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m86s\u001b[0m 275ms/step - accuracy: 0.8093 - loss: 0.4240 - val_accuracy: 0.6463 - val_loss: 0.8024\n", "Epoch 28/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m91s\u001b[0m 291ms/step - accuracy: 0.8099 - loss: 0.4352 - val_accuracy: 0.7421 - val_loss: 0.5641\n", "Epoch 29/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m92s\u001b[0m 293ms/step - accuracy: 0.8185 - loss: 0.4183 - val_accuracy: 0.7937 - val_loss: 0.4554\n", "Epoch 30/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m93s\u001b[0m 296ms/step - accuracy: 0.8300 - loss: 0.3931 - val_accuracy: 0.7837 - val_loss: 0.4655\n", "Epoch 31/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m107s\u001b[0m 342ms/step - accuracy: 0.8468 - loss: 0.3578 - val_accuracy: 0.7977 - val_loss: 0.5012\n", "Epoch 32/100\n", "\u001b[1m313/313\u001b[0m 
\u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m116s\u001b[0m 372ms/step - accuracy: 0.8535 - loss: 0.3602 - val_accuracy: 0.7783 - val_loss: 0.5194\n", "Epoch 33/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m116s\u001b[0m 371ms/step - accuracy: 0.8608 - loss: 0.3326 - val_accuracy: 0.7873 - val_loss: 0.4888\n", "Epoch 34/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m111s\u001b[0m 356ms/step - accuracy: 0.8580 - loss: 0.3339 - val_accuracy: 0.7375 - val_loss: 0.6566\n", "Epoch 35/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m111s\u001b[0m 354ms/step - accuracy: 0.8615 - loss: 0.3329 - val_accuracy: 0.8181 - val_loss: 0.4174\n", "Epoch 36/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m110s\u001b[0m 350ms/step - accuracy: 0.8610 - loss: 0.3358 - val_accuracy: 0.6757 - val_loss: 0.9422\n", "Epoch 37/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m101s\u001b[0m 324ms/step - accuracy: 0.8593 - loss: 0.3543 - val_accuracy: 0.8081 - val_loss: 0.5241\n", "Epoch 38/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m101s\u001b[0m 323ms/step - accuracy: 0.8805 - loss: 0.3017 - val_accuracy: 0.8401 - val_loss: 0.3856\n", "Epoch 39/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m101s\u001b[0m 324ms/step - accuracy: 0.8928 - loss: 0.2749 - val_accuracy: 0.7851 - val_loss: 0.4438\n", "Epoch 40/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m105s\u001b[0m 337ms/step - accuracy: 0.8995 - loss: 0.2546 - val_accuracy: 0.8591 - val_loss: 0.3600\n", "Epoch 41/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 338ms/step - accuracy: 0.9109 - loss: 0.2269 - val_accuracy: 0.7961 - val_loss: 0.5176\n", "Epoch 42/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 340ms/step - accuracy: 0.9057 - loss: 0.2371 - val_accuracy: 0.8575 - val_loss: 0.3894\n", "Epoch 43/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 340ms/step - accuracy: 0.9111 - loss: 0.2292 - val_accuracy: 0.8493 - val_loss: 0.4270\n", "Epoch 44/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 339ms/step - accuracy: 0.9188 - loss: 0.2122 - val_accuracy: 0.8497 - val_loss: 0.4038\n", "Epoch 45/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 338ms/step - accuracy: 0.9249 - loss: 0.1933 - val_accuracy: 0.7949 - val_loss: 0.5533\n", "Epoch 46/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m97s\u001b[0m 310ms/step - accuracy: 0.9347 - loss: 0.1671 - val_accuracy: 0.7715 - val_loss: 0.8307\n", "Epoch 47/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9231 - loss: 0.2009 - val_accuracy: 0.7877 - val_loss: 0.7301\n", "Epoch 48/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m 
\u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9295 - loss: 0.1965 - val_accuracy: 0.8457 - val_loss: 0.5038\n", "Epoch 49/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m90s\u001b[0m 289ms/step - accuracy: 0.9285 - loss: 0.1886 - val_accuracy: 0.8737 - val_loss: 0.4602\n", "Epoch 50/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m90s\u001b[0m 288ms/step - accuracy: 0.9429 - loss: 0.1447 - val_accuracy: 0.8281 - val_loss: 0.4814\n", "Epoch 51/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 276ms/step - accuracy: 0.9410 - loss: 0.1527 - val_accuracy: 0.8800 - val_loss: 0.3787\n", "Epoch 52/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9284 - loss: 0.1853 - val_accuracy: 0.7073 - val_loss: 0.8980\n", "Epoch 53/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.8610 - loss: 0.3486 - val_accuracy: 0.8417 - val_loss: 0.4740\n", "Epoch 54/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9170 - loss: 0.2164 - val_accuracy: 0.8693 - val_loss: 0.4258\n", "Epoch 55/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9350 - loss: 0.1721 - val_accuracy: 0.8671 - val_loss: 0.3911\n", "Epoch 56/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 276ms/step - accuracy: 0.9512 - loss: 0.1330 - val_accuracy: 0.8513 - val_loss: 0.4823\n", "Epoch 57/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9433 - loss: 0.1466 - val_accuracy: 0.8745 - val_loss: 0.4241\n", "Epoch 58/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9478 - loss: 0.1368 - val_accuracy: 0.8768 - val_loss: 0.3645\n", "Epoch 59/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9453 - loss: 0.1556 - val_accuracy: 0.8427 - val_loss: 0.4757\n", "Epoch 60/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9205 - loss: 0.2193 - val_accuracy: 0.8575 - val_loss: 0.4185\n", "Epoch 61/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9466 - loss: 0.1508 - val_accuracy: 0.8760 - val_loss: 0.4537\n", "Epoch 62/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9552 - loss: 0.1254 - val_accuracy: 0.8589 - val_loss: 0.5982\n", "Epoch 63/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9567 - loss: 0.1279 - val_accuracy: 0.8443 - val_loss: 0.6917\n", "Epoch 64/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m89s\u001b[0m 284ms/step - accuracy: 0.9634 - loss: 0.1068 - val_accuracy: 
0.8834 - val_loss: 0.4579\n", "Epoch 65/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m89s\u001b[0m 285ms/step - accuracy: 0.9675 - loss: 0.0933 - val_accuracy: 0.8691 - val_loss: 0.6827\n", "Epoch 66/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m95s\u001b[0m 304ms/step - accuracy: 0.9714 - loss: 0.0819 - val_accuracy: 0.8788 - val_loss: 0.5256\n", "Epoch 67/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m93s\u001b[0m 297ms/step - accuracy: 0.9672 - loss: 0.0972 - val_accuracy: 0.8565 - val_loss: 0.5663\n", "Epoch 68/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m92s\u001b[0m 294ms/step - accuracy: 0.9640 - loss: 0.1079 - val_accuracy: 0.8810 - val_loss: 0.4636\n", "Epoch 69/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m104s\u001b[0m 332ms/step - accuracy: 0.9725 - loss: 0.0784 - val_accuracy: 0.8577 - val_loss: 0.4973\n", "Epoch 70/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m97s\u001b[0m 311ms/step - accuracy: 0.9629 - loss: 0.1124 - val_accuracy: 0.8669 - val_loss: 0.6146\n", "Epoch 71/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m106s\u001b[0m 340ms/step - accuracy: 0.9687 - loss: 0.0921 - val_accuracy: 0.8715 - val_loss: 0.4832\n", "Epoch 72/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m100s\u001b[0m 318ms/step - accuracy: 0.9716 - loss: 0.0872 - val_accuracy: 0.8445 - val_loss: 0.6765\n", "Epoch 73/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9651 - loss: 0.1045 - val_accuracy: 0.8621 - val_loss: 0.6374\n", "Epoch 74/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9641 - loss: 0.1208 - val_accuracy: 0.8531 - val_loss: 0.6291\n", "Epoch 75/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9669 - loss: 0.1037 - val_accuracy: 0.8675 - val_loss: 0.5712\n", "Epoch 76/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9676 - loss: 0.1039 - val_accuracy: 0.8647 - val_loss: 0.5575\n", "Epoch 77/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9654 - loss: 0.1088 - val_accuracy: 0.8798 - val_loss: 0.5096\n", "Epoch 78/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m88s\u001b[0m 280ms/step - accuracy: 0.9704 - loss: 0.0928 - val_accuracy: 0.8143 - val_loss: 0.9064\n", "Epoch 79/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9386 - loss: 0.1918 - val_accuracy: 0.8621 - val_loss: 0.6179\n", "Epoch 80/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9692 - loss: 0.0961 - val_accuracy: 0.8659 - val_loss: 0.5571\n", "Epoch 81/100\n", "\u001b[1m313/313\u001b[0m 
\u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9760 - loss: 0.0721 - val_accuracy: 0.8719 - val_loss: 0.8253\n", "Epoch 82/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9751 - loss: 0.0857 - val_accuracy: 0.8559 - val_loss: 0.6966\n", "Epoch 83/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9750 - loss: 0.0845 - val_accuracy: 0.8387 - val_loss: 0.8816\n", "Epoch 84/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9675 - loss: 0.1127 - val_accuracy: 0.8729 - val_loss: 0.5734\n", "Epoch 85/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9703 - loss: 0.1002 - val_accuracy: 0.8485 - val_loss: 0.6070\n", "Epoch 86/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.8775 - loss: 0.3334 - val_accuracy: 0.8033 - val_loss: 0.6031\n", "Epoch 87/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9396 - loss: 0.1664 - val_accuracy: 0.8483 - val_loss: 0.5745\n", "Epoch 88/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m88s\u001b[0m 280ms/step - accuracy: 0.9685 - loss: 0.0909 - val_accuracy: 0.8701 - val_loss: 0.5936\n", "Epoch 89/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 280ms/step - accuracy: 0.9808 - loss: 0.0539 - val_accuracy: 0.8900 - val_loss: 0.5556\n", "Epoch 90/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9860 - loss: 0.0455 - val_accuracy: 0.8792 - val_loss: 0.6251\n", "Epoch 91/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9842 - loss: 0.0473 - val_accuracy: 0.8635 - val_loss: 0.7786\n", "Epoch 92/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9871 - loss: 0.0405 - val_accuracy: 0.8699 - val_loss: 0.6566\n", "Epoch 93/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9833 - loss: 0.0583 - val_accuracy: 0.8778 - val_loss: 0.7701\n", "Epoch 94/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 279ms/step - accuracy: 0.9854 - loss: 0.0500 - val_accuracy: 0.8477 - val_loss: 0.5307\n", "Epoch 95/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9766 - loss: 0.0759 - val_accuracy: 0.8739 - val_loss: 0.7431\n", "Epoch 96/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 280ms/step - accuracy: 0.9822 - loss: 0.0606 - val_accuracy: 0.8443 - val_loss: 0.9705\n", "Epoch 97/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 
278ms/step - accuracy: 0.9854 - loss: 0.0469 - val_accuracy: 0.8707 - val_loss: 0.7642\n", "Epoch 98/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 277ms/step - accuracy: 0.9818 - loss: 0.0676 - val_accuracy: 0.8790 - val_loss: 0.7260\n", "Epoch 99/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9854 - loss: 0.0496 - val_accuracy: 0.8555 - val_loss: 0.8581\n", "Epoch 100/100\n", "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m87s\u001b[0m 278ms/step - accuracy: 0.9840 - loss: 0.0528 - val_accuracy: 0.8824 - val_loss: 0.6971\n" ] }, { "data": { "text/plain": [ "