Fruit Recognition using the Convolutional Neural Network

Dr. Vaibhav Kumar

Object detection and recognition is a demanding task in the field of computer vision. Objects in images are detected and recognized by machine learning models trained on a sufficient number of available images, and when deep learning models are applied with a large number of training images, recognition accuracy improves further. This motivates us to develop a model that can recognize a fruit and predict its name. Fruit recognition has a variety of applications in agricultural work, where thousands of fruit images may need to be recognized in a short amount of time. It can also be applied to automating the billing process at a fruit shop, where the model recognizes the fruit and its price is calculated by multiplying the unit price by the weight.

In this article, we will perform fruit recognition, where a Convolutional Neural Network (CNN) predicts the name of a fruit given its image. We will train the network in a supervised manner: images of the fruits are the input to the network and the labels of the fruits are the output. After successful training, the CNN model will be able to correctly predict the label of a fruit.

The Data Set

The data set used in this article is taken from the ‘Fruit Images for Object Detection’ dataset, which is publicly available on Kaggle. It is a small data set consisting of 240 training images and 60 test images. All the images belong to three types of fruits: apple, banana and orange.
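
The data set can be downloaded from Kaggle and, as done in this article, uploaded to Google Drive. Alternatively, it can be fetched directly inside Colab with the Kaggle CLI. The snippet below is only a minimal sketch: the dataset slug mbkinaci/fruit-images-for-object-detection and the target folder fruit_images are assumptions, and a Kaggle API token is expected at ~/.kaggle/kaggle.json.

#Optional: fetching the data set directly in Colab with the Kaggle CLI
#Assumes the dataset slug below and a Kaggle API token at ~/.kaggle/kaggle.json
!pip install -q kaggle
!kaggle datasets download -d mbkinaci/fruit-images-for-object-detection
!unzip -q fruit-images-for-object-detection.zip -d fruit_images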



Implementing Fruit Recognition

This code was implemented in Google Colab and the .py file was downloaded.

# -*- coding: utf-8 -*-
"""Fruit.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1aIDTOiVdKbSrSHqDxxxxxxxxxxxxxxxx
"""

The dataset was uploaded to Google Drive, and the drive was mounted in the Colab notebook. The code snippet below is used for that purpose.

#Setting google drive as a directory for dataset
from google.colab import drive 
drive.mount('/content/gdrive')

The directory of the Fruit image dataset

dir_path = "gdrive/My Drive/Dataset/Fruit Images/"

Importing some required libraries

#Importing Library
import numpy as np
import pandas as pd
import cv2
import os 
from PIL import Image

Here, we will check the files in the directory

#Checking the directory
import os
for dirname, _, filenames in os.walk(dir_path):
    for filename in filenames:
        print(os.path.join(dirname, filename))

We can verify the contents of the directory in this way. Using the below code snippet, we will read all the training images and their labels. The labels are obtained from the names of the image files; for example, a file name such as apple_1.jpg would yield the label 'apple'.

#Reading the training images and extracting the labels from the file names
images = []
labels = []
train_path = 'gdrive/My Drive/Dataset/Fruit Images/train_zip/train'
for filename in os.listdir(train_path):
    if filename.split('.')[1] == 'jpg':
        img = cv2.imread(os.path.join(train_path, filename))
        arr = Image.fromarray(img, 'RGB')
        img_arr = arr.resize((50,50))
        labels.append(filename.split('_')[0])
        images.append(np.array(img_arr))

After obtaining all the labels, we will print the unique label values.


#Image Labels
np.unique(labels)

The text labels stored in the labels array will be converted into numeric output labels using label encoding.

from sklearn.preprocessing import LabelEncoder
lb_encod = LabelEncoder()
labels = pd.DataFrame(labels)
labels = lb_encod.fit_transform(labels[0])
labels

#Visualizing image
import matplotlib.pyplot as plt
figure = plt.figure(figsize = (8,8))
ax = figure.add_subplot(121)
ax.imshow(images[0])
bx = figure.add_subplot(122)
bx.imshow(images[60])
plt.show()

Two sample images from the training set are displayed by the above code.

In the next step, we will preprocess the image data: the image array and labels are saved, reloaded, and then shuffled with a common random permutation.

#Saving the image array and corresponding labels
images = np.array(images)
np.save("image", images)
np.save("labels", labels)

#Loading the images and labels that we have saved above
image = np.load("image.npy", allow_pickle = True)
labels = np.load("labels.npy", allow_pickle = True)

#Shuffling the images and labels with a common random permutation
shuffle_idx = np.arange(image.shape[0])
np.random.shuffle(shuffle_idx)
image = image[shuffle_idx]
labels = labels[shuffle_idx]

Now, we will define the training and test data sets, holding out 10% of the images for testing.

num_classes = len(np.unique(labels))
len_data = len(image)
x_train, x_test = image[int(0.1*len_data):], image[:int(0.1*len_data)]
y_train, y_test = labels[int(0.1*len_data):], labels[:int(0.1*len_data)]

import keras
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

Convolutional Neural Network

After defining the training and test sets, we will define and train the convolutional neural network model. For kernel regularization, we will use the L2 regularization method.

from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Dropout, Flatten
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.callbacks import ModelCheckpoint

l2_reg = 0.001
opt = Adam(lr = 0.001)

#Defining the CNN Model
cnn_model = Sequential()
cnn_model.add(Conv2D(filters = 32, kernel_size = (2,2), input_shape = (50,50,3), activation = 'relu', kernel_regularizer = l2(l2_reg)))
cnn_model.add(MaxPool2D(pool_size = (2,2)))
cnn_model.add(Conv2D(filters = 64, kernel_size = (2,2), activation = 'relu', kernel_regularizer = l2(l2_reg)))
cnn_model.add(MaxPool2D(pool_size = (2,2)))
cnn_model.add(Conv2D(filters = 128, kernel_size = (2,2), activation = 'relu', kernel_regularizer = l2(l2_reg)))
cnn_model.add(MaxPool2D(pool_size = (2,2)))
cnn_model.add(Dropout(0.1))

cnn_model.add(Flatten())

cnn_model.add(Dense(64, activation = 'relu'))
cnn_model.add(Dense(16, activation = 'relu'))
#Output layer with one unit per class
cnn_model.add(Dense(num_classes, activation = 'softmax'))

#CNN Model Summary
cnn_model.summary()

#Compiling the model
cnn_model.compile(loss = 'categorical_crossentropy', optimizer = opt, metrics = ['accuracy'])

#Training the CNN Model and saving the best weights to disk
filepath = 'weights.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')
history = cnn_model.fit(x_train, y_train, batch_size = 128, epochs = 110, verbose = 1, validation_split = 0.33, callbacks = [checkpoint])

#Check the performance
scores  =  cnn_model.evaluate(x_test, y_test, verbose = 1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

#Visualize the performance
figure = plt.figure(figsize = (10,5))
ax = figure.add_subplot(121)
ax.plot(history.history['accuracy'])
ax.plot(history.history['val_accuracy'])
ax.legend(['Training Accuracy','Val Accuracy'])
bx = figure.add_subplot(122)
bx.plot(history.history['loss'])
bx.plot(history.history['val_loss'])
bx.legend(['Training Loss','Val Loss'])

Training and validation accuracy (left) and loss (right) across the training epochs.

After successful training, we will test the model by predicting the class labels of the test fruit images.

#Test: reading the test images and extracting the labels from the file names
test_path = 'gdrive/My Drive/Dataset/Fruit Images/test_zip/test'
t_labels = []
t_images = []
for filename in os.listdir(test_path):
    if filename.split('.')[1] == 'jpg':
        img = cv2.imread(os.path.join(test_path, filename))
        arr = Image.fromarray(img, 'RGB')
        img_arr = arr.resize((50,50))
        t_labels.append(filename.split('_')[0])
        t_images.append(np.array(img_arr))

#Saving and reloading the test image array
test_images = np.array(t_images)
np.save("test_image", test_images)
test_image = np.load("test_image.npy", allow_pickle = True)

#Predicting the labels of all test images and decoding them back to text
pred = np.argmax(cnn_model.predict(test_image), axis = 1)
prediction = lb_encod.inverse_transform(pred)

#Predicting the label of a single test image
single_image = np.expand_dims(test_image[25], axis = 0)
pred_test = np.argmax(cnn_model.predict(single_image), axis = 1)
prediction_test = lb_encod.inverse_transform(pred_test)

print(prediction_test[0])
plt.imshow(test_images[25])

The predicted label is printed and the corresponding test image is displayed by the above code.

You can run this test for more fruit images and check whether the predicted labels are correct; a minimal sketch for scoring the predictions over the whole test set is given below. As we obtained very good accuracy, the CNN model should predict the correct label in most cases. If some labels turn out to be incorrect, we can rerun the entire process with tuned hyperparameters to get better performance. This work can be extended to recognizing more classes of fruits and vegetables.
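
Since prediction already holds the decoded predictions for every test image and t_labels holds the labels taken from the test file names, the two arrays can be compared directly. This is only a sketch; it assumes both arrays were built from the same os.listdir pass above, so they are in the same order.

#Comparing the decoded predictions with the labels taken from the test file names
t_labels = np.array(t_labels)
test_accuracy = np.mean(prediction == t_labels)
print('Test set accuracy: {:.2f}%'.format(100 * test_accuracy))

#Listing a few misclassified images, if any
for idx in np.where(prediction != t_labels)[0][:5]:
    print('Image', idx, '- predicted:', prediction[idx], ', actual:', t_labels[idx])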
