# K-Nearest Neighbor (KNN) implementation in R using the caret package


In this article, we are going to build a KNN classifier using the R programming language, with the caret machine learning package. In our previous article, we discussed the core concepts behind the K-nearest neighbor algorithm. If you don't yet have a basic understanding of KNN, we suggest reading our introduction to k-nearest neighbor article first.

For the KNN classifier implementation in R with the caret package, we are going to examine a wine dataset; our goal is to predict the origin of the wine. In our Knn implementation in R post we built a KNN classifier from scratch, but that approach is not feasible when working with big datasets.

To work with big datasets, we can use ready-made machine learning packages. The R developer community has built some great packages to make our work easier. The beauty of these packages is that they are well optimized and handle most edge cases for us; we just need to call the right functions with the right parameters. For machine learning, caret is a solid package with proper documentation.

The principle behind the KNN (K-Nearest Neighbor) classifier is to find a predefined number K of training samples that are closest in distance to a new point, and to predict the label for the new point from those samples.

## Euclidean Distance

The most commonly used distance measure is Euclidean distance, often referred to simply as distance. Euclidean distance is recommended when the data is dense or continuous, and it is the usual default proximity measure for KNN.
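As a quick illustration outside the caret workflow, the Euclidean distance between two numeric vectors can be computed directly in base R; the vectors `a` and `b` below are made-up examples:

```r
# Euclidean distance: square root of the sum of squared differences
a <- c(1, 2, 3)
b <- c(4, 6, 3)
euclidean <- sqrt(sum((a - b)^2))
euclidean  # 5, since sqrt(3^2 + 4^2 + 0^2) = 5
```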

The KNN classifier is also considered an instance-based, non-generalizing algorithm: it stores the training records in a multidimensional space, and for each new sample and a particular value of K it recomputes the Euclidean distances and predicts the target class. It does not build a generalized internal model.

## Caret Package Installation

The R machine learning caret package (Classification And REgression Training) holds tons of functions that help build predictive models. It offers tools for data splitting, pre-processing, feature selection, tuning, and supervised and unsupervised learning algorithms. It is similar to the scikit-learn library in Python.

Before using it, we first need to install it. Open the R console and install it by typing:
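A minimal install-and-load snippet (the package name on CRAN is exactly `caret`):

```r
# Install caret from CRAN (only needed once)
install.packages("caret")

# Load the package for the current session
library(caret)
```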

The caret package gives us direct access to functions for training models with various machine learning algorithms such as KNN, SVM, decision trees, linear regression, etc.

## Knn implementation with caret package

### Wine Recognition Data Set Description

For this experiment, the wines were grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wine. So we have a dataset with 13 continuous attributes plus one attribute holding the class label for the wine's origin.

Using the wine dataset, our task is to build a model that recognizes the origin of the wine. The original owners of this dataset are Forina, M. et al., PARVUS, Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno, 16147 Genoa, Italy. The wine dataset is hosted as open data on the UCI machine learning repository.

#### The 13 Attributes of the dataset are:

1. Alcohol
2. Malic acid
3. Ash
4. Alkalinity of ash
5. Magnesium
6. Total phenols
7. Flavanoids
8. Nonflavanoid phenols
9. Proanthocyanins
10. Color intensity
11. Hue
12. OD280/OD315 of diluted wines
13. Proline

The attribute with the class label is at index 1. It takes 3 values: 1, 2 & 3. These class labels are what our KNN model will predict.

### Wine Recognition Problem Statement:

The task is to model a classifier for the origin of the wine: it should predict whether a wine is from origin “1”, “2”, or “3”.

## Knn classifier implementation in R with Caret Package

### R caret Library:

For implementing KNN in R, we only need to import the caret package. As mentioned above, it helps us perform the various tasks in our machine learning workflow.

### Data Import:

We are using wine dataset from UCI repository. For importing the data and manipulating it, we are going to use data frames. First of all, we need to download the dataset.

For importing the data into an R data frame, we can use the read.csv() method, passing the file name and a flag for whether the dataset's 1st row contains a header. If a header row exists, header should be set to TRUE; otherwise it should be set to FALSE.

For checking the structure of the data frame, we can call the function str on wine_df:
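A sketch of the import step; the UCI download URL below is an assumption (check the repository page for the current location), and the wine data file has no header row:

```r
library(caret)

# Assumed UCI location of the wine data file; verify before running
url <- "https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data"

# No header row, so header = FALSE;
# read.csv() then names the columns V1..V14 automatically
wine_df <- read.csv(url, header = FALSE)

# Inspect the structure: 178 observations of 14 variables
str(wine_df)
```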

It shows that our data consists of 178 observations and 14 columns. The value ranges of the attributes V2 to V14 vary widely, so we will have to standardize the data before training our classifier.

### Data Slicing

Data slicing means splitting the data into a train set and a test set. The training set is used for model building only; the test set must not leak into that process. Even during standardization, the scaling parameters should be computed from the training set alone, never from the test set.

The set.seed() method makes our work replicable. Since we want readers to learn these concepts by running the snippets themselves, we set a seed value: the data partitioning is random, but if readers pass the same value to set.seed(), they will get identical results.

The caret package provides the method createDataPartition() for partitioning our data into train and test sets. We pass 3 parameters. The “y” parameter takes the variable according to which the data needs to be partitioned; in our case the target variable is V1, so we pass wine_df$V1 (the wine data frame's V1 column).

The “p” parameter holds a decimal value in the range 0 to 1 and sets the percentage of the split. We use p = 0.7, meaning the data is split in a 70:30 ratio. The “list” parameter controls whether to return a list or a matrix; we pass FALSE so that a matrix is returned. The createDataPartition() method returns a matrix “intrain” of record indices.

Using the indices in intrain, we split the data into training and testing sets. The line training <- wine_df[intrain,] puts the selected rows into the training data frame; the remaining rows go into the testing data frame via testing <- wine_df[-intrain,].
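The slicing step put together; the seed value 3033 is just an example choice, not prescribed above:

```r
# Fix the random seed so the partition is reproducible
set.seed(3033)

# 70/30 stratified split on the target column V1;
# list = FALSE returns a matrix of row indices
intrain <- createDataPartition(y = wine_df$V1, p = 0.7, list = FALSE)

training <- wine_df[intrain, ]   # ~70% of rows for training
testing  <- wine_df[-intrain, ]  # remaining ~30% for testing
```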

For checking the dimensions of our training data frame and testing data frame, we can use these:
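The dimension check is a pair of dim() calls:

```r
# Rows x columns of each split; together they should cover all 178 rows
dim(training)
dim(testing)
```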

### Preprocessing & Training

Preprocessing means correcting problems in the data before building a machine learning model from it. Problems come in many forms: missing values, attributes on very different scales, etc.

To check whether our data contains missing values, we can use the anyNA() method (NA means Not Available).
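The missing-value check is a one-liner:

```r
# anyNA() returns TRUE if any value in the data frame is NA
anyNA(wine_df)
```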

Since it returns FALSE, we know there are no missing values.

#### Wine Dataset summarized details

For checking the summarized details of our data, we can use summary() method. It will give us a basic idea about our dataset’s attributes range.
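For example:

```r
# Per-column summary statistics: min, quartiles, median, mean, max
summary(wine_df)
```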

The summary statistics above show that the attributes have very different ranges, so we need to standardize our data. We can do that with caret's preProcess() functionality.

Our target variable takes 3 values: 1, 2, 3. These should be treated as categorical, so we convert the column to a factor.
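The conversion referred to below:

```r
# Convert the target column from numeric codes to a factor (categorical) variable
training[["V1"]] <- factor(training[["V1"]])
```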

The above line of code will convert training data frame’s “V1” column to factor variable.

Now, it’s time to train our model.

### Training the Knn model

The caret package provides the train() method for training models with various algorithms; we just pass different parameter values for different algorithms. Before calling train(), we first use the trainControl() method, which controls the computational nuances of train().

We set 3 parameters of the trainControl() method. The “method” parameter holds the resampling method and accepts values such as “boot”, “boot632”, “cv”, “repeatedcv”, “LOOCV”, “LGOCV”, etc. For this tutorial, let's use “repeatedcv”, i.e., repeated cross-validation.

The “number” parameter holds the number of resampling iterations, and the “repeats” parameter the number of complete sets of folds to compute for our repeated cross-validation. We set number = 10 and repeats = 3. trainControl() returns a list, which we then pass to our train() method.
Before training the KNN classifier, call set.seed() once more.
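These settings translate to the following (the variable name trctrl and the seed value 3333 are our own example choices):

```r
# 10-fold cross-validation, repeated 3 times
trctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)

# Reset the seed so the resampling inside train() is reproducible
set.seed(3333)
```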

For training the KNN classifier, the train() method is called with the “method” parameter set to “knn”. We pass our target variable V1: the formula V1 ~ . means use all remaining attributes as predictors and V1 as the target. The “trControl” parameter takes the result of our trainControl() call, and the “preProcess” parameter specifies how to preprocess the training data.

As discussed earlier, preprocessing is mandatory for our data. We pass 2 values in the “preProcess” parameter: “center” and “scale”. Together they center and scale the data, so that after preprocessing the training attributes have a mean of approximately 0 and a standard deviation of 1. The “tuneLength” parameter holds an integer that controls how many candidate K values the tuning evaluates.
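The full training call as described above, assuming the trainControl() result was saved as trctrl; tuneLength = 10 is an assumed choice, since the text does not fix the value:

```r
# Train a KNN classifier with repeated CV, centering and scaling the predictors
knn_fit <- train(V1 ~ ., data = training,
                 method = "knn",
                 trControl = trctrl,
                 preProcess = c("center", "scale"),
                 tuneLength = 10)
```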

#### Trained Knn model result

You can check the result of the train() method; we saved it in the knn_fit variable.
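Printing the fitted object shows the per-K resampling results:

```r
# Printing the model lists Accuracy and Kappa for each candidate k
knn_fit
```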

It shows the Accuracy and Kappa metrics for each K value tried and automatically selects the best one. Here, our training run chose k = 21 as its final value.

We can see how Accuracy varies with the K value by plotting it.
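A single plot call does this:

```r
# Plot cross-validated accuracy against the number of neighbors K
plot(knn_fit)
```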

*Accuracy vs K-value*

### Test Set Prediction

Now our model is trained with K = 21. We are ready to predict classes for our test set using the predict() method.

caret's predict() method takes 2 arguments: the first is our trained model, and the second, “newdata”, holds our testing data frame. It returns a vector of predicted classes, which we save in the test_pred variable.
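The prediction step:

```r
# Predict wine origins for the held-out test rows
test_pred <- predict(knn_fit, newdata = testing)
```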

#### How accurately is our model working?

Using a confusion matrix, we can print statistics for our results. It shows that our model's accuracy on the test set is 96.23%.
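caret's confusionMatrix() computes these statistics; wrapping the two vectors in table() sidesteps the fact that testing$V1 is still numeric rather than a factor:

```r
# Cross-tabulate predictions against true labels and report accuracy,
# kappa, and per-class statistics
confusionMatrix(table(test_pred, testing$V1))
```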

You can download the full code from our GitHub repo.


I hope you liked this post. If you have any questions, feel free to comment below, and if you want me to write on a particular topic, tell me in the comments as well.
