The Wayback Machine - https://web.archive.org/web/20210801202716/https://github.com/topics/support-vector-classifier

support-vector-classifier

Here are 38 public repositories matching this topic...

🏆 A comparative study on handwritten digit recognition using classifiers such as K-Nearest Neighbours (K-NN), a multiclass perceptron/artificial neural network (ANN), and a Support Vector Machine (SVM), discussing the pros and cons of each algorithm and comparing them in terms of accuracy and efficiency.
  • Updated Jan 17, 2021
  • Python
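The comparison described above can be sketched with scikit-learn. This is a minimal, hedged example on scikit-learn's bundled 8×8 digits dataset, not the repository's actual data or code; the model choices and hyperparameters are assumptions for illustration.

```python
# Sketch: comparing K-NN and SVM on scikit-learn's digits dataset.
# This is illustrative only, not the repository's implementation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Assumed hyperparameters; a real comparison would tune these.
models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name} accuracy: {model.score(X_test, y_test):.3f}")
```

Measuring wall-clock fit/predict time alongside accuracy would capture the efficiency dimension the study discusses.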

Identifies the employees most likely to switch jobs, to inform package negotiations and job offerings. Also analyzes the departments where the attrition rate is high so that preventive measures can be taken, using a Decision Tree Classifier, Random Forest Classifier, Support Vector Classifier, Logistic Regression, K-Nearest Neighbors, and Gaussian Naive Bayes.
  • Updated Jul 10, 2021
  • Jupyter Notebook
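Fitting the six classifiers named above side by side is straightforward in scikit-learn. The sketch below uses a synthetic dataset from `make_classification` as a stand-in for the HR attrition data, which is not available here; all hyperparameters are defaults and purely illustrative.

```python
# Sketch: benchmarking the six classifiers on synthetic stand-in data
# (make_classification), not the repository's actual attrition dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Support Vector Classifier": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Gaussian Naive Bayes": GaussianNB(),
}
scores = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    scores[name] = clf.score(X_test, y_test)
    print(f"{name}: {scores[name]:.3f}")
```

On a real attrition dataset, per-department accuracy or recall (rather than overall accuracy) would better support the department-level analysis the description mentions.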

I contributed to a group project using the Life Expectancy (WHO) dataset from Kaggle, performing regression analysis to predict life expectancy and classification to label countries as developed or developing. The project was completed in Python using the pandas, Matplotlib, NumPy, seaborn, scikit-learn, and statsmodels libraries. The regression models were fitted on the entire dataset as well as on subsets for developed and developing countries; I tested ordinary least squares, lasso, ridge, and random forest regression. Random forest regression performed best on all three datasets and did not overfit the training set: the test-set R² was 0.96 for the full dataset and the developing-country subset, and 0.80 for the developed-country subset. For classification, I tested seven algorithms to label a country as developing or developed; they achieved test-set balanced accuracies ranging from 86% to 99%. From best to worst, the models were gradient boosting, random forest, Adaptive Boosting (AdaBoost), decision tree, k-nearest neighbors, support-vector machines, and naive Bayes. I tuned all the models' hyperparameters, and none overfitted the training set.
  • Updated Jun 29, 2021
  • Jupyter Notebook
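The regression comparison described above can be sketched by fitting the four model families and scoring each with test-set R². This hedged example runs on synthetic `make_regression` data rather than the WHO dataset, and the alpha values are illustrative assumptions, not the project's tuned settings.

```python
# Sketch: comparing OLS, lasso, ridge, and random forest regression
# via test-set R^2 on synthetic data, not the WHO life-expectancy data.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10.0,
                       random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Alpha values are assumed; the project tuned hyperparameters per model.
regressors = {
    "OLS": LinearRegression(),
    "Lasso": Lasso(alpha=1.0),
    "Ridge": Ridge(alpha=1.0),
    "Random Forest": RandomForestRegressor(random_state=1),
}
r2 = {}
for name, reg in regressors.items():
    reg.fit(X_train, y_train)
    r2[name] = reg.score(X_test, y_test)
    print(f"{name} test R^2: {r2[name]:.3f}")
```

Comparing training-set and test-set R² for each model is the simple overfitting check the description alludes to: a large gap between the two signals overfitting.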
