This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors in a given dataset. It is also suited to key driver analysis and marginal resource allocation models.
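As a rough sketch of the idea behind Shapley Value Regression (not this package's own API): each predictor's importance is its average incremental contribution to R² over all orders in which predictors can enter the model. The column handling below is illustrative, and the exact enumeration is only practical for a small number of predictors.

```python
from itertools import permutations

import pandas as pd
from sklearn.linear_model import LinearRegression


def shapley_importance(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Average each predictor's incremental R^2 over all entry orders (exact; small p only)."""
    def r2(cols):
        if not cols:
            return 0.0
        return LinearRegression().fit(X[cols], y).score(X[cols], y)

    contrib = {c: 0.0 for c in X.columns}
    orders = list(permutations(X.columns))
    for order in orders:
        used = []
        for col in order:
            before = r2(used)
            used.append(col)
            contrib[col] += r2(used) - before  # incremental R^2 credited to this predictor
    return pd.Series({c: total / len(orders) for c, total in contrib.items()})


# Hypothetical key-driver usage (file and column names are placeholders):
# df = pd.read_csv("survey.csv")
# print(shapley_importance(df[["price", "quality", "service"]], df["satisfaction"]))
```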
Built a house price prediction model in Python using linear regression and k-nearest neighbors, applying ridge and lasso regularization and gradient descent for optimization.
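A minimal scikit-learn sketch of the model comparison such a project might run; the synthetic data is a stand-in for the actual house-price features, and the hyperparameters are illustrative.

```python
from sklearn.datasets import make_regression          # synthetic stand-in for the house-price data
from sklearn.linear_model import LinearRegression, Ridge, Lasso, SGDRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    # SGDRegressor fits the linear model by (stochastic) gradient descent
    "sgd": make_pipeline(StandardScaler(), SGDRegressor(max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:>6}: test R^2 = {model.score(X_test, y_test):.3f}")
```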
In this project you will build and evaluate multiple linear regression models using Python. You will use scikit-learn to fit the regression, pandas for data management, and seaborn for data visualization. The data for this project is the popular Advertising dataset, used to predict sales revenue based on advertising spending across media such as TV, radio, and newspaper.
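A condensed sketch of that workflow; the `Advertising.csv` path and column names are assumptions about the local copy of the dataset.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Assumes a local copy of the Advertising dataset with columns TV, Radio, Newspaper, Sales.
ads = pd.read_csv("Advertising.csv")
sns.pairplot(ads, x_vars=["TV", "Radio", "Newspaper"], y_vars="Sales", kind="reg")
plt.show()

X = ads[["TV", "Radio", "Newspaper"]]
y = ads["Sales"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

model = LinearRegression().fit(X_train, y_train)
print(dict(zip(X.columns, model.coef_)))                      # per-medium coefficients
print("test R^2:", r2_score(y_test, model.predict(X_test)))   # held-out fit quality
```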
Multiple regression is an extension of simple linear regression. It is used when we want to predict the value of a variable based on the values of two or more other variables. The variable we want to predict is called the dependent variable (or sometimes the outcome, target, or criterion variable); the variables used to make the prediction are called independent (or predictor) variables.
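Formally, with p predictor variables the model takes the form

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \varepsilon
```

where y is the dependent variable, x_1 through x_p are the independent (predictor) variables, the β coefficients are estimated from the data, and ε is the error term.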
Investigated the influence of economic, birth, and health factors on Chicago neighborhood homicide rates using correlation, simple regression, and multiple regression analyses. Created a heatmap to visualize differences in homicide rates between Chicago neighborhoods.
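A short sketch of the correlation step with a seaborn heatmap; the file and column names below are placeholders, not the project's actual variables.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Hypothetical neighborhood-level table; column names are placeholders.
df = pd.read_csv("chicago_neighborhoods.csv")
corr = df[["homicide_rate", "median_income", "birth_rate", "uninsured_pct"]].corr()

sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation of homicide rate with economic, birth, and health factors")
plt.tight_layout()
plt.show()
```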
Projects completed as part of my statistics course STAT-512: Design and Data Analysis for Researchers II at Colorado State University, during the Fall of 2019.
This repository contains all the Machine Learning projects that I have developed in the areas of Natural Language Processing and Computer Vision, using Machine Learning frameworks such as scikit-learn and h2o.
Predictive analysis combining feature engineering with machine learning (ML) algorithms such as linear regression to predict the final sale price of homes in Ames, IA from 2006 to 2010.
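A sketch of how feature engineering and linear regression could be combined in a scikit-learn pipeline for this kind of data; the file path and column names are illustrative, not the project's actual code.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Assumes a local copy of the Ames data; the column names below are illustrative.
ames = pd.read_csv("ames_housing.csv")
numeric = ["Gr Liv Area", "Overall Qual", "Year Built"]
categorical = ["Neighborhood", "House Style"]

pre = ColumnTransformer([
    ("num", StandardScaler(), numeric),                              # scale numeric features
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),    # encode categoricals
])
model = Pipeline([("prep", pre), ("reg", LinearRegression())])

scores = cross_val_score(model, ames[numeric + categorical], ames["SalePrice"],
                         cv=5, scoring="r2")
print("mean CV R^2:", scores.mean().round(3))
```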
This MATLAB code analyses a company's stock prices and predicts the closing price. The algorithms implemented for predicting the closing price are (a) a Kalman filter and (b) Kalman multiple linear regression; the algorithms implemented for analysing stock trends are (c) Bollinger bands and (d) the Chaikin oscillator. Output: 1. graphs showing the predicted and actual closing prices along with the Bollinger bands, 2. the Chaikin oscillator graph, and 3. the prediction accuracy (%) of the Kalman and MLR filters. The stock_analysis.zip file contains: 1. Code - (a) stock_analysis.m, (b) kalman1.m, (c) bollinger.m, (d) multiple_linear_regress.m, (e) chaikin.m, (f) ma_filter.m; 2. Data - two .mat files holding the opening, closing, high, low, and volume of a stock, (a) comp_1.mat and (b) comp_2.mat. To run the stock market analysis code: 1. run stock_analysis.m, 2. enter the file name.
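The MATLAB sources themselves are not reproduced here; as a rough illustration of the Bollinger-band computation in Python with pandas (window and band width are the conventional defaults, assumed rather than taken from the code):

```python
import pandas as pd


def bollinger_bands(close: pd.Series, window: int = 20, num_std: float = 2.0) -> pd.DataFrame:
    """Classic Bollinger bands: a moving average bracketed by +/- num_std rolling standard deviations."""
    mid = close.rolling(window).mean()
    std = close.rolling(window).std()
    return pd.DataFrame({
        "middle": mid,
        "upper": mid + num_std * std,
        "lower": mid - num_std * std,
    })


# Hypothetical usage with a closing-price series loaded from CSV:
# close = pd.read_csv("stock_prices.csv")["close"]
# print(bollinger_bands(close).tail())
```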
I constructed a simulation study to evaluate the statistical performance of two equivalence-based tests and compared it to the common, but inappropriate, method of concluding no effect by failing to reject the null hypothesis of the traditional test. I further propose two R functions to supply researchers with open-access and easy-to-use tools that they can flexibly adopt in their own research.
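The proposed R functions are not shown here; as an illustration of the underlying equivalence-testing idea (two one-sided tests, TOST), here is a Python sketch with SciPy, using assumed equivalence bounds.

```python
import numpy as np
from scipy import stats


def tost_one_sample(x: np.ndarray, low: float, high: float) -> float:
    """Two one-sided tests: conclude equivalence only if the mean is significantly above `low`
    AND significantly below `high`. Returns the TOST p-value (the larger one-sided p-value)."""
    _, p_low = stats.ttest_1samp(x, popmean=low, alternative="greater")
    _, p_high = stats.ttest_1samp(x, popmean=high, alternative="less")
    return max(p_low, p_high)


rng = np.random.default_rng(0)
sample = rng.normal(loc=0.02, scale=1.0, size=200)   # effect close to zero
print("TOST p-value:", round(tost_one_sample(sample, low=-0.3, high=0.3), 4))
```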
Using multiple regression to analyse the Boston housing dataset with multiple predictor variables. The data is publicly available and was sourced from the U.S. Census Service.
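A brief statsmodels sketch of such an analysis; the CSV path is an assumption, and the column names follow the commonly distributed version of the dataset (e.g. MEDV for median home value).

```python
import pandas as pd
import statsmodels.api as sm

# Assumes a local CSV of the Boston housing data.
boston = pd.read_csv("boston_housing.csv")
X = sm.add_constant(boston[["RM", "LSTAT", "CRIM", "NOX"]])   # predictors plus intercept
model = sm.OLS(boston["MEDV"], X).fit()
print(model.summary())   # coefficients, t-statistics, and R^2 for the multiple-regression fit
```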