1st Edition

Machine Learning Toolbox for Social Scientists: Applied Predictive Analytics with R

By Yigit Aydede, Copyright 2024
    600 Pages, 193 Color & 36 B/W Illustrations
    Published by Chapman & Hall

    Machine Learning Toolbox for Social Scientists covers predictive methods together with the complementary statistical "tools" that make it largely self-contained. Inferential statistics is the traditional framework for most data analytics courses in the social sciences and business fields, especially in Economics and Finance. The organization this book offers goes beyond standard machine learning code applications, providing an intuitive background for new predictive methods that social science and business students can follow. The book also adds many modern statistical tools, complementary to predictive methods, that are not easily found in "econometrics" textbooks: nonparametric methods, data exploration with predictive models, penalized regressions, model selection with sparsity, dimension reduction methods, nonparametric time-series predictions, graphical network analysis, algorithmic optimization methods, classification with imbalanced data, and many others. The book is targeted at students and researchers who have no advanced statistical background but instead come from the tradition of "inferential statistics". The modern statistical methods it provides allow it to be used effectively for teaching in the social science and business fields.

    Key Features:

    • The book is structured for those who have been trained in a traditional statistics curriculum.
    • An extended initial section covers the differences between "estimation" and "prediction" for readers trained in causal analysis.
    • The book develops the background framework for machine learning applications from nonparametric methods.
    • The SVM and neural network chapters are kept simple, without excessive detail, and are self-contained.
    • Nonparametric time-series predictions are new and covered in a separate section.
    • Additional sections cover Penalized Regressions, Dimension Reduction Methods, and Graphical Methods, which have been growing in popularity in the social sciences.

    Table of Contents

    1. How We Define Machine Learning
    2. Preliminaries

    Part 1. Formal Look at Prediction
    3. Bias-Variance Tradeoff
    4. Overfitting

    Part 2. Nonparametric Estimations
    5. Parametric Estimations
    6. Nonparametric Estimations - Basics
    7. Smoothing
    8. Nonparametric Classifier - kNN

    Part 3. Self-learning
    9. Hyperparameter Tuning
    10. Tuning in Classification
    11. Classification Example

    Part 4. Tree-based Models
    12. CART
    13. Ensemble Learning
    14. Ensemble Applications

    Part 5. SVM & Neural Networks
    15. Support Vector Machines
    16. Artificial Neural Networks

    Part 6. Penalized Regressions
    17. Ridge
    18. Lasso
    19. Adaptive Lasso
    20. Sparsity

    Part 7. Time Series Forecasting
    21. ARIMA Models
    22. Grid Search for ARIMA
    23. Time Series Embedding
    24. Random Forest with Time Series
    25. Recurrent Neural Networks

    Part 8. Dimension Reduction Methods
    26. Eigenvectors and Eigenvalues
    27. Singular Value Decomposition
    28. Rank r Approximations
    29. Moore-Penrose Inverse
    30. Principal Component Analysis
    31. Factor Analysis

    Part 9. Network Analysis
    32. Fundamentals
    33. Regularized Covariance Matrix

    Part 10. R Labs
    34. R Lab 1 Basics
    35. R Lab 2 Basics II
    36. Simulations in R
    37. Algorithmic Optimization
    38. Imbalanced Data

    Biography

    Yigit Aydede is a Sobey Professor of Economics at Saint Mary's University, Halifax, Nova Scotia, Canada. He is a founding member of the Research Portal on Machine Learning for Social and Health Policy, a joint initiative by a group of researchers from Saint Mary's and Dalhousie universities.