. "Artificial Intelligence"@en . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . "AI Skills for Engineers: Supervised Machine Learning"@en . . . . "https://online-learning.tudelft.nl/courses/ai-skills-for-engineers-supervised-machine-learning/"@en . "Online" . "NaN" . "Online"@en . "Learn the fundamentals of machine learning to help you correctly apply various classification and regression machine learning algorithms to real-life problems using the Python toolbox scikit-learn.\n\nMachine learning classification and regression techniques have potential uses in various engineering disciplines. These machine learning models allow you to make predictions for a category (classification) or for a number (regression) given sensor data, and can be used in, for example, predicting properties of objects (such as their weight or shape). \n\nUsing hands-on and interactive exercises you will get insight into:\n\nMachine learning and its variants, such as supervised learning, semi-supervised learning, unsupervised learning and reinforcement learning. \n\nRegression techniques such as linear regression, K-nearest neighbor regression, how to deal with outliers and evaluation metrics such as the mean squared error (MSE) and mean absolute error (MAE). \n\nClassification techniques such as the histogram method, the nearest mean (or nearest medoid) method and the nearest neighbor classifier. We cover the classification setting and important concepts such as the Bayes classifier and the Bayes error, the optimal classifier in theory. \n\nTraining models using (stochastic) gradient descent and its variants, we learn how to tune this optimizer, and how to use it to construct a logistic regression classification model. \n\nOverfitting means a classifier works well on a training set but not on unseen test data. We discuss how to build complex non-linear models, and we analyze how we can understand overfitting using the bias-variance decomposition and the curse of dimensionality. Finally, we discuss how to evaluate fairly and tune machine learning models and estimate how much data they need for an efficient performance.\n\nRegularization methods can help to mitigate overfitting. We discuss two regularization techniques for estimating the linear regression coefficients: ridge regression and LASSO. The latter can also be used for variable selection. \n\nClassifier evaluation metrics such as the ROC curve and confusion matrix can give more insight into the performance of classifiers. We also discuss what constitutes a “good” accuracy; this is given by so-called dummy-classifiers which are naïve baselines. \n\nSupport Vector Machines (SVMs) are more advanced classification models that can provide good performance even in high-dimensional spaces and with little data. We discuss their different variants such as the soft-margin SVM, the hard-margin SVM and the nonlinear kernel SVM. \n\nDecision Trees are simple models that can easily be understood by lay people. They are easy to use and visualize, and instead of a black box they can be easily understood as an interpretable white box model, making them suitable for various applications. \n\nThe lectures feature a unique combination of videos mixed with hands-on interaction with machine learning algorithms to stimulate a deeper understanding. 
The lectures feature a unique combination of videos mixed with hands-on interaction with machine learning algorithms to stimulate a deeper understanding. In the exercises you apply the algorithms in Python using scikit-learn, and in the final project you will further deepen your understanding of the various concepts by building and tuning a machine learning pipeline from start to finish.

What You'll Learn
Apply common operations (pre-processing, plotting, etc.) to datasets using Python.
Explain the concepts of supervised, semi-supervised and unsupervised machine learning, and of reinforcement learning.
Explain how various supervised learning models work and recognize their limitations.
Analyze which factors impact the performance of learning algorithms.
Apply learning algorithms to datasets using Python and scikit-learn, and evaluate their performance.
Optimize a machine learning pipeline using Python and scikit-learn.

Course Syllabus
Topic 1: Introduction
This is an introduction to the course with an overview of the topics. We give a brief introduction to machine learning and its different variants.

Why use machine learning?
Machine learning basics and terminology
The biggest challenge in machine learning
Machine learning frameworks: supervised, semi-supervised, unsupervised and reinforcement learning

Topic 2: Regression
We make a gentle start with regression. In the regression setting, a machine learning model needs to predict a number.

The regression setting and its assumptions
The mean squared error (MSE) and mean absolute error (MAE)
Outliers in regression
Linear regression and K-nearest neighbour regression

Topic 3: Classification
In classification, a machine learning model needs to predict a category or class.

Terminology and basics of classification
Building classifiers using histograms, the nearest mean (nearest medoid) classifier and the K-nearest neighbour (KNN) classifier
The Bayes classifier and the Bayes error
How to use the KNN classifier in practice

Topic 4: Training Models
Gradient descent is an iterative procedure to train models such as logistic regression and neural networks (a small code sketch follows this topic's outline).

The basics of gradient descent
The three variants of gradient descent: batch, mini-batch and stochastic gradient descent (SGD)
How to tune gradient descent
The basics of logistic regression
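As a minimal sketch of what Topic 4 covers, the following code (an illustrative assumption, not the course's own) trains a logistic regression model with stochastic gradient descent using scikit-learn's SGDClassifier; the dataset, learning rate and iteration count are placeholder choices of the kind this topic teaches you to tune.

    # Minimal sketch: logistic regression trained with stochastic
    # gradient descent via scikit-learn's SGDClassifier.
    # Dataset and hyperparameter values are illustrative assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # SGD is sensitive to feature scales, so standardize first.
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # loss="log_loss" makes this a logistic regression model
    # (scikit-learn >= 1.1; older versions call it loss="log");
    # eta0 and max_iter are the kind of settings Topic 4 tunes.
    clf = SGDClassifier(loss="log_loss", learning_rate="constant",
                        eta0=0.01, max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print(f"test accuracy: {clf.score(X_test, y_test):.2f}")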
Topic 5: Overfitting
Overfitting is the problem where a machine learning algorithm performs well on the training set but does not perform well on new and unseen data.

How to use linear models for nonlinear tasks
The bias-variance trade-off and the curse of dimensionality
How to use learning curves to estimate the amount of data needed

Topic 6: Cross Validation & Regularization
To get a good estimate of the performance of machine learning models, cross validation is an essential technique; it is also important for tuning the hyperparameters of models. Finally, we discuss regularization, a technique that aims to avoid overfitting.

Cross validation, model selection and hyperparameter tuning
Ridge regression
LASSO regularization and how it is used for variable selection

Topic 7: Classifier Evaluation
Classifier evaluation delves deeper into the various evaluation metrics for classifiers.

What a “good” accuracy means (e.g., naïve baselines/dummy classifiers)
The confusion matrix (false positives, false negatives, costs)
ROC curves

Topic 8: Support Vector Machines
The support vector machine (SVM) is a well-known, more advanced classification model.

Basics of the SVM, the margin and the hard-margin SVM
The soft-margin SVM
Kernels

Topic 9: Decision Trees
Decision trees are simple and interpretable models that are very user-friendly.

Basics of decision trees and their terminology
How to train decision trees with CART
Overfitting and other pros and cons of decision trees

Topic 10: Final Project
The final project involves building a machine learning pipeline, including hyperparameter tuning and a careful and fair evaluation, to solve a small practical application: the recognition of handwritten digits (MNIST). A minimal sketch of such a pipeline appears after the course details below.

Course Details
Institution: Delft University of Technology
Address: Mekelweg 5, 2628 CD, Delft, Netherlands
Website: https://www.tudelft.nl/en/
Language: English
Subjects: Artificial Intelligence; Other Statistics (rather than Geostatistics)
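To give a flavour of the final project's pipeline (Topic 10), here is a minimal sketch under illustrative assumptions: scikit-learn's small built-in digits dataset stands in for MNIST, and the model and hyperparameter grid are placeholder choices rather than the course's prescribed setup.

    # Minimal sketch of a tuned pipeline for digit recognition.
    # scikit-learn's 8x8 digits dataset stands in for MNIST here,
    # and the SVM model and parameter grid are illustrative assumptions.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    # Hold out a test set so the final evaluation is fair:
    # it is never touched during tuning.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()),
                     ("svm", SVC())])

    # Cross-validated grid search over a small hyperparameter grid.
    grid = GridSearchCV(pipe,
                        param_grid={"svm__C": [0.1, 1, 10],
                                    "svm__gamma": ["scale", 0.01]},
                        cv=5)
    grid.fit(X_train, y_train)

    print("best parameters:", grid.best_params_)
    print(f"test accuracy: {grid.score(X_test, y_test):.2f}")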