. "Computer Science"@en . . "English"@en . . "Mathematics"@en . . "Data mining"@en . . "6.0" . "Data mining is a major frontier field of computer science. It allows extracting useful and interesting patterns and knowledge from large data repositories such as databases and the Web. Data mining integrates techniques from the fields of databases, machine learning, statistics, and artificial intelligence. This course will present the state-of-the-art techniques of data mining. The lectures and labs will emphasize the practical use of the presented techniques and the problems of developing real data-mining applications. A step-by-step introduction to data-mining environments will enable the students to achieve specific skills, autonomy, and hands-on experience. A number of real data sets will be analysed and discussed.\n\nPrerequisites\nNone.\n\nRecommended reading\nTan, P.-N., Steinbach, M., Karpatne, A., and Kumar, V. (2018). Introduction to Data Mining, 2nd Edition, Pearson, ISBN-10: 0133128903, ISBN-13: 978-0133128901\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/462683/data-mining" . . "Presential"@en . "TRUE" . . "Model Identification and data fitting"@en . . "6.0" . "This course is devoted to the various practical and theoretical aspects of estimating (identifying) a mathematical model within a given model class, starting from a record of observed measurement data (input-output data). First, we address distance measures, norms, and criterion functions. Then we discuss the prediction error identification of linear regression models, with special emphasis on the various interpretations of such models (deterministic, stochastic with Gaussian white noise and maximum likelihood estimation, stochastic in a Bayesian estimation context) and on numerical implementation aspects (recursion, numerical complexity, numerical conditioning and square root filtering). 
Next, we study identification within the important class of auto-regressive dynamical models, to which the Levinson algorithm applies. Other related topics receiving attention are identifiability, model reduction and model approximation. Some techniques for the estimation of linear dynamical i/o-systems are illustrated with the system identification toolbox in Matlab.\n\nPrerequisites\nLinear Algebra, Mathematical Modelling, Probability and Statistics.\n\nRecommended reading\nL. Ljung, System Identification: Theory for the User (2nd ed.), Prentice-Hall, 1999.\nT. Soderstrom and P. Stoica, System Identification, Prentice-Hall, 1989.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463537/model-identification-and-data-fitting" . . "Presential"@en . "TRUE" . . "Research project dsdm 1"@en . . "6.0" . "The research project takes place during the three periods of the semester. Project topics are presented at the start of the semester and assigned to students based on their preferences and availability. The emphasis in the first phase is on an initial study of the context set out for the project and the development of a project plan. In the second period, the goal is to start modelling, prototyping and developing. In period 3, the implementation, model and/or experiments set out in the project plan have to be finished and reported on. At the end of periods 1 and 2, a progress presentation takes place. The project results in a project presentation, a project report and possibly a public website and/or product.\n\nThe Research Project 1 will start in periods 1.1 and 1.2 with weekly meetings.\nThe credits for the project will become available at the end of period 1.3.\n\nPrerequisites\nNone.\n\nRecommended reading\nJustin Zobel (2004), Writing for Computer Science, Springer, ISBN:1852338024\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465055/research-project-dsdm-1" . . "Presential"@en . "TRUE" . . "Computational statistics"@en . 
. "6.0" . "In this course, we will review basic concepts in statistical inference (confidence intervals, parameter estimation, and hypothesis testing). We will then study computer-intensive methods that work without imposing unrealistic or unverifiable assumptions about the data generating mechanism (randomization tests, the bootstrap, and Markov chain Monte Carlo). This will provide us with the foundations to study modern inference problems in statistics and machine learning (false discovery rates, Benjamini-Hochberg procedure, and causal inference).\n\nPrerequisites\nNone.\n\nDesired prior knowledge: Probability and Statistics\n\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/466187/computational-statistics" . . "Presential"@en . "TRUE" . . "Algorithms for big data"@en . . "6.0" . "The emergence of very large datasets poses new challenges for the algorithm designer. For example, the data may not fit into the main memory anymore, and caching from a hard-drive becomes a new bottleneck that needs to be addressed. Similarly, algorithms with larger than linear running time simply take too long on very large datasets. Moreover, simple sensory devices can observe large amounts of data over time, but cannot store all the observed information due to insufficient storage, and an immediate decision of what to store and compute needs to be made. Classical algorithmic techniques do not address these challenges, and a new algorithmic toolkit needs to be developed. In this course, we will look at a number of algorithmic responses to these problems, such as: algorithms with (sub-)linear running times, algorithms where the data arrive as a stream, computational models where memory is organized hierarchically (with larger storage units, such as hard-drives, being slower to access than smaller, faster storage such as CPU cache memory). New programming paradigms and models such as MapReduce/Hadoop will be discussed. 
We will also look at a number of topics from classical algorithm design that have undiminished relevance in the era of big data, such as approximation algorithms and multivariate algorithmic analysis.\n\nPrerequisites\nDesired prior knowledge: Discrete mathematics, algorithm design and analysis, elementary discrete probability\n\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465457/algorithms-big-data" . . "Presential"@en . "TRUE" . . "Research project dsdm 2"@en . . "6.0" . "The research project takes place during the three periods of the semester. Project topics are presented at the start of the semester and assigned to students based on their preferences and availability. The emphasis in the first phase is on an initial study of the context set out for the project and the development of a project plan. In the second period, the goal is to start modelling, prototyping and developing. In period 3, the implementation, model and/or experiments set out in the project plan have to be finished and reported on. At the end of periods 1 and 2, a progress presentation takes place. The project results in a project presentation, a project report and possibly a public website and/or product.\n\nThe Research Project 2 will start in periods 1.4 and 1.5 with weekly meetings.\nThe credits for the project will become available at the end of period 1.6.\n\nPrerequisites\nNone.\n\nRecommended reading\nJustin Zobel (2004), Writing for Computer Science, Springer, ISBN:1852338024\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463937/research-project-dsdm-2" . . "Presential"@en . "TRUE" . . "Mathematical optimization"@en . . "6.0" . 
"Optimization (or “Optimisation”) is the subject of finding the best or optimal solution to a problem from a set of potential or feasible solutions.\nOptimization problems are fundamental in all forms of decision-making, since one wishes to make the best decision in any context, and in the analysis of data, where one wishes to find the best model describing experimental data. This course treats two different areas of optimization: nonlinear optimization and combinatorial optimization. Nonlinear optimization deals with the situation in which there is a continuum of available solutions. A best solution is then usually approximated with one of several available general-purpose algorithms, such as Brent’s method for one-dimensional problems, Newton, quasi-Newton and conjugate gradient methods for unconstrained problems, and Lagrangian methods, including active-set methods, sequential quadratic programming and interior-point methods for general constrained problems. Combinatorial optimization deals with situations in which a best solution must be chosen from a finite number of available solutions. A variety of techniques, such as linear programming, branch and cut, Lagrange relaxation, dynamic programming and approximation algorithms are employed to tackle problems of this type. Throughout the course, we aim to provide a coherent framework for the subject, with a focus on consideration of optimality conditions (notably the Karush-Kuhn-Tucker conditions), Lagrange multipliers and duality, relaxation and approximate problems, and on convergence rates and computational complexity.\nThe methods will be illustrated by in-class computer demonstrations, exercises illustrating the main concepts and algorithms, and modelling and computational work on case studies of practical interest, such as optimal control and network flow.\n\nPrerequisites\nDesired Prior Knowledge: Simplex algorithm, Calculus, Linear Algebra.\n\nRecommended reading\n1. 
Nonlinear Programming, Theory and Algorithms, by Bazaraa, Sherali, and Shetty (Wiley). 2. Combinatorial Optimization: Algorithms and Complexity, by Papadimitriou and Steiglitz (Dover Publications).\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464091/mathematical-optimization" . . "Presential"@en . "FALSE" . . "Stochastic decision making"@en . . "6.0" . "Any realistic model of a real-world phenomenon must take into account the possibility of randomness. That is, more often than not, the quantities we are interested in will not be predictable in advance but, rather, will exhibit an inherent variation that should be taken into account by the model. Mathematically, this is usually accomplished by allowing the model to be probabilistic in nature. In this course, the following topics will be discussed:\n\n(1) Basic concepts of probability theory: Probabilities, conditional probabilities, random variables, probability distribution functions, density functions, expectations and variances.\n\n(2) Finding probabilities, expectations and variances of random variables in complex probabilistic experiments.\n\n(3) Discrete and continuous time Markov chains and related stochastic processes like random walks, branching processes, Poisson processes, birth and death processes, queueing theory.\n\n(4) Markov decision problems.\n\n(5) Multi-armed bandit problems, bandit algorithms, contextual bandits, cumulative regret, and simple regret.\n\nPrerequisites\nProbability & Statistics.\n\nRecommended reading\nProbability: A Lively Introduction by Henk Tijms; Reinforcement Learning by Richard S. Sutton and Andrew G. Barto (2nd ed.) (chapter 2); Bandit Algorithms by Tor Lattimore and Csaba Szepesvári\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464293/stochastic-decision-making" . . "Presential"@en . "FALSE" . . "Signal and Image processing"@en . . "6.0" . 
"This course offers the student a hands-on introduction into the area of digital signal and image processing. We start with the fundamental concepts and mathematical foundation. This includes a brief review of Fourier analysis, z-transforms and digital filters. Classical filtering from a linear systems perspective is discussed. Next, wavelet transforms and principal component analysis are introduced. Wavelets are used to deal with morphological structures in signals. Principal component analysis is used to extract information from high-dimensional datasets. We then discuss the Hilbert-Huang transform to perform detailed time-frequency analysis of signals. Attention is given to a variety of objectives, such as detection, noise removal, compression, prediction, reconstruction and feature extraction. We discuss a few cases from biomedical engineering, for instance involving ECG and EEG signals. The techniques are explained for both 1D and 2D (images) signal processing. The subject matter is clarified through exercises and examples involving various applications. In the practical classes, students will apply the techniques discussed in the lectures using the software package Matlab.\n\nPrerequisites\nDesired Prior Knowledge: Linear algebra, Calculus, basic knowledge of Matlab. Some familiarity with linear systems theory and transforms (such as Fourier and Laplace) is helpful.\n\nRecommended reading\nPrincipal Component Analysis, Ian T. Jolliffe, Springer, ISBN13: 978-0387954424.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/466801/signal-and-image-processing" . . "Presential"@en . "FALSE" . . "Advanced concepts in machine learning"@en . . "6.0" . "This course will introduce a number of advanced concepts in the field of machine learning such as Support Vector Machines, Gaussian Processes, Deep Neural Networks, etc. All of these are approached from the view that the right data representation is imperative for machine learning solutions. 
Additionally, different knowledge representation formats used in machine learning are introduced. This course assumes that the basics of machine learning were introduced in other courses, so that it can focus on more recent developments and the state of the art in machine learning research. Labs and assignments will give the students the opportunity to implement or work with these techniques and will require them to read and understand published scientific papers from recent Machine Learning conferences.\n\nPrerequisites\nDesired Prior Knowledge: Machine Learning\n\nRecommended reading\nPattern Recognition and Machine Learning - C.M. Bishop; Bayesian Reasoning and Machine Learning - D. Barber; Gaussian Processes for Machine Learning - C.E. Rasmussen & C. Williams; The Elements of Statistical Learning - T. Hastie et al.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465009/advanced-concepts-machine-learning" . . "Presential"@en . "FALSE" . . "Deep learning for Image & video processing"@en . . "6.0" . "Applications of image and video processing will be presented, and connections to basic algorithms will be demonstrated. We will examine some of the most popular and widespread applications, namely security, surveillance, medicine, traffic monitoring, astronomy, farming, and culture. The methods used in these applications will be analysed in class and common characteristics between them will be explained. Students will be able to suggest further applications of interest to them and bring relevant literature to the class.\n\nPrerequisites\nDesired prior knowledge: Image and Video Processing, Calculus, Linear Algebra, Machine Learning.\n\nRecommended reading\nRafael C. Gonzalez and Richard E. Woods, Digital Image Processing (3rd Edition), Prentice Hall.\nA. Bovik (Ed.), The Essential Guide to Video Processing. Academic Press, 2009.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463079/deep-learning-image-video-processing" . . 
"Presential"@en . "FALSE" . . "Advanced natural language processing"@en . . "6.0" . "For decades, teaching a computer to deal with natural language processing (NLP) was a dream of humankind. Tasks such as machine translation, summarization, question answering, speech recognition or chatting remained a challenge for computer programs. Around 2020, major improvements were made, starting with machine translation and ultimately, in late 2022, with ChatGPT. Why were these large language models suddenly so good? How did we get here? What can we do with these new algorithms to improve them even more?\n\nThis course will provide the skills and knowledge to understand and develop state-of-the-art (SOTA) solutions for these natural language processing (NLP) tasks. After a short introduction to traditional generative grammars and statistical approaches to NLP, the course will focus on deep learning techniques. We will discuss Transformers and variations on their architecture (including BERT and GPT) in depth: which models work best for which tasks, their capacities and limitations, and how to optimize them.\n\nAlthough we have algorithms that can deal with Natural Language Processing in ways that can no longer be distinguished from humans, we still have some major problems to address: (i) we do not fully understand what these algorithms know and what they do not know. So, there is a strong need for eXplainable AI (XAI) in NLP. (ii) Training deep-learning large language models costs too much energy. We need to develop models that are less computationally (and thus energy) intensive. (iii) Now that these algorithms operate at human-level quality, several ethical problems arise related to computer generated fake-news, fake profiles, bias, and other abuse. But there are also ethical, legal, regulatory and privacy challenges. 
In this course, these important topics will also be discussed.\n\nThis course is closely related to the course Information Retrieval and Text-Mining (IRTM). In this course the focus is more on advanced methods and architectures to deal with complex natural language tasks. The IRTM course focusses more on building search engines and text-analytics, but also uses a number of the architectures which are discussed in more depth in this course. The overlap between the two courses is kept to a minimum. There is no need to follow the courses in a specific order.\n\nPrerequisites\nNone.\n\nRecommended reading\nPapers published in top international conferences and journals in the machine learning field.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465515/advanced-natural-language-processing" . . "Presential"@en . "FALSE" . . "Data fusion"@en . . "6.0" . "ICT developments, e.g., remote sensing and the IoT, have led to an enormous growth of available data for analysis. To integrate this heterogeneous or multimodal data, data fusion approaches are used. Data fusion can be understood as a framework for the joint analysis of data from multiple sources (modalities) that allows achieving information/knowledge not recoverable from the individual sources.\n\nDuring this course, several approaches to data fusion will be discussed, such as:\n\nLow level data fusion, where data fusion methods are directly applied to raw data sets for exploratory or predictive purposes. A main advantage is the possibility to interpret the results directly in terms of the original variables. An example of low level data fusion is measuring the same signal or phenomenon with different sensors, in order to recover the original one. Traditionally, PCA based methods are used for this type of data fusion.\nMid level data fusion, where data fusion operates on features extracted from each data set. 
The obtained features are then fused in a “new” data set, which is modeled to produce the desired outcome. A main advantage is that the variance can be removed in the feature extraction step, and thus the final models may show better performance. An example of mid level data fusion is extracting numerical features from an image, and building a decision model based on those features.\nHigh level data fusion, also known as decision fusion, where decisions (model outcomes) from the processing of each data set are fused. It is used when the main objective is to improve the performance of the final model and reach an automatic decision. Several methods can be used for high-level DF, such as weighted decision methods, Bayesian inference, Dempster-Shafer theory of evidence, and fuzzy set theory. There is a link between high-level data fusion and ensemble methods.\nFederated learning. Federated learning enables multiple parties to jointly train a machine learning model without exchanging the local data. In the case of federated learning, we can talk about model fusion.\nPrerequisites\nNone.\n\nDesired prior knowledge: statistics and basic machine learning\n\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/462543/data-fusion" . . "Presential"@en . "FALSE" . . "Explainable ai"@en . . "6.0" . "A key component of an artificially intelligent system is the ability to explain to a human agent the decisions, recommendations, predictions, or actions made by it and the process through which they are made. Such explainable artificial intelligence (XAI) can be required in a wide range of applications. For example, a regulator of waterways may use a decision support system to decide which boats to check for legal infringements, a concerned citizen might use a system to find reliable information about a new disease, or an employer might use an artificial advice-giver to choose between potential candidates fairly. 
For explanations from intelligent systems to be useful, they need to be able to justify the advice they give in a human-understandable way. This creates a necessity for techniques for the automatic generation of satisfactory explanations that are intelligible for users interacting with the system. This interpretation goes beyond a literal explanation. Further, understanding is rarely an end-goal in itself. Pragmatically, it is more useful to operationalize the effectiveness of explanations in terms of a specific notion of usefulness or explanatory goals such as improved decision support or user trust. One aspect of intelligibility of an explainable system (often cited for domains such as health) is the ability for users to accurately identify, or correct, an error made by the system. In that case it may be preferable to generate explanations that induce appropriate levels of reliance (in contrast to over- or under-reliance), supporting the user in discarding advice when the system is incorrect, but also accepting correct advice.\n\nThe following subjects will be discussed:\n(1) Intrinsically interpretable models, e.g., decision trees, decision rules, linear regression.\n(2) Identification of violations of assumptions, such as distribution of features, feature interaction, non-linear relationships between features, and what to do about them.\n(3) Model agnostic explanations, e.g., LIME, scoped rules (Anchors), SHAP (and Shapley values)\n(4) Ethics for explanations, e.g., fairness and bias in data, models, and outputs.\n(5) (Adaptive) User Interfaces for explainable AI\n(6) Evaluation of explanation understandability\n\nPrerequisites\nData Mining or Advanced Concepts in Machine Learning.\n\nRecommended reading\nMolnar, Christoph. Interpretable Machine Learning. Lulu.com, 2020.\nRothman, Denis. 
Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps, Packt, 2020.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464255/explainable-ai" . . "Presential"@en . "FALSE" . . "Dynamic game theory"@en . . "6.0" . "The course will focus on non-cooperative games and on dynamic games in the following order: matrix and bimatrix games, repeated games, Stackelberg games, differential games, specific models of stochastic games, evolutionary games. These are games in which the players are acting as strategic decision makers, who cannot make binding agreements to achieve their goals. Instead, threats may be applied to establish stable outcomes. In addition, relations with population dynamics and with “learning” will be examined. Several examples will be taken from biological settings.\n\nPrerequisites\nDesired Prior Knowledge: Students are expected to be familiar with basic concepts from linear algebra, calculus, Markov chains and differential equations.\n\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465747/dynamic-game-theory" . . "Presential"@en . "FALSE" . . "Planning and scheduling"@en . . "6.0" . "In many real-world processes, particularly in industrial processes and logistics, decisions need to be taken about when (sub)tasks are completed and about which production machines complete them. There are often constraints on the order in which tasks, or ‘jobs’, can be performed, and there are usually capacity constraints on the machines. This leads to natural, industrially critical optimization problems. For example, a company might choose to buy many machines to process jobs, but then there is a risk that the machines will be underused, which is economically inefficient. 
On the other hand, too few machines, or an inappropriate ordering of tasks, may lead to machines spending a significant amount of time standing idle, waiting for the output of other machines, which are overcrowded with tasks. In this course, we look at various mathematical models and techniques for optimizing planning and scheduling problems, subject to different optimality criteria. We will discuss, among others, single-machine models, parallel-machine models, job-shop models, and algorithms for planning and scheduling (exact, approximate, heuristic) and we also touch upon the computational complexity (distinguishing between ‘easy’ and ‘difficult’ problems) of the underlying problems. Last but not least, we will also introduce integer linear programming as a uniform and generic tool to model and solve planning and scheduling problems.\n\nPrerequisites\nNone.\n\nDesired prior knowledge: Data Structures & Algorithms. Discrete Mathematics. Graph Theory\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463263/planning-and-scheduling" . . "Presential"@en . "FALSE" . . "Building and mining knowledge graphs"@en . . "6.0" . "Knowledge graphs are large-scale, machine-processable representations of entities, their attributes, and their relationships. Knowledge graphs enable both people and machines to explore, understand, and reuse information in a wide variety of applications such as answering questions, finding relevant content, understanding social structures, and making scientific discoveries. 
However, the sheer size and complexity of these graphs present a formidable challenge, particularly when mining across different topic areas.\n\nIn this course, we will examine approaches to construct and use knowledge graphs across a diverse set of applications using cutting-edge technologies such as machine learning and deep learning, graph databases, ontologies and automated reasoning, and other relevant techniques in the area of data mining and knowledge representation.\n\nPrerequisites\nDesired Prior Knowledge: Introduction to Computer Science\n\nRecommended reading\nAggarwal, C.C. and Wang, H. eds., (2010) Managing and mining graph data (Vol. 40). New York: Springer. ISBN 978-1-4419-6045-0\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464631/building-and-mining-knowledge-graphs" . . "Presential"@en . "FALSE" . . "Information retrieval and text mining"@en . . "6.0" . "Using today’s search engines allows us to find the needle in the haystack much more easily than before. But how do you find out what the needle looks like and where the haystack is? That is exactly the problem we will discuss in this course. An important difference with standard information retrieval (search) techniques is that they require a user to know what he or she is looking for, while text mining attempts to discover information that is not known beforehand. This is very relevant, for example, in criminal investigations, legal discovery, (business) intelligence, sentiment- & emotion mining or clinical research. Text mining refers generally to the process of extracting interesting and non-trivial information and knowledge from unstructured text. 
Text mining encompasses several computer science disciplines with a strong orientation towards artificial intelligence in general, including but not limited to information retrieval (building a search engine), statistical pattern recognition, natural language processing, information extraction and different methods of machine learning (including deep learning), clustering and ultimately integrating it all using advanced data visualization and chatbots to make the search experience easier and better.\n\nIn this course we will also discuss ethical aspects of using Artificial Intelligence for the above tasks, including the need for eXplainable AI (XAI), making the training of deep-learning large language models more energy efficient, and several ethical problems that may arise related to bias, as well as legal, regulatory and privacy challenges.\n\nThis course is closely related to the course Advanced Natural Language Processing (ANLP). In the ANLP course, the focus is more on advanced methods and architectures to deal with complex natural language tasks such as machine translation, and Q&A systems. IRTM focusses more on building search engines and using text-analytics to improve the search experience. In the IRTM course, we will use a number of the architectures that are discussed in more detail in ANLP. The overlap between the two courses is kept to a minimum. There is no need to follow the courses in a specific order.\n\nPrerequisites\nNone.\n\nRecommended reading\nIntroduction to Information Retrieval. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Cambridge University Press, 2008. In bookstore and online: http://informationretrieval.org.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464235/information-retrieval-and-text-mining" . . "Presential"@en . "FALSE" . . "Introduction to quantum computing for ai and data science"@en . . "6.0" . "In this course we lay down the foundations and basic concepts of quantum computing. 
We will use the mathematical formalism borrowed from quantum mechanics to describe quantum systems and their interactions. We introduce the concept of a quantum bit and discuss different physical realizations of it. We then introduce the basic building blocks of quantum computing: quantum measurements and quantum circuits, single and multi-qubit gates, the difference between correlated (entangled) and uncorrelated states and their representation, quantum communication, and basic quantum protocols and quantum algorithms. Finally, we discuss the different types of noise involved in real quantum computers (coherent and incoherent errors, state preparation, projection and measurement) and their effect on performance, and outline current efforts for mitigating the issues.\n\n!! This course is a prerequisite for the planned elective courses Quantum Algorithms, Quantum AI, and Quantum Information and Security, which will be offered in Semester 1 of the upcoming academic year 2024-2025. These four courses, together with a dedicated research project on quantum computing, form the specialization in Quantum Computing for AI and Data Science.\n\nPrerequisites\nNone.\n\nDesired prior knowledge: probability theory, linear algebra, design and analysis of algorithms\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/477741/introduction-quantum-computing-ai-and-data-science" . . "Presential"@en . "FALSE" . . "Symbolic computation and control"@en . . "6.0" . "This course consists of two interrelated parts. 
In the first part, we focus on basic techniques for the digital control of linear dynamical systems using feedback. We start by addressing system stability and we discuss the technique of pole placement by state feedback to solve the regulation problem. Then we introduce state observers to solve the regulation problem by output feedback. Next, we extend our scope to tracking problems. This involves the design of additional dynamics to characterize the relevant class of reference signals, which are then integrated with the earlier set-up for output feedback. Finally, we discuss the classical topic of optimal control, which can be employed to avoid using prototype systems for pole placement, and which allows the user to design a feedback law by trading off the cost involved in generating large inputs against the achieved tracking accuracy. In the second part, we address computational issues, related to the field of systems and control. Classically, computers have been designed primarily to perform approximate numerical arithmetic. Modern software packages for mathematical computation, such as Maple and Mathematica, allow one to perform exact and symbolic computation too. We shall explore this new area. It is demonstrated how speed, efficiency and memory usage considerations often lead to surprising and fundamentally different algorithmic solutions in a symbolic or exact context. Applications and examples involve stability of linear systems, model approximation, and linear matrix equations with free parameters. Practical classes serve to demonstrate the techniques and to make the student familiar with exact and symbolic computation.\n\nPrerequisites\nDesired Prior Knowledge: Linear Algebra, Calculus, Mathematical Modelling.\n\nRecommended reading\nRichard J. Vaccaro, Digital Control - A State-Space Approach, McGraw-Hill International Editions, 1995. 
ISBN 0-07-066781-0.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463211/symbolic-computation-and-control" . . "Presential"@en . "FALSE" . . "Computer vision"@en . . "6.0" . "Can we make machines look, understand and interpret the world around them? Can we make cars that can autonomously navigate in the world, robots that can recognize and grasp objects and, ultimately, recognize humans and communicate with them? How do search engines index and retrieve billions of images? This course will provide the knowledge and skills that are fundamental to core vision tasks of one of the fastest growing fields in academia and industry: visual computing. Topics include an introduction to fundamental problems of computer vision, mathematical models and computational methodologies for their solution, implementation of real-life applications and experimentation with various techniques in the field of scene analysis and understanding. In particular, after a recap of basic image analysis tools (enhancement, restoration, color spaces, edge detection), students will learn about feature detectors and trackers, fitting, image geometric transformation and mosaicing techniques, texture analysis and classification using unsupervised techniques, face analysis, deep learning based object classification, detection and tracking, camera models, epipolar geometry and 3D reconstruction from 2D views.\n\nPrerequisites\nNone.\n\nDesired prior knowledge: Basic knowledge of Python, linear algebra and machine learning. This course covers the basics of image processing, although prior knowledge is a plus.\n\nRecommended reading\n“Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley.\n“Computer Vision: Models, Learning and Inference”, Simon J.D. Prince, 2012.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/466719/computer-vision" . . "Presential"@en . "FALSE" . . "Intelligent search & games"@en . . "6.0" . 
"In this course, the students learn how to apply advanced techniques in the framework of game-playing programs. Depending on the nature of the game, these techniques can be of a more or less algorithmic nature. The following subjects will be discussed:\n(1) Basic search techniques. Alpha-beta; A*.\n(2) Advanced search techniques. IDA*; B*; transposition tables; retrograde analysis and endgame databases; proof-number search and variants; multi-player search methods; Expectimax and *-minimax variants.\n(3) Heuristics. World representations; killer moves; history heuristic; PVS; windowing techniques; null-moves; forward-pruning techniques; selective search; GOAP.\n(4) Monte Carlo methods. Monte Carlo tree search (MCTS) techniques, enhancements and applications; AlphaGo and AlphaZero approaches.\n(5) Game design. Evolutionary game design; game quality metrics; self-play evaluation; procedural content generation (PCG); puzzle design. \n\nPrerequisites\nNone.\n\nDesired Prior Knowledge: Data Structures & Algorithms.\n\nRecommended reading\nMillington, I. and Funge, J. (2009). Artificial Intelligence for Games, 2nd Edition, Morgan Kaufmann Publishers, ISBN: 978-0123747310\nRussell, S.J. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Pearson Education, New Jersey. ISBN 0-13-207148-7.\nYannakakis, G.N. and Togelius, J. (2018) Artificial Intelligence and Games, Springer, Berlin. ISBN 978-3-319-63519-4 (eBook) 978-3-319-63518-7 (hardcover)\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/465981/intelligent-search-games" . . "Presential"@en . "FALSE" . . "Agents and multi-agent systems"@en . . "6.0" . "The notion of an (intelligent) agent is fundamental to the field of artificial intelligence. Here, an agent is viewed as a computational entity such as a software program or a robot that is situated in some environment and that to some extent is able to act autonomously in order to achieve its design objectives. 
The course covers important conceptual, theoretical and practical foundations of single-agent systems (where the focus is on agent-environment interaction) and multi-agent systems (where the focus is on agent-agent interaction). Both types of agent-based systems have found their way to real-world applications in a variety of domains such as e-commerce, logistics, supply chain management, telecommunication, health care, and manufacturing. Examples of topics treated in the course are agent architectures, computational autonomy, game-theoretic principles of agent-based systems, coordination mechanisms (including auctions and voting), and automated negotiation and argumentation. Other topics such as ethical or legal aspects raised by computational agency may also be covered. In the exercises and in the practical part of the course students have the opportunity to apply the covered concepts and methods.\n\nPrerequisites\nDesired Prior Knowledge: Basic knowledge and skills in programming.\n\nRecommended reading\nStuart Russell and Peter Norvig (2010). Artificial Intelligence. A Modern Approach. 3rd edition. Prentice Hall.\nGerhard Weiss (Ed.) (2013, 2nd edition): Multi-agent Systems. MIT Press.\nMike Wooldridge (2009, 2nd edition): An Introduction to Multi Agent Systems, John Wiley & Sons Ltd.\nYoav Shoham and Kevin Leyton-Brown (2009): Multi-agent Systems. Algorithmic, Game-Theoretic, and Logical Foundations, Cambridge University Press.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/462431/agents-and-multi-agent-systems" . . "Presential"@en . "FALSE" . . "Autonomous robotic systems"@en . . "6.0" . "Operating autonomously in unknown and dynamically changing environments is a core challenge that all robotic systems must solve to work successfully in industrial, public, and private areas. Currently popular systems that must demonstrate such capabilities include self-driving cars, autonomously operating drones, and personal robotic assistants. 
In this course, students obtain deep knowledge in creating autonomous robotic systems that can operate in and manipulate unknown and dynamically changing environments by autonomously planning, analysing, mapping, and modelling such environments. Students learn to approach these challenging tasks through three main techniques: swarm intelligence, model-based probabilistic frameworks, and (mostly) model-free techniques from artificial evolution and machine learning.\n\nPrerequisites\nNone.\n\nDesired Prior Knowledge: Discrete Mathematics, Linear Algebra, Probability and Statistics, Data Structures and Algorithms, Machine Learning, Search Techniques.\n\nRecommended reading\nFloreano and Nolfi (2000), Evolutionary Robotics, The MIT Press. ISBN-13: 978-0262640565.\nDario Floreano and Claudio Mattiussi (2008), Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, ISBN-13: 978-0262062718\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/463161/autonomous-robotic-systems" . . "Presential"@en . "FALSE" . . "Reinforcement learning"@en . . "6.0" . "Reinforcement learning is a type of machine learning problem in which the learner gets a (delayed) numerical feedback signal about its demonstrated performance. It is the toughest type of machine learning problem to solve, but also the one that best encompasses the idea of artificial intelligence as a whole. In this course we will define the components that make up a reinforcement learning problem and will see what the important concepts are when trying to solve such a problem, such as state and action values, policies and performance feedback. We will look at the different properties a reinforcement learning problem can have and what the consequences of these properties are with respect to solvability. We will discuss value-based techniques as well as direct policy learning and learn how to implement these techniques. 
We will study the influence of generalisation on learning performance and see how supervised learning (and specifically deep learning) can be used to help reinforcement learning techniques tackle larger problems. We will also look at the evaluation of learned policies and the development of performance over time.\n\nPrerequisites\nNo hard prerequisites but having some background in Machine Learning and/or Data Mining will be helpful.\n\nRecommended reading\nLecture slides will be uploaded before each lecture. These slides are designed and intended as support during teaching, not as study material by themselves. They are supplied as a service, but additional note taking will be necessary to pass the class.\n\nThe book “Reinforcement Learning – An Introduction” by Sutton and Barto is freely available at: https://www.andrew.cmu.edu/course/10-703/textbook/BartoSutton.pdf\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/477745/reinforcement-learning" . . "Presential"@en . "FALSE" . . "Master in Data Science for Decision Making"@en . . "https://curriculum.maastrichtuniversity.nl/education/partner-program-master/data-science-decision-making" . "120"^^ . "Presential"@en . "Data Science for Decision Making will familiarise you with methods, techniques and algorithms that can be used to address major issues in mathematical modelling and decision making. You will also get hands-on experience in applying this knowledge through computer classes, group research projects and the thesis research. The unique blend of courses will equip you with all the knowledge and skills you’ll need to have a successful career.\n\nWidespread applications\nData Science for Decision Making links data science with making informed decisions. 
It has widespread applications in business and engineering, such as scheduling customer service agents, optimising supply chains, discovering patterns in time series and data, controlling dynamical systems, modelling biological processes, finding optimal strategies in negotiation and extracting meaningful components from brain signals. This means you'll be able to pursue a career in many different industries after you graduate.\n\nProgramme topics\nData Science for Decision Making covers the following topics:\n\n* production planning, scheduling and supply chain optimisation\n* modelling and decision making under randomness, for instance in queuing theory and simulation\n* signal and image processing with emphasis on wavelet analysis and applications in biology\n* algorithms for big data\n* estimation and identification of mathematical models, and fitting models to data\n* dynamic game theory, non-cooperative games and strategic decision making with applications in evolutionary game theory and biology\n* feedback control design and optimal control, for stabilisation and for tracking a desired behaviour\n* symbolic computation and exact numerical computation, with attention to speed, efficiency and memory usage\n* optimisation of continuous functions and of problems of a combinatorial nature"@en . . . "2"@en . "FALSE" . . "Master"@en . "Thesis" . "2314.00" . "Euro"@en . "18400.00" . "Recommended" . "Data science and big data are very important to companies nowadays, and this programme will provide you with all the training you’ll need to be active in these areas. The comprehensive education, practical skills and international orientation of the programme will open the world to you. 
When applying for positions, graduates from Data Science for Decision Making are often successful because of their problem-solving attitude, their modern scientific skills, their flexibility and their ability to model and analyse complex problems from a variety of domains.\n\nGraduates have found positions as:\n* Manager Automotive Research Center at Johnson Electric\n* Creative Director at Goal043 | Serious Games\n* Assistant Professor at the Department of Advanced Computing Sciences, Maastricht University\n* BI strategy and solutions manager at Vodafone Germany\n* Scientist at TNO\n* Digital Analytics Services Coordinator at PFSweb Europe\n* Software Developer at Thunderhead.com\n* Data Scientist at BigAlgo\n* Researcher at Thales Nederland"@en . "2"^^ . "TRUE" . "Midstream"@en . . . . . . . . . . . . . . . . . . . . . . . . . . . "Faculty of Science and Engineering"@en . .