. "Explainable AI"@en . . "6.0" . "A key component of an artificially intelligent system is the ability to explain to a human agent the decisions, recommendations, predictions, or actions it makes and the process through which they are made. Such explainable artificial intelligence (XAI) can be required in a wide range of applications. For example, a regulator of waterways may use a decision support system to decide which boats to check for legal infringements, a concerned citizen might use a system to find reliable information about a new disease, or an employer might use an artificial advice-giver to choose fairly between potential candidates. For explanations from intelligent systems to be useful, the systems need to justify the advice they give in a human-understandable way. This creates a need for techniques that automatically generate satisfactory explanations, intelligible to the users interacting with the system. Such interpretation goes beyond a literal explanation. Further, understanding is rarely an end-goal in itself. Pragmatically, it is more useful to operationalize the effectiveness of explanations in terms of a specific notion of usefulness or explanatory goals, such as improved decision support or user trust. One aspect of the intelligibility of an explainable system (often cited for domains such as health) is the ability for users to accurately identify, or correct, an error made by the system. 
In that case it may be preferable to generate explanations that induce appropriate levels of reliance (in contrast to over- or under-reliance), supporting the user in discarding advice when the system is incorrect, but also in accepting correct advice.\n\nThe following subjects will be discussed:\n(1) Intrinsically interpretable models, e.g., decision trees, decision rules, linear regression.\n(2) Identification of violations of assumptions, such as the distribution of features, feature interactions, and non-linear relationships between features, and what to do about them.\n(3) Model-agnostic explanations, e.g., LIME, Scoped Rules (Anchors), SHAP (and Shapley values).\n(4) Ethics for explanations, e.g., fairness and bias in data, models, and outputs.\n(5) (Adaptive) user interfaces for explainable AI.\n(6) Evaluation of explanation understandability.\n\nPrerequisites\nData Mining or Advanced Concepts in Machine Learning.\n\nRecommended reading\nMolnar, Christoph. Interpretable Machine Learning. Lulu.com, 2020.\nRothman, Denis. Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps. Packt, 2020.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464255/explainable-ai" . . "Presential"@en . "FALSE" . . "Others"@en .
"Master in Data Science for Decision Making"@en . . "https://curriculum.maastrichtuniversity.nl/education/partner-program-master/data-science-decision-making" . "120"^^ . "Presential"@en . "Data Science for Decision Making will familiarise you with methods, techniques and algorithms that can be used to address major issues in mathematical modelling and decision making. You will also get hands-on experience in applying this knowledge through computer classes, group research projects and the thesis research. The unique blend of courses will equip you with all the knowledge and skills you’ll need to have a successful career.\n\nWidespread applications\nData Science for Decision Making links data science with making informed decisions. It has widespread applications in business and engineering, such as scheduling customer service agents, optimising supply chains, discovering patterns in time series and data, controlling dynamical systems, modelling biological processes, finding optimal strategies in negotiation and extracting meaningful components from brain signals. 
This means you'll be able to pursue a career in many different industries after you graduate.\n\nProgramme topics\nData Science for Decision Making covers the following topics:\n\n* production planning, scheduling and supply chain optimisation\n* modelling and decision making under randomness, for instance in queuing theory and simulation\n* signal and image processing with emphasis on wavelet analysis and applications in biology\n* algorithms for big data\n* estimation and identification of mathematical models, and fitting models to data\n* dynamic game theory, non-cooperative games and strategic decision making with applications in evolutionary game theory and biology\n* feedback control design and optimal control, for stabilisation and for tracking a desired behaviour\n* symbolic computation and exact numerical computation, with attention to speed, efficiency and memory usage\n* optimisation of continuous functions and of problems of a combinatorial nature"@en . . . "2"@en . "FALSE" . . "Master"@en . "Thesis" . "2314.00" . "Euro"@en . "18400.00" . "Recommended" . "Data science and big data are very important to companies nowadays, and this programme will provide you with all the training you’ll need to be active in these areas. The comprehensive education, practical skills and international orientation of the programme will open the world to you. 
When applying for positions, graduates from Data Science for Decision Making are often successful because of their problem-solving attitude, their modern scientific skills, their flexibility and their ability to model and analyse complex problems from a variety of domains.\n\nGraduates have found positions as:\n* Manager Automotive Research Center at Johnson Electric\n* Creative Director at Goal043 | Serious Games\n* Assistant Professor at the Department of Advanced Computing Sciences, Maastricht University\n* BI strategy and solutions manager at Vodafone Germany\n* Scientist at TNO\n* Digital Analytics Services Coordinator at PFSweb Europe\n* Software Developer at Thunderhead.com\n* Data Scientist at BigAlgo\n* Researcher at Thales Nederland"@en . "2"^^ . "TRUE" . "Midstream"@en .