. "Data Science, Data Analysis, Data Mining"@en . "Advances in data mining"@en . . "6" . "Traditional data mining techniques mainly focus on solving classification, regression and clustering problems. However, recent developments in ICT have led to the emergence of new sorts of massive data sets and related data mining problems. Consequently, the field of data mining has rapidly expanded to cover new areas of research, such as:\r\n\r\nprocessing huge (tera- or petabyte-scale) data sets,\r\nreal-time analysis of data streams (internet traffic, sensor data, electronic transactions, etc.),\r\nsearching for similar pairs of objects, such as texts, images or songs, in huge collections of such objects,\r\nfinding anomalies in data,\r\nclustering of massive sets of records,\r\nrecommendation systems,\r\nreduction of data dimensionality,\r\napplications of Deep Learning to data mining.\r\n\r\nDuring the course you will learn several techniques, algorithms and tools for addressing these new and challenging data mining problems:\r\n\r\nRecommender Systems: Collaborative Filtering, Matrix Factorization\r\nAlgorithms for dimensionality reduction: LLE, t-SNE, UMAP\r\nRandom Forest and XGBoost: the most popular algorithms for classification and regression trees\r\nAlgorithms for detecting anomalies in data\r\nLocality Sensitive Hashing (LSH): a general technique for finding similar items in huge collections of items\r\nAlgorithms for mining data streams: sampling, filtering (Bloom filters), probabilistic counting\r\nApplications of Deep Learning to data mining\r\nDistributed Processing of Massive Data: Hadoop, MapReduce, Spark\n\nOutcome:\nAfter completing the course, the students should:\r\n\r\nknow the most successful algorithms and techniques used in Data Mining;\r\ngain some hands-on experience with several algorithms for mining complex data 
sets;\r\nbe able to apply the acquired knowledge and skills to new problems." . . "Presential"@en . "TRUE" . . "Data analysis in astronomy and physics"@en . . "6" . "to recognize different types of astronomy and physics data analysis problems\n how to translate these types of problems into a statistical model, and understand the limitations of the model\n how to implement the statistical model in Python using existing libraries and real-world astronomical and physics datasets\n how to critically assess the numerical results, and quantify the uncertainties of the estimates and the predictions\n how to select the optimal model\n how to visualize the dataset, the model parameters and their uncertainties, and the predictions" . . "Presential"@en . "TRUE" . . "Visual geodata mining"@en . . "2" . "This course will be held by visiting professor Jukka Matthias Krisp from the University of Augsburg (Germany). The course will give a brief overview of visual geodata mining. During the course students will receive reading materials, which will later be followed by discussion. The course also contains exercises with the visual data mining programme GeoVista/GeoViz, which will be held in the computer room and supervised by prof. Krisp.\n\nOutcome:\nAfter successfully passing, the student can:\r\n* understand applications and methods of \"visual data mining\"\r\n* assess visual data mining tools (anticipated \"GeoVista/GeoViz\")\r\n* understand the overall \"visual mining process\"\r\n* use methods and applications of \"visual spatial data mining\"\r\n* evaluate methods of \"visual spatial data mining\"" . . "Hybrid"@en . "FALSE" . . "3d modelling and analysis"@en . . "6" . "The course consists of combined theoretical and practical study. 
The topics are: creation and analysis of terrain and surface models, static and dynamic models and visualisations; visibility analysis; hydrological and flood analysis; 3D modelling, analysis and visualisation in city and landscape planning.\n\nOutcome:\nKnows primary principles of handling and presenting 3D data in terrain and surface, incl. city modelling.\r\nKnows primary data structures used in 3D modelling.\r\nUnderstands and is able to apply basic methods and techniques of 3D modelling and analysis of terrain and various surfaces.\r\nIs able to visualize modelling results in 3D scenes.\r\nKnows implementations of 3D modelling and analysis." . . "Hybrid"@en . "FALSE" . . "Introduction to data science and analytics"@en . . "8" . "This course will examine how data analysis technologies can be used to improve decision-making. The aim is to study the fundamental principles and techniques of data science, and we will examine real-world examples and cases to place data science techniques in context, to develop data-analytic thinking, and to illustrate that proper application is as much an art as it is a science. In addition, this course will work hands-on with the Python programming language and its associated data analysis libraries.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Statistical simulation and data analysis"@en . . "8" . "The students will be introduced to the R programming language, which was specifically developed for analyzing data and is today widely used in most organizations that conduct data analysis. The students will learn how to explore datasets in R, using basic visualization tools and summary statistics, how to run different kinds of regressions and analyses, and how to perform statistical inference in practice, for example how to test certain hypotheses regarding the data or how to compute confidence intervals for quantities of interest. 
The students will also learn how to use R in order to conduct simulations, an extremely useful tool that can fulfill a wide range of analytical tasks. Simulation techniques covered will include Monte Carlo, importance sampling and rejection sampling. Finally, the students will learn how to estimate the precision of computed sample statistics using resampling methods. The course uses a hands-on approach, with nearly half the work done in the lab.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Big data analytics"@en . . "8" . "This course seeks a balance between foundational but relatively basic material in algorithms, statistics, graph theory and related fields, and real-world applications inspired by the current practice of internet and cloud services. Specifically, this course will look at social & information networks, recommender systems, clustering and community detection, search/retrieval/topic models, dimensionality reduction, stream computing, and online ad auctions. Together, these provide good coverage of the main uses of data mining and analytics applications in social networking, e-commerce, social media, etc. The course is a combination of theoretical materials and weekly laboratory sessions, where several large-scale datasets from the real world will be explored. For this, students will work with a dedicated infrastructure based on Hadoop & Apache Spark.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Capstone project in data science (1st phase)"@en . . "5" . "The capstone project has been designed to put knowledge into practice and to develop and improve critical skills such as problem-solving and collaboration skills. Students are matched with research labs within the UCY community and with industry partners to investigate pressing issues, applying methods from different areas of data science. 
Capstone projects aim to give students some professional experience in a real work environment and help enhance their soft skills. These projects involve groups of roughly 3-4 students working in partnership.\r\nThe process is the following:\r\n• Short descriptions of projects are announced to students. \r\n• Students bid on up to three projects, taking into account the fields of their interest or research.\r\n• The data science directors make the final assignment of projects to students. The projects are under the supervision of a member of the Programme’s academic staff.\r\n• Specific learning outcomes are stipulated in a learning agreement between the student, the supervisor and the company.\r\n• The student keeps a log file of his/her work and at the end writes a progress report (6000 words).\r\n• The company is obliged to monitor the progress of the students and to provide relevant mentorship.\r\nFinal assessment is carried out by the company and the supervisor.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Capstone project in data science (2nd phase)"@en . . "5" . "The capstone project has been designed to put knowledge into practice and to develop and improve critical skills such as problem-solving and collaboration skills. Students are matched with research labs within the UCY community and with industry partners to investigate pressing issues, applying methods from different areas of data science. Capstone projects aim to give students some professional experience in a real work environment and help enhance their soft skills. These projects involve groups of roughly 3-4 students working in partnership.\r\nThe process is the following:\r\n• Short descriptions of projects are announced to students. 
\r\n• Students bid on up to three projects, taking into account the fields of their interest or research.\r\n• The data science directors make the final assignment of projects to students. The projects are under the supervision of a member of the Programme’s academic staff.\r\n• Specific learning outcomes are stipulated in a learning agreement between the student, the supervisor and the company.\r\n• The student keeps a log file of his/her work and at the end writes a progress report (6000 words).\r\n• The company is obliged to monitor the progress of the students and to provide relevant mentorship.\r\nFinal assessment is carried out by the company and the supervisor.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Information retrieval and search engines"@en . . "8" . "_Introduction to Information Retrieval\r\n_Boolean Retrieval\r\n_Text encoding: tokenisation, stemming, lemmatisation, stop words, phrases. \r\n_Dictionaries and Tolerant retrieval \r\n_Index Construction and Compression\r\n_Scoring and Term Weighting\r\n_Vector Space Retrieval\r\n_Evaluation in information retrieval \r\n_Relevance feedback/query expansion \r\n_Text classification and Naive Bayes\r\n_Vector Space Classification\r\n_Flat and Hierarchical Clustering\r\n_Web Search Basics \r\n_Web crawling and indexes\r\n_Link Analysis\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Data visualization"@en . . "8" . "Introduction to Data visualization, Web development, Javascript, Data driven documents (D3.js), Interaction, filtering, aggregation, Perception, cognition, Designing visualizations (UI/UX), Text visualization, Graphs, Tabular data viz, Music viz, Introduction to scientific visualization, Storytelling with data / data journalism, Creative coding.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Data mining for business analytics"@en . . "8" . 
"Enterprises, organizations and individuals are creating, collecting, and using massive amounts of structured and unstructured data with the goal of converting the information into knowledge, improving the quality and efficiency of their decision-making processes, and better positioning themselves in the highly competitive marketplace. Data mining is the process of finding, extracting, visualizing and reporting useful information and insights from both small and large datasets with the help of sophisticated data analysis methods. It is part of business analytics, which refers to the process of leveraging different forms of analytical techniques to achieve desired business outcomes, requiring business relevancy, actionable insight, performance management, and value management. The students in this course will study the fundamental principles and techniques of data mining. They will learn how to apply advanced models and software applications for data mining. Finally, students will learn how to examine the overall business process of an organization or a project with the goal of understanding (i) the business context where hidden internal and external value is to be identified and captured, and (ii) exactly what the selected data mining method does.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Data mining and knowledge discovery"@en . . "6" . "Data Mining defines a process of mining potentially useful information from data. In most cases it is defined as knowledge discovery from large databases. Data Mining is a technology which unites traditional data analysis methods with modern algorithms in order to process large amounts of data. This brings a wide range of possibilities for studying and analyzing new and existing types of data, applying new methods. 
In the scope of the present course, topics such as Data Preprocessing Technologies; Classification and Cluster analysis methods and algorithms; Approaches for processing and analyzing short time series; Pattern mining in sequence data methods and algorithms; and Fuzzy logic and Swarm intelligence will be presented. Possible areas of Data Mining application will also be discussed.\n\nOutcome:\nAble to define preprocessing steps and select appropriate methods - Test, individual practical task\r\nAble to build and apply classification and clustering models for knowledge discovery - Test, individual practical task\r\nAble to analyse short time series using data mining technologies - Test, individual practical task\r\nAble to build, train and apply artificial neural networks for knowledge discovery - Test, individual practical task\r\nAble to implement fuzzy logic into data mining methods and mine knowledge in fuzzy data - Test, individual practical task" . . "Presential"@en . "TRUE" . . "Big data technologies"@en . . "5" . "The discipline \"Big Data Technologies\" is developed thematically in the areas of mathematical and conceptual models of big data analysis and knowledge data discovery (KDD), technological frameworks and software tools for big data analytics, programming languages for analytical models, and technological frameworks and streaming software tools for the analysis of Big data streams.\n\nOutcome:\nstudents will:\r\n• Gain knowledge about the innovative ecosystem of Big data, the conceptual models of this ecosystem, the types of big data analytics according to the depth of knowledge, modern platforms, technological frameworks and software tools for big data analysis and knowledge discovery, software libraries and knowledge data discovery tools.\r\n• Acquire skills for the implementation of computer models and software applications for knowledge data discovery based on analysis of Big data." . . "Presential"@en . "TRUE" . . 
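The classification and clustering outcomes listed above ("Able to build and apply classification and clustering models for knowledge discovery") can be illustrated with a minimal sketch of k-means clustering. This is not course material: the function name, the 1-D setting and the toy data are illustrative assumptions.

```python
# Minimal k-means sketch on 1-D points (illustrative, not from the course).
# Repeatedly assign each point to its nearest centroid, then move each
# centroid to the mean of its assigned points.

def kmeans_1d(points, centroids, iterations=10):
    """Return final centroids and the point-to-cluster assignment."""
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            # Index of the centroid closest to p.
            nearest = min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its members (keep it if empty).
        centroids = [sum(m) / len(m) if m else centroids[c]
                     for c, m in clusters.items()]
    return centroids, clusters

# Two obvious groups around 1 and 8; deliberately poor starting centroids.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids, clusters = kmeans_1d(points, centroids=[0.0, 10.0])
```

Real course work would use a library implementation on multidimensional data; the loop above only shows the assign/update structure that such implementations share.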
"Statistical methods for big data analysis"@en . . "4" . "The main topics concern: tests of statistical hypotheses, power of tests, multiple linear and nonlinear regression, multiple analysis of variance, testing of non-parametric hypotheses, survival analysis.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Cloud platform and services for big data"@en . . "3" . "Cloud Platform and Services for Big Data is a specialized course for the students of the specialty \"Big Data Analytics\". It considers the capabilities and challenges of distributed component-based and service-oriented architectures, cloud-based services and cloudware, application programming interfaces, taxonomy and cloud-based platforms, application development and integration technologies, data centers and cloud computing, and specific aspects such as computational load balancing, distributed transactions, authentication and authorization. Another focus is the design and implementation of portals for the provision of services through containers of portlets, as well as the implementation of workflows. The theoretical part covers modern cloud service platforms worldwide, as well as methods and tools for the development and integration of enterprise cloud applications. The practical part includes developing applications and designing and implementing portals with workflows of services. Open-source cloud computing environments, as well as RESTful Web services, are used to develop effective applications.\r\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Big data analytics for precision medicine"@en . . "3" . 
"The discipline \"Big Data Analytics for Precision Medicine\" is developed thematically in the areas of in silico technologies, in silico knowledge data discovery, Big data analytics from the ecosystem of the Internet of Medical Things (IoMT), the Internet of Medical Imaging Things, technologies for data analysis and knowledge data discovery from the ecosystem of Big data and Big streams of biological and medical data, and cloud technologies and services in the health industry.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Digital big data and computer forensics"@en . . "3" . "The course includes topics from criminology; regulations, standards and rules for working with evidence; microscopic and molecular biological methods for identifying evidence; microscopic image recognition and classification software; dactyloscopic data, digitization and organization of dactyloscopic data, automatic search software; standards and protocols for the exchange of fingerprints between different IAFIS, IDENT and EURODAC databases; genetic information as a method of identification; DNA profiling; software for organizing databases with reference DNA profiles; biometric recognition; handwriting recognition; signature recognition and image recognition.\n\nOutcome: Not Provided" . . "Presential"@en . "TRUE" . . "Inverse methods and data analysis"@en . . "6" . "The use of new sensors and autonomous observing systems has produced a wealth of high-quality data in all branches of environmental and space science. These data contain important information about distributions, fluxes or reaction rates of key properties in the universe. Inverting the datasets, e.g., calculating the underlying concentrations, fluxes and rate constants from the data, is an important aspect of data analysis, and a wide range of numerical methods is available for this task. This course offers an introduction to linear inverse methods. 
Techniques for the solution of under- and overdetermined systems of linear equations will be covered in detail. Examples of such systems are (1) linear and non-linear regression, (2) curve fitting, (3) factor analysis, (4) diagnostic tomography, (5) remote sensing from airplanes or satellites, and (6) models of atmospheric, oceanic, and space circulation and biogeochemistry. Unlike square linear systems, which are easy to solve, under- and overdetermined linear systems in general exhibit complications: (1) the numbers of equations and unknowns differ, and (2) the coefficients and right-hand sides of the equations are usually derived from measurements and thus contain errors. Basic techniques from numerical mathematics that solve these problems will be presented and explained extensively using examples from different fields. Error analysis will be of major concern. The examples cover different aspects of environmental and space research and should benefit students from the Postgraduate Environmental Physics program and the newly started Master's Degree in Space Sciences and Technologies, as well as students from other fields of physics and geophysics. A basic knowledge of linear algebra is required.\n\nOutcome:\n- Techniques for the optimal solution of under- and overdetermined systems of linear equations\r\n- Methods for calculating variances and covariances of the solutions\r\n- Concepts of resolution (in solution as well as data) and methods to calculate them\r\n- Practical examples and applications to test data sets from remote sensing of the atmosphere, earth, outer space, and celestial bodies, as well as oceanography" . . "Presential"@en . "TRUE" . . "Mining social and geographic datasets"@en . . "5" . "Description\nWe constantly leave 'digital traces' in our daily lives, both in online and offline worlds; for example our posts in online social networks. Often, this information is associated with specific geographic locations. 
Examples are GPS trajectories collected using mobile devices or geolocalised posts in online social networks. These data can be collected, analysed and exploited for many practical applications with high commercial and societal impact. This course will provide an overview of the theoretical foundations, algorithms, systems and tools for mining and discovering knowledge from social and geographic datasets and, more generally, an introduction to the emerging field of Data Science. The module aims to equip students with the foundations of a data analyst/scientist, to be able to analyse a wide array of social and geographic data in the future.\n\nLecture topics will possibly include: introduction to key concepts of data mining; an introduction to computing in Python; spatial network analysis for urban planning/design; mobility analysis and modelling; and an introduction to machine learning techniques on social media and sensing data with real-world case studies and applications." . . "Presential"@en . "FALSE" . . "Spatial-temporal data analysis and data mining (stdm)"@en . . "5" . "Description\nThis module introduces theories and techniques to visualise, model and analyse (big) spatio-temporal data. Students will be introduced to the topics of statistical modelling, data mining and machine learning, and will learn tools and techniques for spatio-temporal analysis, with an emphasis on application to real world problems. The module content covers a range of topics, which include: Exploratory spatio-temporal visualisation, Statistical modelling and forecasting, Clustering and outlier detection, Machine learning techniques (e.g. Support Vector Machines, Random Forests, Artificial Neural Networks and Deep Learning), Space-time multi-agent simulation, and Social media analysis. Lectures are supported by practical sessions, where real data is used to demonstrate the techniques, with applications such as environment, transport, crime and social media analysis. 
The software packages used include R (http://www.r-project.org/), SaTScan (http://www.satscan.org/), Python and NetLogo (https://ccl.northwestern.edu/netlogo/). The course is suitable for MSc students in GIS, Geospatial Analysis, Spatio-Temporal Analytics, Smart Cities, Computer Science and related subjects.\n\nLearning Outcomes\n\nUnderstand the basic principles and techniques of spatio-temporal analysis and modelling\nBe comfortable working with spatio-temporal data of different types in different application areas\nBe familiar with using the R statistical package for space-time analysis, modelling and visualisation\nHave a working knowledge of other software such as SaTScan and NetLogo.\nBe able to apply the tools and techniques they have learned to new datasets." . . "Presential"@en . "FALSE" . . "Data mining"@en . . "6" . "Description:\n Teaching is performed by means of lectures and practices. Lectures are devoted to the theory of data mining. Practices are conducted in the computer class, where the implementation of the main algorithms is discussed using the R language. The course consists of two parts. The first part is devoted to the main notions and problems of data mining. Main notions and concepts, such as distance functions, cluster analysis, classification, outlier analysis and associative pattern mining, are covered. 
The second part of the course is devoted to the application of this knowledge to such problems as: spatial data mining, stream data mining, graph data mining and social network analysis.\nLearning outcomes\nThe student:\r\nIs familiar with the main notions used in data mining, such as attribute, feature, and distance/similarity function.\r\nUnderstands the main problems of the data mining area: clustering, classification, outlier analysis and associative pattern mining.\r\nIs familiar with the mathematical foundations of each problem.\r\nIs able to formally state a data mining problem.\r\nIs able to choose methods to solve a given problem.\r\nIs able to program the algorithms of the most popular methods.\r\nIs able to interpret the achieved results." . . "Presential"@en . "FALSE" . . "Advanced 3d modelling"@en . . "4" . "LEARNING OUTCOMES OF THE COURSE UNIT\n\n- The student will be acquainted with EPD (Electronic Product Definition).\n- The student will be able to create more complex shapes in the SolidWorks system, including the production of sheet metal parts of the cabinet type.\n- The student will be able to master animation in the *.avi format, including creation in 3D PhotoView.\n- The student will have an overview of PDM, PLM and ERP systems as CAD system upgrades.\n- The student will be familiar with the CAM system and with the possibilities of creating NC code.\n- The student will have an overview of 3D printing options.\n- The student will have an overview of the possibilities of Computer Aided Engineering (CAE) - the output of 3D volume modeling as a basis for mathematical and physical analysis.\n\nCOURSE CURRICULUM\n\nLectures:\n1. The way technical preparation of production was done before the onset of IT, and the significant change coming with the advent of IT (CAD, PDM, PLM, ERP, CAE, CAM systems).\n2. Computer Aided Design (CAD), computer-supported design, recent development, 2D, 3D, explicit and parametric modeling.\n3. 
The principles of parametric modeling, EPD (Electronic Product Definition).\n4. Drawing documentation in the age of 3D volume modeling.\n5. PDM as an extension of CAD. Product Data Management (PDM) – enterprise data management, as a system for managing product data and related work processes: CAD models, drawings, BOMs, parts data, product specifications, NC programs, analysis results, related correspondence, etc., and document editing. PDM users: developers, designers, factory workers, project managers, and people from sales, marketing, purchasing and finance, who can also contribute to product design.\n6. Product Lifecycle Management (PLM) – product lifecycle management as a process from the first idea through design, construction and production to servicing and disposal of the product. PLM as the central repository for information: dealers' sales notes, catalogs, customer responses, marketing plans, archived project plans, and other information gathered over the life of each product.\n7. Enterprise Resource Planning (ERP). An enterprise system for computer management and integration of planning, inventory, purchasing, sales, marketing, finance, human resources, etc.\n8. Computer Aided Engineering (CAE) – engineering analysis, economic benefits, relation to experimental tests.\n9. Finite element method – basics, usage.\n10. Finite volume method – basics, usage, Computational Fluid Dynamics (CFD), calculation of fluid dynamics.\n11. Computer Aided Manufacturing (CAM) – computer-controlled, computer-aided production. The use of specialized programs and equipment to automate the production, assembly and control of products.\n12. Computer Aided Manufacturing (CAM). Methodology of NC code creation.\n13. 
CAD, CAM, CAE, PDM, PLM, ERP, CAM – overviews.\n\nPractices:\nCreation of 3D volume models with more complicated shapes.\nCreating drawn profiles.\nCreation of sheet metal parts - cabinets of switchboards.\nKinematic analysis.\nCreating a video presentation.\nMaking a video presentation in 3D PhotoView Studio.\nSolidCAM.\n3D print issues.\nAIMS\n\nThe goal is to acquaint the students with the new trend in the technical preparation of production, in which development is handled comprehensively by deploying the system from development to production. The students deepen their knowledge of making more complex 3D volume models. They obtain an overview of the possibilities of kinematic analysis, movie making in the 3D studio environment PhotoView, output to CAM and 3D printing." . . "Presential"@en . "FALSE" . . "3d modelling of the built environment"@en . . "5" . "This course provides a detailed description of the main ways in which the built environment is modelled in three dimensions, covering material from low-level data structures for generic 3D data to high-level semantic data models for cities.\n\nAt the end of the course, students should be able to:\n\n1. compare different modelling approaches, outline their relative merits and drawbacks, and choose an appropriate approach for a given use case;\n2. interpret topological properties like 2-manifoldness, and execute solutions that use these properties to store 3D models;\n3. implement several different data structures for the storage of 3D models;\n4. outline the characteristics of the main semantic open data models used in GIS (CityGML-CityJSON) and BIM (IFC), and manipulate such models at a low level;\n5. execute analyses based on a 3D city model and check their results." . . "Presential"@en . "TRUE" . . "Data science concepts"@en . . "6" . "Contents:\nThe amount and variety of data in the domains of living environment, food, health, society and natural resources increases very rapidly. 
Data thus plays an ever more central role in these areas, and careful processing and analysis can help extract information and infer new knowledge, eventually leading to new insights and a better understanding of the problem at hand. Knowledge of core concepts in data science – acquisition, manipulation, governance, presentation, exploration, analysis and interpretation – and elementary data science skills have become essential for researchers and professionals in most scientific disciplines. This course is an introduction to data science concepts, combining computer science, mathematics and domain expertise: acquiring and manipulating raw data, obtaining information by processing and exploration, and finally reaching understanding by analysis and modelling. This will be complemented by elementary skills in data wrangling, exploration and analysis. The content of the course is strongly embedded in a number of provided domain-specific cases from biology, health and nutrition and the environment, allowing students from many disciplines to appreciate the relevance of data science in their domains.\nLearning outcomes:\nAfter successful completion of this course students are expected to be able to:\n- explain the relevance of data and data science in research and application within their field of study;\n- recognize key concepts as used in data science practice and elaborated in continuation courses;\n- discuss the need for and describe approaches to data acquisition, manipulation, storage, governance, exploration, presentation, analysis and modeling;\n- apply a number of basic techniques for data wrangling, exploration and analysis in use cases related to their field of study, including practicing elementary scripting skills." . . "Presential"@en . "TRUE" . . "Data science for smart environments"@en . . "6" . 
"Contents:\nNew sources of data available from all kinds of ‘smart technologies’ such as sensors, tracking devices, crowdsourcing and social media open up possibilities to create information and gain knowledge about our environment beyond what is possible with ‘traditional’ sources of data. In particular, analyses of spatial-temporal processes and interactions between people and their environment are accelerated by these new sources of data. Examples are the movements of people (tourists) through a city and the consequences for its accessibility, or the perception of people about certain places.\nThe drawback is that these data often come in high volumes, are often ill-structured, and are often collected for a purpose different from environmental analysis. This means that (pre)processing, analysis, and visualization of such data require specific skills. This includes, for example, skills to create meaningful patterns from the data by applying (spatial) classification and clustering techniques, or applying sentiment and topic analysis techniques to, for example, social media data. Knowing how to visualize these often complex types of data is essential to effectively share and communicate the outcomes of analyses.\nMoreover, making sense of these data and transforming them into information useful for design, participation, decision-making and governance processes requires a critical attitude and good knowledge about the quality of the data, as well as critical reflection on the social and political implications of using smart technologies in environmental policy and decision-making. This course will pay ample attention to societal aspects such as citizen engagement in data gathering, ethical questions around big data and automation, and the implications of using smart technologies on social and power relations in (urban) environmental policy. 
\nTo successfully follow this course, knowledge of modern data-science concepts and techniques, such as those treated in Data Science Concepts (INF-xxxxx) or in a data science minor, is assumed.\nLearning outcomes:\nAfter successful completion of this course students are expected to be able to:\n- understand the specific aspects of applying data science in the environmental science domains;\n- evaluate the quality and understand the limitations of data sources from ‘smart technologies’;\n- design procedures to solve an information need using data-science and visualization techniques;\n- extract meaningful patterns/knowledge and synthesize them in an appropriate way such that they can be understood and used within an environmental design or planning process;\n- apply appropriate data visualization techniques to complex environmental data;\n- develop an attitude of responsibility by reflecting on the societal implications of using smart technologies and big data;\n- identify boundaries between practices and develop and demonstrate the competences necessary for crossing these boundaries." . . "Presential"@en . "TRUE" . . "Reproducible data analysis in r"@en . . "3" . "Learning outcomes\nThe student will know how to work with different data formats, and will be able to create pleasing, informative, publication-quality visualizations, fit and visualize linear models, and analyse data in an effective and reproducible manner. She will also have basic skills in using the R functions best suited for biological data analysis.\nBrief description of content\n1. file structures, git, RStudio, importing data (1)\n2. base R - indexing, vector calculation, functions (1)\n3. ggplot (2)\n4. Data munging. dplyr (3)\n5. Regular expressions. stringr (1)\n6. Working with datetimes. lubridate (1)\n7. apply/map (1-2)" . . "Presential"@en . "FALSE" . . "Data science for smart environment"@en . . "no data" . "N.A." . . "Presential"@en . "TRUE" . . "Data analysis in physics and astronomy"@en . . "3" . 
"Statistical frameworks and data analysis. Classical statistical inference. Bayesian statistical\r\ninference. Data mining and searching for structure in point data. Data dimensionality and its\r\nreduction. Regression and model fitting. Data classification. Time series analysis." . . "Presential"@en . "TRUE" . . "Data mining and cloud-based solutions"@en . . "no data" . "no data" . . "Presential"@en . "TRUE" . . "3d modelling in gis"@en . . "no data" . "no data" . . "Presential"@en . "FALSE" . . "Introduction to data science"@en . . "6" . "no data" . . "Presential"@en . "FALSE" . . "Data analysis in environmental and geosciences"@en . . "4" . "The aim of the study course is to provide opportunities for students to learn data analysis methods in environmental and Earth sciences. Details of data mining schemes in field studies, statistical data processing methods, calculation of statistics, random variable distributions, correlation, variance and regression analysis, principal component analysis, species community structure analysis, ordination and classification methods are given for students. The application of the methods is provided in practical work. Course Tasks: 1. To create and strengthen notions about the need for statistical methods in scientific research. 2. To provide knowledge about the main methods of statistical data processing in environmental and Earth sciences. 3. To introduce software packages and the possibilities of their use in data processing and interpretation. 4. To provide knowledge about the presentation of the results of statistical data processing in scientific reports and publications. Languages of instruction are Latvian and English.\nCourse responsible lecturer Zaiga Krišjāne\nResults Knowledge 1. Understands data analysis methods and their application possibilities. Skills 2. Apply knowledge of data analysis with statistical program packages. Competence 3. 
Plans research, collects and analyzes data and interprets the results obtained in practical and fundamental research projects." . . "Presential"@en . "FALSE" . . "Deep neural networks in geodata analysis"@en . . "5" . "Processing photogrammetric, remote sensing, panchromatic and multispectral images with the use of deep neural networks. Selection of the neural network type and architecture to solve a specific task. Developing the ability to use specialist software to perform professional digital transformations related to artificial intelligence." . . "Presential"@en . "FALSE" . . "Data analysis"@en . . "3" . "Complements in mathematics; the objective is to provide a number of mathematical tools allowing students to approach the different units of the ISIE course with confidence\nComputer science, data analysis: general notions around digital data (typing, format, precision, sampling, etc.) and metadata\n· Recovery / reading / writing of data files\n· Data representation (1D, 2D, 3D, nD)\n· Standard statistical analysis tools\n· Concepts of noise and its statistical properties\n· Data regression/interpolation/smoothing\n· Cross-data analysis: correlation of datasets and classification methods\n· Concepts of frequency analysis (Fourier spectrum and spectrogram)\nThroughout the module different IT tools are introduced. The basic language used is Python, but complete mastery of this language is not an objective of the module." . . "Presential"@en . "TRUE" . . "Geodata visualization"@en . . "5" . "no data" . . "Presential"@en . "FALSE" . . "Data acquisition methods"@en . . "7" . "Geodesy and Topometry\nIntroduction to Photogrammetry and Remote Sensing\nPhysics of Remote Sensing and its Applications\nImage processing" . . "Presential"@en . "TRUE" . . "Data visualization"@en . . "6" . "Mapping\nWebmapping\nProgramming under GIS\nDBMS and Concept of Spatial Cartridge" . . "Presential"@en . "TRUE" . . "Data analysis (2 ects)"@en . . "2" . "no data" . . "Presential"@en . 
"TRUE" . . "Information analysis and extraction"@en . . "10" . "Not found" . . "Presential"@en . "TRUE" . . "Data analysis and modelling"@en . . "12" . "time series analysis and filter processes;\nlinear and non-linear regression analysis;\nmathematical-physical modeling and simulation;\nunderstand numerical solution methods and apply inverse modeling and data inversion;" . . "Presential"@en . "TRUE" . . "Data science in python (md)"@en . . "5.00" . "The key objectives of this module are\n1) to provide students with an initial crash course in Python programming;\n2) to familiarise students with a range of key topics in the emerging field of Data Science through the medium of Python.\nStudents will start by exploring methods for collecting, storing, filtering, and analysing datasets. From there, the module will introduce core concepts from numerical computing, statistics, and machine learning, and demonstrate how these can be applied in practice using popular open source packages and tools. Additional topics that will be covered include data visualisation and working with textual data. This module has a strong practical programming focus and students will be expected to complete two detailed coursework assignments, each involving implementing a Python solution to a data analytics task. COMP47670 requires a reasonable level of mathematical ability, and students should have prior programming experience (but not necessarily in Python).\nThis is a Mixed Delivery module with online lectures and face-to-face practicals/tutorials.\n\nLearning Outcomes:\nOn completion of this module, students will be able to:\n1) Program competently using Python and be familiar with a range of Python packages for data science;\n2) Collect, pre-process and filter datasets;\n3) Apply and evaluate machine learning algorithms in Python;\n4) Visualise and interpret the results of data analysis procedures." . . "Blended"@en . "FALSE" . . "Physics data analysis (python)"@en . . "5.00" . 
"The aim is to provide students with a strong grounding in the analysis of experimental Physics data in the Python programming language. The contents will cover the basics of statistics, error analysis and propagation of errors, curve fitting and parameter estimation, chi-squared tests for goodness of fit, Monte Carlo simulations and maximum likelihood methods. Python topics will be intertwined with data analysis topics to build Python skills at the same time. Students will learn by doing examples themselves in class in an Active Learning Room environment, as well as through assignments. The error analysis section of the course will pay close attention to the Guide to the Expression of Uncertainty in Measurement (GUM) reference document adopted by many scientific organisations and industries.\n\nLearning Outcomes:\nHave an understanding of experimental measurement and uncertainties, including statistical and systematic errors, and to use appropriate precision when quoting uncertainties.\n\nUnderstand the fundamental statistical distributions that apply to physical measurements.\n\nBe able to characterise data through parameters such as the mean, standard deviation, covariance, weighted mean and uncertainties on the weighted mean.\n\nBe able to propagate errors on measurements through functions of those measurements, both analytically and numerically.\n\nBe able to fit a function to a set of experimental data to derive best-fit parameters including the uncertainties on the parameters, and to use the best-fit covariance matrix to calculate confidence intervals.\n\nBe able to apply a chi-squared test to assess goodness of fit and an F-test to assess whether extra parameters for nested functions significantly improve the fit.\n\nBe able to apply the Kolmogorov–Smirnov test and chi-squared tests to compare two distributions.\n\nHave an understanding of and be able to apply the Permutation test and Bootstrap/Jackknife tests.\n\nBe able to apply the Method of Maximum Likelihood, 
including the Likelihood Ratio Test, for parameter estimation and significance estimation.\n\nBe able to do all of the above in Python using appropriate libraries." . . "Presential"@en . "FALSE" . . "Representing and manipulating data"@en . . "20" . "Knowing how to write scripts is essential for anyone who works with data. This module introduces you to the use of the programming language Python for manipulating data." . . "Presential"@en . "TRUE" . . "Analysis of environmental data"@en . . "20" . "The ability to analyse complex data is a vital skill in a scientist's toolbox, whether working with experimental or observational data. This module introduces data analysis in the framework of linear modelling using the open-access R software." . . "Presential"@en . "FALSE" . . "Commercial and scientific applications"@en . . "20" . "Data Science is a rapidly emerging discipline at the intersection of computer science, statistics, and application domains. The main goal of data science is to extract knowledge and insight from data, which can then be turned into positive action. \n \n This module introduces the fields of Data Science and Big Data. It covers the fundamental steps of the data science process, as well as some specific techniques and case studies. Guest speakers from industry and academia will appear to discuss their data science applications. \n \n There are no programming labs or hands-on data analysis in this module. This is left to other modules in the programme. The idea is instead to provide a high-level discussion of important principles and study the conceptual steps of the data science process. The module is assessed by a critical essay on a specific data science case study, from a recent journal article, selected by the student from a given preselected set of articles. 
On successful completion of the module, you should be able to:\n \n have a critical knowledge of the fields of data science and big data, including an understanding of the skills required from a data scientist;\n develop a critical understanding of the key stages of a data science project, and apply this knowledge while critically analysing a real-world data science case study;\n develop a critical awareness of current issues in real-world practical commercial and scientific applications of data science;\n communicate, using written, oral and visual methods, the development and findings of a successful real-world application of data science." . . "Presential"@en . "FALSE" . . "Data analytics"@en . . "3.0" . "This module introduces computational approaches to process numerical data on a large scale. Computation on arrays of continuous variables underpins machine learning, data analytics, and signal processing. Topics include vectorized operations on numerical arrays, fundamental stochastic and probabilistic methods, and scientific visualization; manipulating continuous data, specifying problems in a form that can be solved numerically, dealing with unreliable and uncertain information, and communicating these results; operations on vectors and matrices, specifying and solving problems via numerical optimization, time series modelling, scientific visualization and basic probabilistic computation." . . "Presential"@en . "TRUE" . . "Data analysis in astronomy"@en . . "6.0" . "### Teaching language\n\nEnglish \n_Note: Classes will be taught in Portuguese if all students are fluent in that language._\n\n### Objectives\n\nThe general objective of this lecture course is to familiarize students with some techniques currently used in data analysis in Astronomy. 
In particular, it is intended that students develop an understanding of the main concepts underpinning the process of scientific inference and become capable of applying them when trying to solve problems in Astronomy.\n\n### Learning outcomes and competences\n\nIt is expected that the student will be able to apply the methods associated with the process of scientific inference to the analysis of data and the resolution of problems in Astronomy.\n\n### Working method\n\nPresential\n\n### Program\n\n\\- Deductive and inductive inference in the scientific method. \n\\- Parameter estimation and model comparison in Physics and Astronomy: exemplification through the analysis of spectra and detection of sources. \n\\- Analytical fitting of linear physical models in the presence of Gaussian uncertainties. \n\\- Computational fitting of nonlinear physical models. \n\\- Analysis of time series and images. \n\\- Definition of experimental and observational strategies in Physics and Astronomy.\n\n### Mandatory literature\n\nP. C. Gregory; Bayesian Logical Data Analysis for the Physical Sciences, 2005 \nW. von der Linden, V. Dose, U. von Toussaint; Bayesian Probability Theory: Applications in the Physical Sciences, 2014 \n\n### Complementary Bibliography\n\nS. Andreon, B. Weaver; Bayesian Methods for the Physical Sciences, 2015 \nBailer-Jones, C.A.L.; Practical Bayesian Inference: A Primer for Physical Scientists, 2017 \nJ.M. Hilbe, R.S. de Souza and E.E.O. Ishida; Bayesian Models for Astrophysical Data, 2017 \n\n### Teaching methods and learning activities\n\nIn the theoretical-practical classes, the syllabus is explained and its application exemplified. Problems illustrating the concepts presented are also solved, and discussion is promoted in the classroom, contributing to the consolidation of knowledge and the development of a critical mind. 
In the practical-laboratorial classes, methods and techniques are implemented that can be used in the context of the analysis of data, such as spectra, time series and images, relevant for Physics and Astronomy.\n\n### Evaluation Type\n\nDistributed evaluation with final exam\n\n### Assessment Components\n\nExam: 35,00%\nWritten assignment: 65,00%\n**Total:**: 100,00%\n\n### Amount of time allocated to each course unit\n\nAutonomous study: 106,00 hours\nFrequency of lectures: 56,00 hours\n\n**Total:**: 162,00 hours\n\n### Eligibility for exams\n\nIn the final exam students are required to obtain a minimum classification of 8 out of 20.\n\n### Calculation formula of final grade\n\nThe final classification is given by: Nf=0.35\\*Ex+0.35\\*Tr1+0.30\\*Tr2, where Nf is the final classification (cannot be below 10 on a scale of 0 to 20), Ex is the classification in the final exam (cannot be below 8 on a scale of 0 to 20), and Tr1 and Tr2 are the overall classifications in the first and second practical work tasks with written report, respectively (each between 0 and 20).\n\n### Examinations or Special Assignments\n\nPractical work tasks with required submission of written reports will be given to all students, and their classification will have a weight of 65 per cent towards the final classification.\n\n### Classification improvement\n\nThe improvement of the final classification can be made only by improving the classification in the written exam, which will still have a weight of 35 per cent in the final classification. It will not be possible to improve the classification in the practical work tasks.\n\nMore information at: https://sigarra.up.pt/fcup/en/ucurr_geral.ficha_uc_view?pv_ocorrencia_id=498806" . . "Presential"@en . "TRUE" . . "Data mining I"@en . . "6.0" . "https://sigarra.up.pt/fcup/en/ucurr_geral.ficha_uc_view?pv_ocorrencia_id=507414" . . "Presential"@en . "FALSE" . . "Data mining II"@en . . "6.0" . 
"https://sigarra.up.pt/fcup/en/ucurr_geral.ficha_uc_view?pv_ocorrencia_id=507421" . . "Presential"@en . "FALSE" . . "Statistical methods in data mining"@en . . "6.0" . "https://sigarra.up.pt/fcup/en/ucurr_geral.ficha_uc_view?pv_ocorrencia_id=502139" . . "Presential"@en . "FALSE" . . "Big data"@en . . "6" . "understand the principles of knowledge discovery and the methods for data mining;\nuse a software framework to design, implement and deploy a solution for big data analytics" . . "Presential"@en . "TRUE" . . "Data mining"@en . . "6.0" . "Data mining is a major frontier field of computer science. It allows extracting useful and interesting patterns and knowledge from large data repositories such as databases and the Web. Data mining integrates techniques from the fields of databases, machine learning, statistics, and artificial intelligence. This course will present state-of-the-art techniques of data mining. The lectures and labs will emphasize the practical use of the presented techniques and the problems of developing real data-mining applications. A step-by-step introduction to data-mining environments will enable the students to achieve specific skills, autonomy, and hands-on experience. A number of real data sets will be analysed and discussed.\n\nPrerequisites\nNone.\n\nRecommended reading\nPang-Ning, T., Steinbach, M., Karpatne, A., and Kumar, V. (2018). Introduction to Data Mining, 2nd Edition, Pearson, ISBN-10: 0133128903, ISBN-13: 978-0133128901\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/462683/data-mining" . . "Presential"@en . "TRUE" . . "Interactive data visualization"@en . . "3" . "understand the main concepts behind human-computer interaction;\n design effective GUIs;\n elaborate visualization strategies to ease understanding of the data." . . "Presential"@en . "TRUE" . . "Building and mining knowledge graphs"@en . . "6.0" . 
"Knowledge graphs are large-scale, machine-processable representations of entities, their attributes, and their relationships. Knowledge graphs enable both people and machines to explore, understand, and reuse information in a wide variety of applications such as answering questions, finding relevant content, understanding social structures, and making scientific discoveries. However, the sheer size and complexity of these graphs present a formidable challenge, particularly when mining across different topic areas.\n\nIn this course, we will examine approaches to construct and use knowledge graphs across a diverse set of applications using cutting-edge technologies such as machine learning and deep learning, graph databases, ontologies and automated reasoning, and other relevant techniques in the area of data mining and knowledge representation.\n\nPrerequisites\nDesired Prior Knowledge: Introduction to Computer Science\n\nRecommended reading\nAggarwal, C.C. and Wang, H. eds., (2010) Managing and mining graph data (Vol. 40). New York: Springer. ISBN 978-1-4419-6045-0\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464631/building-and-mining-knowledge-graphs" . . "Presential"@en . "FALSE" . . "Information retrieval and text mining"@en . . "6.0" . "Using today’s search engines allows us to find the needle in the haystack much more easily than before. But how do you find out what the needle looks like and where the haystack is? That is exactly the problem we will discuss in this course. An important difference with standard information retrieval (search) techniques is that they require a user to know what he or she is looking for, while text mining attempts to discover information that is not known beforehand. This is very relevant, for example, in criminal investigations, legal discovery, (business) intelligence, sentiment and emotion mining, or clinical research. 
Text mining refers generally to the process of extracting interesting and non-trivial information and knowledge from unstructured text. Text mining encompasses several computer science disciplines with a strong orientation towards artificial intelligence in general, including but not limited to information retrieval (building a search engine), statistical pattern recognition, natural language processing, information extraction and different methods of machine learning (including deep learning), clustering, and ultimately integrating it all using advanced data visualization and chatbots to make the search experience easier and better.\n\nIn this course we will also discuss ethical aspects of using Artificial Intelligence for the above tasks, including the need for eXplainable AI (XAI), making the training of deep-learning large language models more energy-efficient, and several ethical problems that may arise related to bias and to legal, regulatory and privacy challenges.\n\nThis course is closely related to the course Advanced Natural Language Processing (ANLP). In the ANLP course, the focus is more on advanced methods and architectures to deal with complex natural language tasks such as machine translation and Q&A systems. IRTM focusses more on building search engines and using text analytics to improve the search experience. In the IRTM course, we will use a number of the architectures that are discussed in more detail in ANLP. The overlap between the two courses is kept to a minimum. There is no need to follow the courses in a specific order.\n\nPrerequisites\nNone.\n\nRecommended reading\nIntroduction to Information Retrieval. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Cambridge University Press, 2008. In bookstore and online: http://informationretrieval.org.\n\nMore information at: https://curriculum.maastrichtuniversity.nl/meta/464235/information-retrieval-and-text-mining" . . "Presential"@en . "FALSE" . . "Geodata science practical workshop"@en . . "6" . 
"use their technical skills to solve a real-world challenge\n develop an end-to-end solution, innovate and collaborate" . . "Presential"@en . "FALSE" . . "Data analysis and statistical modelling"@en . . "6.0" . "Prerequisites\nProbability and Statistics.\n\nObjectives\n- Introduction to Applied Statistics and its relevance in Data Science. - Analyze real data using statistical methods to extract relevant information about them and solve practical problems using statistical software. - Know the advantages and limitations of various statistical methodologies to make the most of them in solving real problems. - Find statistical evidence in the data based on models fitted to the observations collected. Draw inferences about hypotheses of interest associated with the selected models. - Solve a real problem using the knowledge accumulated in this course: computational project.\n\nProgram\n1. Exploratory data analysis: (i) Introduction to R. (ii) Visualization of different types of data. (iii) Treatment of missing values. (iv) Outlier detection. 2. Dimensionality reduction: principal component analysis. Covariance and correlation matrices. 3. Regression models: Gaussian, Logistic, Poisson. Variable Selection. Diagnostic Techniques. Model validation. Prediction. 4. Modeling independent data versus time-dependent data. 5. Resampling methods: Jackknife, bootstrap, permutation testing and cross-validation. 6. Elements of the Bayesian methodology: a priori representation (conjugate and non-informative distributions), inference by the Bayes theorem and applications to real data problems. 7. Classification: Total probability of misclassification, Fisher linear discriminant analysis, Bayes classification rule. 
Evaluation of the performance of a classification rule.\n\nEvaluation Methodology\nA Test of 1h30m (50%), with a minimum grade of 8.0, and a Computational Project (50%)\n\nCross-Competence Component\nCritical and Innovative Thinking - Project realization involves components of strategic thinking, critical thinking, creativity, and problem-solving strategies without explicit evaluation. Intrapersonal Competencies - Project realization involves components of productivity and time management, stress management, proactivity and initiative, intrinsic motivation and decision making without explicit evaluation. Interpersonal Skills - In assessing the project report, 10% of the rating is given to the form of the reports and 10% of the rating is given to the oral presentation and discussion of the project.\n\nLaboratorial Component\nLaboratory work performed with the help of R (or equivalent).\n\nProgramming and Computing Component\nThe laboratory and project work involve R programming. The evaluation percentage in this component is 50%.\n\nMore information at: https://fenix.tecnico.ulisboa.pt/cursos/lerc/disciplina-curricular/845953938490004" . . "Presential"@en . "TRUE" . . "Advanced visualization methods"@en . . "4" . "understand the relations of advanced visualization methods to associated fields;\n understand the fundamentals of advanced visualization methods;\n understand key criteria for developing visualization research projects;\n create advanced visualization applications using contemporary programming languages and frameworks;" . . "Presential"@en . "FALSE" . . "Introduction to data science"@en . . "20.0" . "https://portal.stir.ac.uk/calendar/calendar.jsp?modCode=CSCU9S2&_gl=1*ctz2u9*_ga*MTY1OTcwNzEyMS4xNjkyMDM2NjY3*_ga_ENJQ0W7S1M*MTY5MjAzNjY2Ny4xLjEuMTY5MjAzOTA0NS4wLjAuMA.." . . "Presential"@en . "FALSE" . . "Scripting for data science (cscu9m3)"@en . . "20.0" . 
"https://portal.stir.ac.uk/calendar/calendar.jsp?modCode=CSCU9M3&_gl=1*nwt5vt*_ga*MTY1OTcwNzEyMS4xNjkyMDM2NjY3*_ga_ENJQ0W7S1M*MTY5MjAzNjY2Ny4xLjEuMTY5MjAzOTM5NC4wLjAuMA.." . . "Presential"@en . "FALSE" . . "Special topics in earth data analytics"@en . . "6" . "Specialization in Earth Data Analytics and its applications in Geoinformatics and Earth Observation. Upon completion of this course, it is expected that the learner will be able to: (1) critically evaluate methodological and practical advantages and disadvantages of analytical methods discussed in the module, (2) select appropriate methods for solving real-world geographical problems." . . "Presential"@en . "FALSE" .