
O.P. Jindal Global University

Introduction to Data Science (Public Policy)

O.P. Jindal Global University via Coursera

Overview

Data is everywhere: historical documents, literature and poems, diaries, political speeches, government documents, emails, text messages, social media, images, maps, cell phones, wearable sensors, parking meters, credit card transactions, Zoom, surveillance cameras. Combined with rapidly expanding computational power and increasingly sophisticated algorithms, we have an explosion of digital data around us. Privacy, ethics, surveillance, bias, and discrimination are some of the obvious policy issues emanating from these data sources. But there is also incredible potential for better understanding the social world, and the potential to use data for good.

In this course we will explore how data and digital material can be leveraged to better understand social issues. We will devote a substantial component of the course to the technical skills necessary to access and analyze data (that is, programming in Python!), to best practices in research design, and to the practical knowledge we and others can produce using digital data and methods.

By the end of the course you should be able to:

1. Know enough Python basics to qualify as, at a minimum, a novice programmer
2. List different types of digital data (e.g., delimiter-separated files, raw text, JSON), write Python code to input and process each type, and explain how and why you might use each data type in research
3. Write Python code to collect and structure digitized data, including from APIs, process the data, and produce visualizations and/or output to explore or analyze the data
4. Explain what the output from computational methods means, and derive insights about the social world from the output and visualizations
5. Feel comfortable learning new techniques and new Python libraries on your own

Syllabus

  • Variables, Expressions, Statements, Conditionals
    • This module introduces Python programming using Jupyter Notebook, accessible via Anaconda or Google Colab. It begins with setting up the environment and executing Python code. Learners will explore fundamental concepts such as printing values, identifying variable types, and working with different data types. The module covers statements, expressions, and operators, including arithmetic, comparison, and assignment operators. A dedicated section on strings introduces string operations and manipulation. Logical and Boolean expressions, along with conditional statements (if, else, elif), are explored to understand decision-making in Python, including nested and chained conditionals. User input handling is also covered to enable interactive programming. The module concludes with an introduction to Markdown, helping learners document their work effectively in Jupyter Notebook.
  • Functions, Strings, Lists and Iterations
    • The second module explores key programming concepts, beginning with built-in and user-defined functions to enhance code reusability and efficiency. It covers string methods, including splitting strings for text manipulation. Learners will also delve into list operations such as slicing, membership testing with the in operator, and joining lists. Iterations, including loops, are introduced to automate repetitive tasks, followed by combining loops and conditionals to create dynamic and logical programs. The module concludes with practice exercises to reinforce these concepts and improve problem-solving skills.
  • Iterations, While and for Looping
    • The third module covers iterations, while loops, and for loops in greater detail. We will learn how to update variables, write while loops, handle infinite while loops, and finish iterations using the "continue" statement. We will also write definite loops using for statements, and count and sum values iteratively while going through loops. We will learn how to find maximum and minimum elements, typically in a list, using loops. Finally, we will iterate through lists and learn how to debug, which becomes important as you do more advanced programming.
  • Introduction to Data Exploration and Statistics
    • The fourth module focuses on handling and analyzing data efficiently. It begins with understanding relative file structures for accessing and organizing files. Learners will explore Pandas DataFrames, a powerful data structure for managing datasets, along with slicing techniques to extract specific data. The module covers summary statistics to describe datasets and methods for comparing differences between means. Visualization techniques using Matplotlib and Seaborn will be introduced, including histograms, scatterplots, and barplots for effective data representation. Finally, practice exercises will reinforce these concepts, enabling learners to apply data analysis and visualization techniques effectively.
  • Introduction to Data Visualizations, Text Analysis and Dictionaries
    • The fifth module delves into essential data structures and text processing techniques. It begins with tuples and dictionaries, exploring their properties and use cases. Learners will then cover list and dictionary comprehension, which provide efficient ways to create and manipulate data structures. The module introduces fundamental text analysis concepts, including counting words, calculating the type-token ratio, and analyzing word frequencies. Next, it covers tokenizing text and preprocessing, essential steps for cleaning and structuring textual data. Additionally, learners will practice reading text files to extract and analyze information. The module concludes with practice exercises to reinforce these concepts through hands-on experience.
  • Introduction to Natural Language Processing using NLTK
    • The sixth module introduces natural language processing using NLTK, building on the text analysis and preprocessing techniques developed in the previous module.
  • APIs and JSON
    • The seventh and final module introduces accessing and extracting data from the web. It begins with accessing databases via Web APIs, followed by constructing API GET requests to retrieve data. Learners will then explore parsing response texts and JSON files to extract meaningful information, such as counting the number of articles. The module also covers web scraping using BeautifulSoup, enabling automated data extraction from websites.
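To give a flavor of the first module's topics, here is a minimal sketch of variables, types, operators, and chained conditionals. The city and population figures are purely illustrative:

```python
# Illustrative sketch of module 1 topics: variables, types, operators,
# and chained conditionals. The city and figures are made up.
city = "Delhi"               # string
population_millions = 32.9   # float
is_capital = True            # boolean
print(type(city), type(population_millions), type(is_capital))

# Arithmetic and assignment operators
growth_rate = 0.02
projected = population_millions * (1 + growth_rate)

# Chained conditionals: if / elif / else
if projected > 50:
    label = "extremely large"
elif projected > 10:
    label = "megacity"
else:
    label = "large city"
print(city, "projection:", round(projected, 1), "->", label)
```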
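The second module's themes — user-defined functions, string methods, list slicing, membership testing, and loops combined with conditionals — can be sketched as follows; the sample sentences and the helper function name are hypothetical:

```python
# Sketch of module 2 topics: a user-defined function combining string
# methods, membership testing, loops, and conditionals.
def count_keyword(sentences, keyword):
    """Return how many sentences contain the keyword (case-insensitive)."""
    count = 0
    for sentence in sentences:
        words = sentence.lower().split()   # string method: split into words
        if keyword.lower() in words:       # membership test with `in`
            count += 1
    return count

speeches = [
    "Data is everywhere",
    "Policy shapes outcomes",
    "Good data informs good policy",
]
print(count_keyword(speeches, "data"))   # counts sentences mentioning "data"
print(" | ".join(speeches[:2]))          # list slicing and joining
```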
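The counting, summing, and maximum/minimum patterns from the third module, plus a while loop that skips values with `continue`, might look like this (the turnout figures are made up):

```python
# Sketch of module 3 topics: counting, summing, and finding extremes
# with a for loop, then a while loop using `continue` to skip values.
turnout = [67.4, 59.1, 71.8, 64.3, 66.0]  # hypothetical turnout percentages

total = 0.0
count = 0
largest = None
smallest = None
for value in turnout:
    total += value
    count += 1
    if largest is None or value > largest:
        largest = value
    if smallest is None or value < smallest:
        smallest = value
print(count, total / count, largest, smallest)

# While loop with continue: sum only values above 65
i = 0
above_65 = 0.0
while i < len(turnout):
    value = turnout[i]
    i += 1
    if value <= 65:
        continue      # skip this iteration, move to the next value
    above_65 += value
print(above_65)
```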
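The fourth module's pandas material — DataFrames, boolean slicing, summary statistics, and comparing group means — can be sketched as below; the states and literacy figures are invented for illustration:

```python
import pandas as pd

# Sketch of module 4 topics: a small DataFrame, boolean slicing,
# summary statistics, and a difference between group means.
df = pd.DataFrame({
    "state": ["A", "B", "C", "D"],
    "literacy": [74.0, 66.1, 94.0, 82.3],
    "urban": [True, False, True, True],
})

subset = df[df["urban"]]                  # boolean slicing: urban states only
print(subset[["state", "literacy"]])

print(df["literacy"].describe())          # summary statistics
diff = subset["literacy"].mean() - df.loc[~df["urban"], "literacy"].mean()
print(diff)                               # difference between group means
```

Histograms, scatterplots, and barplots of columns like these are then drawn with Matplotlib and Seaborn.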
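The fifth module's text-analysis ideas — word counting with a dictionary, dictionary comprehension, and the type-token ratio — reduce to a few lines; the sample text is made up:

```python
# Sketch of module 5 topics: word frequencies with a dictionary,
# a dictionary comprehension, and the type-token ratio.
text = "data for good data for policy"
tokens = text.lower().split()            # simple whitespace tokenization

counts = {}
for word in tokens:
    counts[word] = counts.get(word, 0) + 1

# Dictionary comprehension: keep only words appearing more than once
frequent = {w: c for w, c in counts.items() if c > 1}

type_token_ratio = len(counts) / len(tokens)   # unique words / total words
print(counts)
print(frequent)
print(type_token_ratio)
```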
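Finally, the JSON-parsing step from the seventh module can be sketched offline. The response structure below is hypothetical; in practice a Web API call such as `response = requests.get(url, params={...})` would produce the JSON text:

```python
import json

# Sketch of module 7 topics: parsing a JSON response of the kind a
# Web API GET request returns. The payload below is hypothetical.
raw = '{"status": "ok", "articles": [{"title": "Budget 2024"}, {"title": "Census delays"}]}'

data = json.loads(raw)           # parse the JSON text into a dict
print(data["status"])
print(len(data["articles"]))     # e.g., counting the number of articles
for article in data["articles"]:
    print(article["title"])
```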

Taught by

Sushant Kumar
