(HNUH248O) We the (Artificial) People - How AI Has Reshaped Politics

Class Overview

Artificial intelligence (AI) has had profound impacts on the modern political landscape, in the US and abroad. This course encourages the critical evaluation of how AI has impacted political behavior, altered the public’s relationships to politicians (for good or ill), and opened new threats such as foreign electoral interference, disinformation, and manipulation through deepfakes and generative language models. Students will debate what ethical, fair, transparent, and accountable AI should look like.

Required Background

No prior background is required, as this class will be taught at the introductory level. Familiarity with the US political system or US civics would be valuable but is not necessary.

Student Learning Outcomes

At the end of this course, students will be able to do the following:

  1. Describe the tensions between an innovation society built on strong protections of privacy and intellectual property and technologies that rely on large amounts of (potentially sensitive or copyrighted) data to create new and innovative material
  2. Explain how AI systems learn from, influence, and reshape your information diet, and how these changes can affect political knowledge and behavior
  3. Identify and describe strategies that actors may use to leverage, influence, or exploit AI systems for political, economic, or personal gain
  4. Identify multiple areas where bias in AI systems or their training data can lead to unequal distributions of harm
  5. Describe governance and regulation efforts around artificial intelligence to protect against threats (both real and perceived) to national security, privacy, health, equity, and other areas
  6. Develop multiple arguments to prioritize different sensitive aspects of harm reduction in the context of AI regulation, describing potential regulatory interventions to reduce these harms

Things We Won’t Teach

This course will NOT teach:

  • How to code AI algorithms
  • How to estimate data models
  • The mathematical formalisms behind AI or machine learning algorithms
  • The nitty-gritty details of specific algorithm classes (e.g., model architectures such as convolutional neural networks)

Textbooks

While we will mostly engage with readings from academic venues and the popular press, we will also include readings from the following textbooks:

Course Modules

  1. Module 1 - What is AI and Other Major Questions
  2. Module 2 - AI’s Influence on Information Diets
  3. Module 3 - AI and Political Communication
  4. Module 4 - AI and Political Participation
  5. Module 5 - AI and Regulation
  6. Module 6 - AI, International Relations, and Military Uses of AI

Daily Reading Assignments

Module 1. What is AI and Other Major Questions

Lesson 1. Course Introduction

Reading List: None

Lesson 2: Is AI More Risk Than Reward? (Aug 28)

Reading List:

  • Christodoulou E and Iordanou K (2021) Democracy Under Attack: Challenges of Addressing Ethical Issues of AI and Big Data for More Democratic Digital Media and Societies. Frontiers in Political Science. 3:682945. doi: 10.3389/fpos.2021.682945. Available at https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2021.682945/full
  • Chapter 3, “Why Does [AI] Matter?”, of Artificial intelligence: How does it work, why does it matter, and what can we do about it? https://www.europarl.europa.eu/stoa/en/document/EPRS_STU(2020)641547
  • Kathy Baxter and Yoav Schlesinger, Managing the Risks of Generative AI. Harvard Business Review. 6 June 2023. Available at: https://hbr.org/2023/06/managing-the-risks-of-generative-ai

Lesson 3: Fairness and Equity (Sept 4)

Reading List:

  • Chouldechova, A., Benavides-Prado, D., Fialko, O. & Vaithianathan, R. (2018). A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research. 81:134-148. Available from https://proceedings.mlr.press/v81/chouldechova18a.html
  • J.D. Zamfirescu-Pereira, Jerry Chen, Emily Wen, Allison Koenecke, Nikhil Garg, and Emma Pierson. 2022. Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘22). Association for Computing Machinery, New York, NY, USA, 799-813. https://doi.org/10.1145/3531146.3533145
  • Buijsman, Stefan. “Navigating fairness measures and trade-offs.” AI and Ethics (2023): 1-12. https://doi.org/10.1007/s43681-023-00318-0.

Lesson 4: Public Trust in AI (Sept 9)

Reading List:

  • “Building Trust in AI” IBM, https://www.nytimes.com/paidpost/ibm/building-trust-in-ai.html
  • “Trust in AI: A Five Country Study” KPMG, https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf
  • Shuai Ma, Ying Lei, Xinru Wang, Chengbo Zheng, Chuhan Shi, Ming Yin, and Xiaojuan Ma. 2023. Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ‘23). Association for Computing Machinery, New York, NY, USA, Article 759, 1-19. https://doi.org/10.1145/3544548.3581058

Lesson 5: AI and the Law (Sept 11)

Reading List:

  • Thomas Burri, “Machine Learning and the Law: Five Theses” https://perma.cc/C64Z-JJMD
  • David C. Vladeck. “Machines without Principals (sic): Liability Rules and Artificial Intelligence”. (Washington Law Review, 2014), https://perma.cc/EJ5M-YMCJ
  • Heather Knight. (2024). San Francisco Moves to Lead Fight Against Deepfake Nudes. The New York Times. 15 August 2024. https://www.nytimes.com/2024/08/15/us/deepfake-pornography-lawsuit-san-francisco.html

Lesson 6: How/Whether to do AI Regulation? (Sept 16)

Reading List:

  • Recommendations for Regulating AI, by Google (!), https://ai.google/static/documents/recommendations-for-regulating-ai.pdf
  • AI Executive Orders from President Biden and from Maryland Governor Wes Moore
    • https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
    • https://governor.maryland.gov/Lists/ExecutiveOrders/Attachments/31/EO%2001.01.2024.02%20Catalyzing%20the%20Responsible%20and%20Productive%20Use%20of%20Artificial%20Intelligence%20in%20Maryland%20State%20Government_Accessible.pdf
  • EU’s AI Act, High-Level Summary https://artificialintelligenceact.eu/high-level-summary/

Module 2. AI’s Influence on Information Diets

Lesson 7: What Are Recommendation Systems? (Sept 18)

Reading List:

  • Larry Hardesty. (2019). The history of Amazon’s recommendation algorithm: Collaborative filtering and beyond. November 22, 2019. https://www.amazon.science/the-history-of-amazons-recommendation-algorithm
  • Netflix Recommendations: Beyond the 5 Stars, Parts 1 and 2.
    • https://netflixtechblog.com/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429
    • https://netflixtechblog.com/netflix-recommendations-beyond-the-5-stars-part-2-d9b96aa399f5
  • Cristos Goodrow. (2021). On YouTube’s recommendation system. https://blog.youtube/inside-youtube/on-youtubes-recommendation-system/

Lesson 8: Rabbit Holes and YouTube (Sept 23)

Reading List:

  • Zeynep Tufekci. (2018). YouTube, the Great Radicalizer. March 10, 2018. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
  • Megan A. Brown, Jonathan Nagler, James Bisbee, Angela Lai, and Joshua A. Tucker. Echo chambers, rabbit holes, and ideological bias: How YouTube recommends content to real users. https://www.brookings.edu/articles/echo-chambers-rabbit-holes-and-ideological-bias-how-youtube-recommends-content-to-real-users/
  • Liu, N., Baum, M. A., Berinsky, A. J., Chaney, A. J., de Benedictis-Kessner, J., Guess, A., … & Stewart, B. M. (2023). Algorithmic recommendations have limited effects on polarization: A naturalistic experiment on YouTube. Working paper. https://dcknox.github.io/files/LiuEtAl_AlgoRecsLimitedPolarizationYouTube.pdf

Guest Speaker: Megan Brown, University of Michigan

Lesson 9: Aggressive Curation, TikTok (Sept 25)

Reading List:

  • Zeve Sanderson, Solomon Messing, and Joshua A. Tucker. (2024) Misunderstood mechanics: How AI, TikTok, and the liar’s dividend might affect the 2024 elections. January 22, 2024. https://www.brookings.edu/articles/misunderstood-mechanics-how-ai-tiktok-and-the-liars-dividend-might-affect-the-2024-elections/
  • Ruben den Boer, Lynn de Munnik. (2023). War from the rabbit hole: the media literacies landscape of TikTok during the Ukraine conflict. February 24, 2023. https://www.diggitmagazine.com/articles/war-rabbit-hole-media-literacies-landscape-tiktok-during-ukraine-conflict
  • Nadia Karizat, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 305 (October 2021), 44 pages. https://doi.org/10.1145/3476046

Lesson 10: AI and Social Media (Sept 30)

Reading List:

  • The Web Foundation. (2018). The Invisible Curation Of Content: Facebook’s News Feed and our Information Diets. April 2018. https://webfoundation.org/docs/2018/04/WF_InvisibleCurationContent_Screen_AW.pdf
  • Fouquaert, T., & Mechant, P. (2021). Making curation algorithms apparent: a case study of “Instawareness” as a means to heighten awareness and understanding of Instagram’s algorithm. Information, Communication & Society, 25(12), 1769-1789. https://doi.org/10.1080/1369118X.2021.1883707
  • Kim, K., & Moon, S.-I. (2021). When Algorithmic Transparency Failed: Controversies Over Algorithm-Driven Content Curation in the South Korean Digital Environment. American Behavioral Scientist, 65(6), 847-862. https://doi.org/10.1177/0002764221989783

Lesson 11: Impacts of Digital Advertising (Oct 2)

Reading List:

  • https://www.nytimes.com/2024/02/01/business/media/artificial-intelligence-product-placement.html
  • https://www.nytimes.com/2023/06/27/magazine/ai-ads-commercials.html
  • https://www.nytimes.com/2023/07/18/business/media/ai-advertising.html

Module 3. AI and Political Communication

Lesson 12: Political Advertising (Oct 9)

Reading List:

  • Political Advertising on Facebook During the 2022 Hungarian Parliamentary Elections. https://dq4n3btxmr8c9.cloudfront.net/files/fs3mhp/Political_Advertising_on_FB_HU2022.pdf
  • Fulgoni, Gian M., Andrew Lipsman, and Carol Davidsen. “The power of political advertising: Lessons for practitioners: How data analytics, social media, and creative strategies shape US presidential election campaigns.” Journal of Advertising Research 56.3 (2016): 239-244. PDF available here.
  • Andrew Prokop. (2018). Cambridge Analytica shutting down: the firm’s many scandals, explained. May 2, 2018. https://www.vox.com/policy-and-politics/2018/3/21/17141428/cambridge-analytica-trump-russia-mueller

Lesson 13: How AI Influences Politicians’ Discussion (Oct 14)

Reading List:

  • S. Rathje, J. J. V. Bavel, and S. van der Linden. Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences, 118(26):e2024292118, 2021. https://www.pnas.org/doi/full/10.1073/pnas.2024292118
  • Bui, T. H. (2016). The Influence of Social Media in Vietnam’s Elite Politics. Journal of Current Southeast Asian Affairs, 35(2), 89-111. https://doi.org/10.1177/186810341603500204
  • S. Hong. Who benefits from Twitter? Social media and political competition in the U.S. House of Representatives. Government Information Quarterly, 30(4):464-472, 2013. https://www.sciencedirect.com/science/article/pii/S0740624X13000646

Lesson 14: Machine Bias & Algorithmic Accountability (Oct 16)

Reading List:

  • Huszár, F. et al. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences 119, e2025334119 (2022). https://www.pnas.org/doi/10.1073/pnas.2025334119
  • Sandra González-Bailón, Valeria d’Andrea, Deen Freelon, Manlio De Domenico, The advantage of the right in social media news sharing, PNAS Nexus, Volume 1, Issue 3, July 2022, pgac137, https://doi.org/10.1093/pnasnexus/pgac137
  • Hazem Ibrahim, Nouar AlDahoul, Sangjin Lee, Talal Rahwan, Yasir Zaki, YouTube’s recommendation algorithm is left-leaning in the United States, PNAS Nexus, Volume 2, Issue 8, August 2023, pgad264, https://doi.org/10.1093/pnasnexus/pgad264

Lesson 15: Trust and DeepFakes (Oct 21)

Reading List:

  • Todd C. Helmus. (2022). Artificial Intelligence, Deepfakes, and Disinformation: A Primer. RAND Corp. Technical Report. 6 July 2022. https://www.rand.org/pubs/perspectives/PEA1043-1.html
  • https://www.rand.org/pubs/commentary/2023/12/deepfakes-arent-the-disinformation-threat-theyre-made.html
  • Walker, C. P., Schiff, D. S., & Schiff, K. J. (2024). Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23053-23058. https://doi.org/10.1609/aaai.v38i21.30349

Module 4. AI and Political Participation

Lesson 16: Informedness, Mobilization, and other Impacts (Oct 23)

Reading List:

  • Andrew M. Guess et al. , How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381, 398-404 (2023). DOI:10.1126/science.abp9364 https://www.science.org/doi/10.1126/science.abp9364
  • Casas, A. & Williams, N. W. Images that Matter: Online Protests and the Mobilizing Role of Pictures. Polit Res Quart 72, 360-375 (2019). https://journals.sagepub.com/doi/abs/10.1177/1065912918786805
  • H. Allcott, M. Gentzkow, W. Mason, A. Wilkins, P. Barberá, T. Brown, J. C. Cisneros, A. Crespo-Tenorio, D. Dimmery, D. Freelon, S. González-Bailón, A. M. Guess, Y. M. Kim, D. Lazer, N. Malhotra, D. Moehler, S. Nair-Desai, H. N. E. Barj, B. Nyhan, A. C. P. de Queiroz, J. Pan, J. Settle, E. Thorson, R. Tromble, C. V. Rivera, B. Wittenbrink, M. Wojcieszak, S. Zahedian, A. Franco, C. K. de Jonge, N. J. Stroud, and J. A. Tucker. The effects of Facebook and Instagram on the 2020 election: A deactivation experiment. Proceedings of the National Academy of Sciences, 121(21):e2321584121, 2024. https://www.pnas.org/doi/10.1073/pnas.2321584121

Lesson 17: Polarization (Oct 28)

Reading List:

  • Asimovic, N., Nagler, J., Bonneau, R. & Tucker, J. A. Testing the effects of Facebook usage in an ethnically polarized setting. Proc. Natl. Acad. Sci. 118, e2022819118 (2021). https://www.pnas.org/doi/10.1073/pnas.2022819118
  • Törnberg, P. How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences 119, e2207159119 (2022). https://www.pnas.org/doi/10.1073/pnas.2207159119
  • Nyhan, B., Settle, J., Thorson, E. et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature 620, 137-144 (2023). https://doi.org/10.1038/s41586-023-06297-w

Module 5. AI and Regulation

Lesson 18: Economic Concerns (Nov 4)

Reading List:

  • Acemoglu, Daron, and Pascual Restrepo. (2016). “Robots and Jobs: Evidence from US Labor Markets.” https://www.journals.uchicago.edu/doi/abs/10.1086/705716
  • Orchard, Tim, and Leszek Tasiemski. “The rise of generative AI and possible effects on the economy.” Economics and Business Review 9.2 (2023): 9-26. https://intapi.sciendo.com/pdf/10.18559/ebr.2023.2.732
  • David Rotman, “How technology is destroying jobs”, Technology Review, http://www.shellpoint.info/InquiringMinds/uploads/Archive/uploads/20130802_How_Technology_is_Destroying_Jobs.pdf

Lesson 19: Economic Upsides (Nov 11)

Reading List:

  • Erik Brynjolfsson and Andrew McAfee, “Human Work in the Robotic Future: Policy for the Age of Automation,” Foreign Affairs, July/August 2016. https://www.foreignaffairs.com/articles/2016-06-13/human-work-robotic-future (PDF)
  • Mike Thomas, “Robots and AI Taking over Jobs: What to Know about the Future of Jobs,” https://builtin.com/artificial-intelligence/ai-replacing-jobs-creating-jobs
  • Philip Trammell, Anton Korinek. Economic Growth under Transformative AI. National Bureau of Economic Research. October 2023. https://www.nber.org/system/files/working_papers/w31815/w31815.pdf

Lesson 20: Rights for AI and Robots? (Nov 13)

Reading List:

  • Gordon, JS., Pasvenskiene, A. Human rights for robots? A literature review. AI Ethics 1, 579-591 (2021). https://doi.org/10.1007/s43681-021-00050-7
  • Bennett, Belinda, and Angela Daly. “Recognising rights for robots: Can we? Will we? Should we?.” Law, Innovation and Technology 12.1 (2020): 60-80. https://www.dropbox.com/scl/fi/w7qpruh2mzuwass5hksu6/12LawInnovationTech60.pdf?rlkey=opisg82naavjf0h728vkkg4a6&dl=0
  • Gabriel Lima, Changyeon Kim, Seungho Ryu, Chihyung Jeon, and Meeyoung Cha. 2020. Collecting the Public Perception of AI and Robot Rights. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 135 (October 2020), 24 pages. https://doi.org/10.1145/3415206

Lesson 21: Cyber Security, AI, and the Policy Process (Nov 18)

Reading List:

  • The 2023 National Cybersecurity Strategy, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf
  • Herb Lin, Governance of Information Technology and Cyber Weapons, Chapter 3, https://www.amacad.org/sites/default/files/publication/downloads/GNF_Dual-Use-Technology.pdf
  • Elsa Kania. (2020). “AI Weapons” in China’s Military Innovation. Brookings Institute. April 2020. https://www.brookings.edu/wp-content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf

Lesson 22: AI and Wearables (Nov 20)

Reading List:

  • Anna Sui et al., “Ethical Considerations for the Use of Consumer Wearables in Health Research.” Digital Health. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9900157/
  • Alessandra Angellucci et al., “The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.” https://link.springer.com/article/10.1007/s00146-022-01562-4
  • Consumer consent, privacy, and ethics of wearables, https://www.ignitec.com/insights/consumer-consent-privacy-and-ethics-of-wearables/

Lesson 23: The State of Regulation (Nov 25)

Reading List:

  • https://www.nytimes.com/2024/06/10/technology/california-ai-regulation.html
  • https://www.washingtonpost.com/technology/2024/06/06/ai-election-2024-us-misinformation-regulation/
  • https://www.washingtonpost.com/technology/2024/05/15/congress-ai-road-map-regulation-schumer

Module 6. AI, International Relations, and Military Uses of AI

Lesson 24: The Global Politics of AI (Dec 2)

Reading List:

  • Paul Scharre, “Debunking the AI Arms Race Theory,” Texas National Security Review, Summer 2021. https://tnsr.org/2021/06/debunking-the-ai-arms-race-theory/
  • Jessica Brandt, Sarah Kreps, Chris Meserole, Pavneet Singh, and Melanie Sisson, Succeeding in the AI competition with China: A strategy for action, Brookings Institution, September 2022, available at https://www.brookings.edu/research/succeeding-in-the-ai-competition-with-china-a-strategy-for-action/
  • Pavluk, Joshua & August Cole (2016). “From Strategy to Execution: Accelerating the Third Offset.” http://warontherocks.com/2016/06/from-strategy-to-execution-accelerating-the-third-offset/

Lesson 25: Militarization of AI (Dec 4)

Reading List:

  • Markoff, John (2016). “Pentagon Turns to Silicon Valley for Edge in Artificial Intelligence.” New York Times. http://www.nytimes.com/2016/05/12/technology/artificial-intelligence-as-the-pentagons-latest-weapon.html
  • Kolton, Michael (2016). “The Inevitable Militarization of Artificial Intelligence.” The Cyber Defense Review. http://www.cyberdefensereview.org/2016/02/08/the-inevitable-militarization-of-artificial-intelligence/
  • Sujai Shivakumar, Charles Wessner. (2022). Semiconductors and National Defense: What Are the Stakes? Center for Strategic and International Studies. 8 June 2022. https://www.csis.org/analysis/semiconductors-and-national-defense-what-are-stakes

Grade Distribution

Grades for this class are broken down as follows:

  • In-Class Presentations and Panels: Students will give around five in-class presentations, each outlining one of the assigned readings and highlighting what they feel are its most important aspects. Students will then sit on a panel with the other students presenting that day and answer questions from the instructor and the student audience. – 30%
  • Debates: Students will engage in critical thinking and effective communication by debating a controversial topic. These debates will happen twice during the semester. These exercises will help students learn to construct coherent arguments, respond to opposing viewpoints, and understand the perspectives of various stakeholders. – 30%
  • Final Project: Students will develop and demonstrate their abilities to apply course concepts to contemporary issues by writing a persuasive op-ed piece. This project aims to enhance critical thinking, argumentation, and writing skills, while encouraging students to engage with the public discourse on relevant topics. – 30%
  • Class Participation: Asking questions, participating in discussion – 10%

Letter Grade Cutoffs

  • A+ 97-100*, A 93-96.99, A- 90-92.99
  • B+ 87-89.99, B 83-86.99, B- 80-82.99
  • C+ 77-79.99, C 73-76.99, C- 70-72.99
  • D+ 67-69.99, D 63-66.99, D- 60-62.99

Note: To receive an A+ you must have demonstrated significant contributions to the class in addition to achieving this numeric grade. We reserve the right to curve grades upward (but will not curve grades downward).

Policy on Generative AI

The use of generative AI tools such as ChatGPT is allowed for all assignments in this class. However, a central goal of the class is to help you become independent and critical thinkers, so we discourage extensive use of generative AI tools as a substitute for developing your own opinions and ideas. If you do use AI-generated content in your assignments, you must clearly indicate what work is yours and what part is AI-generated through proper attribution. We also ask that you provide a short one-paragraph summary at the end of the assignment describing how you used AI tools. Please consult the APA Style guidance on how to cite AI tools. Failure to do so will be considered plagiarism under UMD’s Academic Integrity policies.

Syllabus Change Policy

Once the semester begins, this syllabus will be revised infrequently; any revisions will be announced, and an updated course schedule will be posted to ELMS. The instructor reserves the right to make changes to the course’s schedule, evaluation criteria, policies, etc. through announcements in class and on ELMS, so please check ELMS regularly. Students should email the instructor with any discrepancies or questions.

Campus Policies

It is our shared responsibility to know and abide by the University of Maryland’s policies that relate to all courses, which include topics like:

  • Academic integrity
  • Student and instructor conduct
  • Accessibility and accommodations
  • Attendance and excused absences
  • Grades and appeals
  • Copyright and intellectual property

Please visit go.umd.edu/ug-policy for the Office of Undergraduate Studies’ full list of campus-wide policies and follow up with me if you have questions.

Additional Accommodation Policy

I understand the difficulty and additional constraints you may be facing during this time. I am willing to work with you to discuss possible accommodations and alternative arrangements. Please do not hesitate to contact me when needed.

Accessibility and Learning Support

Students with disabilities should inform me of their needs at the beginning of the semester. Please also contact the Accessibility and Disability Support Office (http://www.counseling.umd.edu/ADS/). ADS will make arrangements with the student and me to determine and implement appropriate academic accommodations. Inclusion is one of the iSchool’s core values, and we have attempted to make all materials and assignments accessible to people with varying abilities. However, if there is something else I can do to make the class more accessible, please schedule a time to come talk to me. These improvements will benefit not only you but also future students!

Get Some Help!

Taking personal responsibility for your own learning means acknowledging when your performance does not match your goals and doing something about it. I hope you will come talk to me so that I can help you find the right approach to success in this course, and I encourage you to visit tutoring.umd.edu to learn more about the wide range of campus resources available to you.

In particular, everyone can use some help in sharpening their communication skills (and improving their grade) by visiting ter.ps/writing and scheduling an appointment with the campus Writing Center.

You should also know there is a wide range of resources to support you with whatever you might need (see go.umd.edu/assistance). If you just need someone to talk to, visit counseling.umd.edu or one of the many other resources on campus. Most services are free because you have already paid for them, and everyone needs help… all you have to do is be brave enough to ask for it.