Module 2 Readings and links
Alenichev, A., Kingori, P., & Grietens, K. P. (2023). Reflections before the storm: the AI reproduction of biased imagery in global health visuals. The Lancet Global Health, 11(10), e1496-e1498. https://doi.org/10.1016/S2214-109X(23)00329-7
Bashir, N., Donti, P., Cuff, J., Sroka, S., Ilic, M., Sze, V., Delimitrou, C., & Olivetti, E. (2024). The Climate and Sustainability Implications of Generative AI. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI Can Harm Learning. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445922
Dell’Acqua, F. (2021). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters [Working paper].
Edwards, B. (2023). Why AI detectors think the US Constitution was written by AI. Ars Technica.
Ekin, A. (2019). AI can help us fight climate change. But it has an energy problem, too. Horizon: The EU Research and Innovation Magazine.
Garcia, D. (2023). Influence of Facebook algorithms on political polarization tested. Nature. https://doi.org/10.1038/d41586-023-02325-x
Hinds, J., Williams, E. J., & Joinson, A. N. (2020). “It wouldn't happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, 102498. https://doi.org/10.1016/j.ijhcs.2020.102498
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779
OECD. (2022). Measuring the environmental impacts of artificial intelligence compute and applications. Paris: OECD Publishing. https://www.oecd-ilibrary.org/content/paper/7babf571-en
Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12(3), 148.
Rumsfeld, D. (2002). DoD news briefing – Secretary Rumsfeld and Gen. Myers [Transcript]. United States Department of Defense. Retrieved from https://archive.ph/20180320091111/http://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636
Rutinowski, J., Franke, S., Endendyk, J., Dormuth, I., Roidl, M., & Pauly, M. (2024). The Self-Perception and Political Biases of ChatGPT. Human Behavior and Emerging Technologies, 2024(1), 7115633. https://doi.org/10.1155/2024/7115633
Stokel-Walker, C. (2023, August 1). Turns out there’s another problem with AI – its environmental toll. The Guardian. Retrieved from https://www.theguardian.com/technology/2023/aug/01/techscape-environment-cost-ai-artificial-intelligence
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., … Waddington, L. (2023). Testing of Detection Tools for AI-Generated Text. arXiv preprint arXiv:2306.15666.