Seminarium 2: AI models in psychology
The goal is to search and read the literature, present an AI model in psychology, and hand in a short paper. Below are the steps you need to follow.
1. Create work groups. Divide yourselves into groups of 2 to 3 students. List the names of the group members in the Course Sheet.
2. Choose an AI model that your group wants to present. The AI model should describe a psychological phenomenon and must be presented in one, or a few, articles. Do not choose a model or articles that another group has already chosen. Write the name of your group's model in the Course Sheet. At the end of this document there is a list of possible articles that you can choose from, or be inspired by, but you may select any relevant article from the literature.
3. Search the literature for related articles. Each student needs to find and read one additional article. Add a link to this article in the Course Sheet.
4. Prepare a presentation of your model. Create the presentation in Google Slides and share a link in the Course Sheet. The presentation should include:
a. Title page including: Title of the presentation and group members
b. An overview of what the model can do
c. Show or demonstrate a simulation of the model
d. A conceptual description of how the model can do what it does.
e. The inputs and the output of the model
f. Describe the psychological phenomena that the model can simulate
g. Describe limitations of the model (i.e., technical limitations or psychological phenomena that it cannot account for)
h. Applications of the model
i. Conclusion
5. Write a short paper on your presentation. The paper should be 3 to 5 pages long. Write the paper in Google Docs and share a link to the document in the Course Sheet. The paper should include the same headings as the presentation. The paper should be completed by noon the day before the presentation so that the opponent group has time to read it critically.
6. Prepare an opposition to another group's presentation. Group 1 will be the opponent of Group 2, Group 2 the opponent of Group 3, and so on. The last group will be the opponent of the first group.
7. Presentations and oppositions. Each group presents its work for 15 minutes. The opponent group discusses the presentation for 5 minutes. The rest of the audience poses questions to the group for 5 minutes.
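For the "show or demonstrate a simulation" point in the presentation, even a very small, self-written toy can work well as a demonstration. The sketch below is purely illustrative and is not taken from any article on the list: a single artificial neuron trained with the delta rule to learn the logical AND of two inputs, written in plain Python so it runs without any libraries.

```python
# Minimal sketch (illustrative only): one artificial neuron learns the
# logical AND of two binary inputs via the delta rule. A toy stand-in
# for the kind of model simulation the presentation asks for.

def step(x):
    """Threshold activation: the neuron fires (1) if net input exceeds 0."""
    return 1 if x > 0 else 0

def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights w1, w2 and bias b from (input, target) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = step(w1 * x1 + w2 * x2 + b)
            err = target - out        # delta rule: nudge weights toward target
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Logical AND: the neuron should fire only when both inputs are on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predictions = [step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in data]
print(predictions)  # -> [0, 0, 0, 1]
```

A demonstration like this also anchors the other headings: the inputs and output of the model are explicit, and the conceptual description reduces to how the error signal changes the weights over repeated trials.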
The aims of the seminar are:
- Reflect about the applications and implications of AI in psychological research
- Practice reading, summarizing and presenting scientific literature.
Things to think about
- Remember that the other groups will not have read what you have read. Therefore, explain things at a very basic level.
- It is your responsibility to make your presentation as clear as possible so that the rest of the class can participate and be included in the discussion about it.
- There is a limited amount of time. Therefore, use it wisely and focus on the most relevant things.
- The seminar is mandatory (presence and active participation).
- The seminar is not a test, but show up prepared!
- It is your responsibility to fill your 20-25 minutes with meaningful content!
- It is possible to fail the seminar.
- Stick to the subject (AI models and psychology).
- We expect this to stimulate the discussion between the students!
The articles may have complex methodologies. You are not expected to understand everything in every section of an article. Focus on what you find most interesting and bring your own ideas about the article. Start with an introduction to the theme and explain the main question of the article. Then focus on the most relevant sections and the aspects you found most interesting. You can include figures and other material in your presentation that you think will help the discussion.
Tentative literature
This is a tentative literature list. Students are expected to add relevant articles to the seminars.
Basic neural network models
Gurney, K. (1997). An introduction to neural networks. https://www.inf.ed.ac.uk/teaching/courses/nlu/assets/reading/Gurney_et_al.pdf
Georgevici, A.I., Terblanche, M. Neural networks and deep learning: a brief introduction. Intensive Care Med 45, 712–714 (2019). https://doi.org/10.1007/s00134-019-05537-w
Sikström, S. & Jönsson, F. (2005). A model for stochastic drift in memory strength to account for judgments of learning. Psychological Review, 112(4), 932-950.
Sikström, S. (2004). The variance reaction time model. Cognitive Psychology, 48 (4), 371-421.
Li, S.-C., Lindenberger, U., & Sikström, S. (2001). Aging cognition: from neuromodulation to representation to cognition. Trends in Cognitive Sciences, 5(11), 479-486.
Sikström, S., & Jaber, M. (2002). The power integration diffusion (PID) model for production breaks. Journal of Experimental Psychology: Applied, 8(2), 118-126.
Sikström, S. & Söderlund, G. (2007). Stimulus dependent dopamine release in ADHD. Psychological Review, Vol. 114, No. 4, 1047–1075.
Deep learning neural networks models
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature. doi:10.1038/nature14539
Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
"Google's AlphaGo AI wins three-match series against the world's best Go player". TechCrunch. 25 May 2017. Archived from the original on 17 June 2018. Retrieved 17 June 2018.
Kosinski, M., & Wang, Y. (2018). Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images. Journal of Personality and Social Psychology, 114(2), 246-257. See also comments at: https://docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U/edit
Botvinick, M., Ritter, S., Wang, J. X., Kurth-Nelson, Z., Blundell, C., & Hassabis, D. (2019). Reinforcement learning, fast and slow. Trends in cognitive sciences, 23(5), 408-422.
Kosinski M., Stillwell D., & Graepel T. (2013). Private traits and attributes are predictable from digital records of human behavior. PNAS, 110 (15) 5802-5805, https://doi.org/10.1073/pnas.1218772110
Biological neural network models
Lansner, A., Marklund, P., Sikström, S., Nilsson, L-G (2013). Reactivation in Working Memory: An Attractor Network Model of Free Recall. PLOS ONE Aug 30;8(8):e73776. doi: 10.1371/journal.pone.0073776. eCollection 2013.
Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. Neuronal Dynamics: From single neurons to networks and models of cognition. Available online: https://neuronaldynamics.epfl.ch/index.html
Hwu, T., & Krichmar, J. L. (2020). A neural model of schemas and memory encoding. Biological Cybernetics, 114(2), 169-186. https://doi.org/10.1007/s00422-019-00808-7
Spoerer, C. J., McClure, P., & Kriegeskorte, N. (2017). Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology, 8, 1551. https://doi.org/10.3389/fpsyg.2017.01551
Stoianov, I., & Zorzi, M. (2012). Emergence of a “visual number sense” in hierarchical generative models. Nature Neuroscience, 15(2), 194-196. https://www.nature.com/articles/nn.2996
Application of AI methods in MR and EEG data
Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K. M., Malave, V. L., Mason, R. A., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320(5880), 1191-1195. doi:10.1126/science.1152876
Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600), 453–458. doi:10.1038/nature17637
King, J. R., Wyart, V. (2021). The Human Brain Encodes a Chronicle of Visual Events at Each Instant of Time Through the Multiplexing of Traveling Waves. J Neurosci. Aug 25;41(34):7224-7233. doi: 10.1523/JNEUROSCI.2098-20.2021. Epub 2021 Apr 2. PMID: 33811150; PMCID: PMC8387111.
Li, X., Zhou, Y., Dvornek, N., Zhang, M., Gao, S., Zhuang, J., ... & Duncan, J. S. (2021). Braingnn: Interpretable brain graph neural network for fmri analysis. Medical Image Analysis, 74, 102233.
Sun, J., Cao, R., Zhou, M., Hussain, W., Wang, B., Xue, J., & Xiang, J. (2021). A hybrid deep neural network for classification of schizophrenia using EEG data. Scientific Reports, 11(1), 4706. https://doi.org/10.1038/s41598-021-83350-6
Rubin, T. N., Koyejo, O., Gorgolewski, K. J., Jones, M. N., Poldrack, R. A., & Yarkoni, T. (2017). Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition. PLoS computational biology, 13(10), e1005649.
Natural language processing
Kjell, O., Kjell, K., Garcia, D., & Sikström, S. (2019). Semantic Measures: Using Natural Language Processing to Measure, Differentiate and Describe Psychological Constructs. Psychological Methods.
Kosinski, M. (2023). Artificial Theory of Mind: Artificial intelligence can attribute mental states to others. https://arxiv.org/pdf/2302.02083
Charlesworth, T. E. S., Sanjeev, N., Hatzenbuehler, M. L., & Banaji, M. R. (2023, August 24). Identifying and Predicting Stereotype Change in Large Language Corpora: 72 Groups, 115 Years (1900-2015), and Four Text Sources. Journal of Personality and Social Psychology. Advance online publication. https://dx.doi.org/10.1037/pspa0000354
Sikström, S., & Garcia, D. (Eds.) (2020). Statistical semantics: Methods and applications. Springer International Publishing. https://doi.org/10.1007/978-3-030-37250-7
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211-240. doi:10.1037/0033-295x.104.2.211
AI and economics
Korinek, A. Language Models and Cognitive Automation for Economic Research. NBER Working Paper. http://www.nber.org/papers/w30957