Abstracts – 2020 4th UPCE Conference
Psychometric and neurocognitive perspective on measuring creativity
Mathias Benedek, University of Graz
The assessment of creativity using divergent thinking tests and similar measures has a long history, but there is considerable uncertainty in the field on how these tests should best be administered and scored. Recently, there have been increasing efforts to rigorously examine the psychometric quality of existing tests, and to explore new approaches capitalizing on advancements in cognitive modeling, computer testing, and natural language processing. Moreover, evidence from neuroscience offers new insights into the neurocognitive processes underlying creative thinking, which can inform more valid assessments. This presentation gives an overview of relevant findings and promising developments in the field of creativity assessment.
Creative problem solving and scientific thinking in primary schools: examples of quantitative and qualitative measures
Robin Willemsen, Radboud University Nijmegen
Various creativity researchers have expressed concerns regarding the quantitative approach to measuring creativity. While adopting different approaches to measuring creativity, such as qualitative and mixed-method approaches, could yield valuable information, most creativity research remains solely quantitative. We conducted a quantitative and a qualitative study to examine the relationship between creative problem solving and scientific thinking among 5th-grade pupils. The two main thinking modes of creative problem solving (i.e., divergent and convergent thinking) have been theoretically linked to scientific thinking. Divergent thinking could, for example, increase the quality of research questions, and convergent thinking could improve the conclusions drawn. As empirical evidence to support this notion is lacking, we first examined the relationship between divergent and convergent thinking and scientific thinking skills in a quantitative research design tailored towards scientific rigor (N = 230). Subsequently, to unpack when and how these creative problem-solving skills could be beneficial during the process of scientific inquiry, we conducted a qualitative study in which we examined how and when divergent and convergent thinking are used during a scientific inquiry task. Thirty-two pupils performed individual, semi-structured scientific inquiry tasks in which ample opportunities for creative thought were provided. Specifically, pupils went through the process of orienting on the research topic, producing a research question, conducting experiments, and drawing conclusions. Task duration ranged from 20 to 40 minutes, depending on the number of experiments the pupils wished to conduct. All children verbalized their thoughts and actions, while the interviewer asked supportive questions. We will analyze recordings of these conversations to determine when and how divergent and convergent thought can be beneficial during scientific inquiry.
These two studies will provide examples of the strengths and relative weaknesses of quantitative and qualitative measures of creativity, which will form the basis for this idea incubator talk.
A systematic microgenetic measure for novelty and appropriateness: how generic is it?
Elisa Kupers, University of Groningen
Many creativity theorists see creativity as a socially and materially embedded, dynamic process. Despite this, creativity is still measured as a latent, individual, stable trait in the vast majority of the empirical literature. Building upon socio-cultural and dynamic definitions of creativity, we designed an instrument to capture the two core characteristics of creativity (novelty and appropriateness) in microgenetic observational data of the creative process (Kupers, van Dijk, & Lehmann-Wermser, 2018). The idea behind this measure is that all actions and utterances of the participants are coded on ordinal scales of novelty and appropriateness. This allows for quantitative analyses of emergent creativity, such as sequential analyses and other time series techniques. We will discuss the validity of this measure when applying it to teacher-student interactions in two different educational contexts: a) kindergarten science education and b) upper primary school music education.
Expanding the creativity measurement landscape: idea evaluation, forecasting and idea selection
Simone M. Ritter, Radboud University
Creativity is not only defined but also measured in many different ways. Creativity research took flight when Guilford (1950) proposed that creativity can be studied using divergent thinking measures (e.g., think of uses for a common object such as a brick), and that creative performance can be operationalized with variables such as fluency, originality, and flexibility. Once a set of ideas has been created, the more convergent part of the creative process takes place—idea evaluation, idea forecasting, and idea selection. These processes, however, have mostly been neglected in creativity research. This is problematic, as creativity’s potential to change the world also relies on the ability to get creative ideas implemented. Research has only recently turned its focus to idea evaluation, idea forecasting, and idea selection. The increased attention has resulted in a handful of scattered tasks, while common ground on how to measure idea evaluation, forecasting, and idea selection performance is lacking. The beneficial effect of shared research paradigms has been observed in different research areas. For example, research on implicit attitudes has seen a tremendous increase since the development of the Implicit Association Test as the shared research paradigm. An interesting avenue to investigate is whether virtual reality can be used as a tool to measure idea evaluation, forecasting, and idea selection. In addition to measuring actual evaluation and selection performance, virtual reality allows the gathering of more indirect—but potentially highly relevant—information such as viewing time (i.e., duration of engagement with an idea) and viewing distance (i.e., indicating an approach or avoidance motivation towards an idea). The current presentation focuses on the importance of measuring idea evaluation, idea forecasting, and idea selection performance. Moreover, the potential role of virtual reality in this more convergent creativity measurement landscape is discussed.
Teaching Creative Reading with Electronic Literature
Inge van de Ven, Tilburg University
Contemporary technological developments have drastically increased the number of texts available through different media, leading to new reading habits. A shift has occurred from close reading (sustained, focused attention to the text) to hyperreading (non-linear, computer-assisted modes of reading such as skimming and scanning). Consequently, some fear, young people are losing the ability to concentrate. Meanwhile, print literacy skills obtained in school do not converge with digital reading tasks performed outside the classroom for leisure. The shift to online reading therefore entails a challenge for literature education: how to guide students in these new reading habits?
In this paper, I argue for the integration of close and hyperreading in literature education. I propose to come to such an integration through the construct of creative reading: a reading that oscillates between these two strategies. Today’s multiform textual abundance demands an active attitude, asking readers to switch between sources; it involves a high degree of creativity in combining different modes of reading and choosing the appropriate mode of reading within a given situation. I draw on two case studies of electronic, multimodal literature to create a training environment for creative literacy: Pry (Tender Claws, 2014) and Ice-Bound (Reid & Garbe, 2014). I show how such texts solicit new modes of reading as they mimic our multimodal engagement with information in contemporary media spaces; yet they do so within the confines of a single work that offers a controlled and controllable environment. They form an ideal, unexplored site for developing a synergetic approach to print and digital literacies in secondary school education. I reflect on possible uses of such multimodal works in literary education in secondary schools, and on possible methods to measure readers’ creativity in their engagements with these works.
Creating and measuring creativity tasks with Cognitive AI systems
Ana-Maria Olteteanu, Free University Berlin
Cognitive AI systems are interdisciplinary computational tools, used (i) to model a domain of cognition in cognitive psychology and (ii) to provide cognitive inspiration and new AI methods in artificial intelligence. Creativity is currently measured with a set of tasks, most of which are constructed, validated, and assessed manually. However, this approach suffers from several disadvantages: manually constructed sets of stimuli are generally hard to compare to other sets or to use in meta-analyses across different languages, and parameters related to stimulus difficulty are hard to assess in the context of small sets of test queries. The computational creation and assessment of creativity tasks is a potential solution to these difficulties. Computational approaches may also provide a more efficient way of controlling parameters and thus enable more sophisticated creativity assessment designs. In this talk, we argue that cognitive AI systems can be used for this purpose. Examples from our own work with the Remote Associates Test and the Alternative Uses Test are showcased. The place of computational methods in the creativity measurement landscape is discussed.
Why do we measure creativity?
Vlad Glaveanu, Webster University Geneva
There are many discussions today, as there have been for the past decades, as to what creativity ‘is’ and how it can best be evaluated. There are many pragmatic reasons why we engage in the scientific effort to define and measure creativity, including the early identification of talent and the need to assess creativity somehow in empirical research. But the question of ‘why we measure creativity’ is worth asking again, with a different focus, on epistemology and ethics. If we are to start from a sociocultural standpoint that phenomena like creativity are, at once, individual and cultural, then the ‘why’ question leads us to deeper interrogations concerning the nature of objectivity, the role of cultural relativity, the ethics of measurement and, more generally, the status of the knowledge we produce, by and large, in creativity research. Without exhausting these topics, I will offer here a few sociocultural considerations as to the ‘why’ of creativity measurement and, more widely, of measurement in psychology and in education.
Divergent Thinking in Four-Year-Old Children: An Analysis of Children’s Thinking Processes in Performing the Alternative Uses Task
Honghong Bai, Utrecht University
In this talk, I will present partial results of a study in which we examined the divergent thinking (DT) processes of four-year-old preschoolers, and propose an alternative way of measuring DT – assessing the thinking processes that underlie novel idea generation. In this study we encouraged children to report their thinking processes through interactive dialogues while performing a widely used DT task, the Alternative Uses Task (AUT). Content analysis of children’s utterances revealed that one DT process, which we coded as Performing mental operations on the stimulus, uniquely predicted children’s originality scores. In addition, a more detailed analysis of the process of idea generation during the AUT further revealed that both the originality of children’s responses and the probability of occurrence of the DT process Performing operations on the stimulus showed serial order effects, with similar patterns. In this regard, we propose to measure children’s DT by analyzing their thinking processes, using, for instance, the frequency of occurrence of processes that underlie original thinking as indicators.
Improving the Assessment of Creativity with Generalizability Theory
Mare van Hooijdonk, Utrecht University
The assessment of creativity is challenging. Both the creativity tasks that are used for assessment purposes and the raters who assess those tasks introduce variation in student scores that does not necessarily reflect actual differences in students’ creativity. This may bias decisions about students’ creativity levels. When researchers evaluate creativity assessment procedures, they often inspect tasks and raters separately. Within this flash session, I would like to demonstrate that the use of generalizability theory allows researchers to investigate creativity assessment procedures more comprehensively and in an integrated way. I will first provide a short introduction to Brennan’s generalizability theory. Then, we will give an example of the usefulness of this theory with an analysis of creative problem solving tasks. We highlight how alterations in the assessment procedure, such as changing the number of tasks or raters, may affect the quality of creativity test scores.
Measuring the Effectiveness of Creativity Training
Xiaojing Gu, Radboud University
Creativity is among the most desired work and life skills in the 21st century. To meet the high demand, interest in creativity training has increased substantially in recent years. But how do we know if a training program actually enhances an individual’s creative performance, or if some training programs work better than others? Measuring the change in indicators of creative performance is an important way to examine training effectiveness. However, this measurement has been considered problematic because identical tests are often used at pre- and post-measurement, or because the tests fail to reflect the training objectives. We addressed these limitations while conducting creativity training studies. In our research, we developed several short- and long-term training programs to foster creativity among children, students, and adults in classroom and online settings. The success of our training programs was measured using a quasi-experimental design and well-validated creativity tests focusing on several aspects of creative thinking skills (i.e., divergent thinking, convergent thinking). During this presentation, I will talk about the theoretical framework, the techniques employed, and notably the effectiveness of our training programs.
Measuring Mathematical Creativity
Meier, J. Burgstaller, S. E. Vogel, & R. H. Grabner, University of Graz
When thinking about creativity, one of the first things that comes to mind is the arts (e.g., music, painting, literature); creativity in scientific domains (e.g., mathematics) rarely comes to mind. Nonetheless, mathematical creativity has come to be considered an essential skill that should be encouraged from an early age. While there are many definitions of mathematical creativity, the most common ones (e.g., mathematical creativity is cognitive flexibility, which enables divergent production of mathematical solutions) are strongly connected to the measurement of mathematical creativity through divergent thinking tasks. In divergent thinking tasks, multiple solutions for one problem are generated. The generated solutions are judged on three parameters: fluency (how many different ideas were generated), flexibility (how many different types of ideas were generated), and originality (how rare/creative each idea is). To our knowledge, research on mathematical creativity has so far been conducted only with children and adolescents, showing that mathematical creativity correlates with mathematical abilities as well as with other cognitive abilities (e.g., general creativity, intelligence). Based on mathematical creativity tasks used with children, we developed a mathematical creativity test for adults and validated it in a sample of 103 Austrian students. Results regarding the reliability of the instrument as well as associations between mathematical creativity, mathematical abilities, intelligence, and general creativity will be presented. However, more research on mathematical creativity is needed to gain a deeper understanding of this construct, not only in children but also in adults.