Research Article

Development of an exploratory creativity assessment scale

Pages 101-117 | Received 11 Jan 2023, Accepted 06 Feb 2024, Published online: 03 Mar 2024

ABSTRACT

Exploratory creativity (E-creativity) can be achieved by searching an area of conceptual space governed by specific rules. Existing studies on E-creativity mainly focus on using aspects of E-creativity to develop computational creativity tools, while E-creativity assessment scales have not been fully studied. To fill this gap, this study developed an E-creativity assessment scale through metrics identification and experimental determination studies. Eight indexes are proposed through a literature investigation; they relate to E-creativity attributes, prerequisites for the existence of E-creativity, relations between the exploratory process and creativity, and E-creativity results. An empirical case study is then applied to investigate the differences between nonprofessionals and professionals when using the developed scale. Overall, the results reveal that E-creativity is not simply related to the exploratory process and its concept space; it is also related to the relations between the novelty of the exploratory process and the concept space of E-creativity. The results reflect the role of E-creativity in a creative process. This research provides a further understanding of E-creativity, which can contribute to further developing its definition. The E-creativity assessment scale can also be used as a cue to evaluate machine-generated E-creativity.

1. Introduction

Significant efforts have been devoted to research on creativity. Creativity, as the core term of this research area, has been widely considered and defined by various researchers. For example, creativity is defined as the production of something new and valuable (Childs et al., 2022; Neihart, 1998; Nembhard & Lee, 2017; M. A. Runco & Jaeger, 2012); the process of producing novel and useful ideas (Childs et al., 2006; Sahu & Mukherjee, 2013); or the ability to produce something novel and valuable within a social context (Van Goch, 2018). Rhodes (1961) summarized definitions of creativity and proposed the 4P theory to frame such a complex and multifaceted phenomenon. Under the 4P theory, creativity is divided into four strands – person, process, press and products. The 4P theory has received much attention and sparked further research. For example, Park et al. (2016) extended the potential of the 4P theory in neuroscience studies; Walia (2019) reviewed the definition of creativity based on developments of the 4P theory; and Sternberg and Karami (2022) expanded consideration of creativity to include purpose, press, person, problem, process, product, propulsion and public.

Evaluation of creativity is an active area in creativity research. A variety of creativity assessment methods have been proposed, which generally require human raters to judge the quality of generated creativity (Beaty & Johnson, 2021), such as the Consensual Assessment Technique method (Amabile, 1982), Creative Product Semantic Scale (CPSS; Besemer, 1984; Besemer & O’Quin, 1986), Product Creativity Measurement Instrument (PCMI; Horn & Salvendy, 2006), and Creative Solution Diagnosis Scale (CSDS; Cropley & Cropley, 2005).

With the advent of artificial intelligence, Boden (1998) developed the concept of computational creativity to include combinational creativity, exploratory creativity (E-creativity) and transformational creativity (T-creativity). E-creativity, as one aspect of computational creativity, relates to the generation of novel ideas through the exploration of structured conceptual spaces. E-creativity has demonstrated significant potential for findings that can help stimulate design innovation of practical value (Maiden et al., 2010; Yin et al., 2022). At present, E-creativity research has mostly focused on developing tools based on E-creativity principles, such as ‘Ludoscope,’ ‘Black box,’ ‘Narrative Search,’ and ‘DeLeNoX,’ which were developed for textual narrative or creative design practice.

Although the theoretical framework of E-creativity has been discussed and developed, there is still a lack of research on E-creativity assessment scales. Contributions mostly focus on developing tools based on E-creativity to stimulate creativity. The E-creativity evaluation process is usually carried out to test algorithmic improvements against other models, leaving a gap in proposing standard norms for evaluating E-creativity based on its characteristics.

To summarize, E-creativity assessment scales have not been fully developed, which may hinder future research on the evaluation and automation of E-creativity. This study therefore aims to investigate E-creativity assessment indexes. Through metrics identification and an experiment, the study proposes an eight-index E-creativity assessment scale. A case study is then conducted to identify differences in the evaluation process between nonprofessional and professional raters. The research contributes to the development of E-creativity assessment: designers, researchers, and developers can assess the E-creativity of a product in a standardized way based on the proposed scale.

2. Literature review

This section reviews creativity and E-creativity, creativity assessment, and existing E-creativity research.

2.1. Creativity and E-creativity

Creativity is one of the fundamental human competencies. As a branch of computer science, artificial intelligence (AI) uses algorithms and machine learning technologies to replicate or simulate human intelligence (Marrone et al., 2022). Considering the importance of creativity in humans and the development of AI, computational creativity has been proposed with the goal of modeling, simulating or replicating creativity using a computer (Colton & Wiggins, 2012; Iqbal, 2022).

E-creativity relates to generating new ideas within a given conceptual space (Liapis et al., 2013), which can be achieved through constraint satisfaction techniques or evolutionary algorithms. E-creativity exists not only in computational creativity but also in human creativity (Boden, 1998; Guckelsberger et al., 2021). The definition of E-creativity has thus been updated so that it covers both human and computational E-creativity: searching an area of conceptual space governed by certain rules (Riedl & Young, 2006). The meaning of ‘rules’ is further clarified by Wiggins (2006) as reasoning to an inference within the search. Conceptual spaces are explained as the space within which new ideas are created and gathered (Rebelo et al., 2022). From this definition, E-creativity can be identified as a special form of creativity.
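To make this definition concrete, the sketch below (our illustration, not drawn from the cited works) frames E-creativity as the enumeration of admissible, previously unvisited points inside a rule-governed conceptual space; the feature axes, the admissibility rule, and all identifiers are hypothetical.

```python
# Minimal sketch: E-creativity as search within a rule-governed conceptual space.
# The space, its features, and the rule below are illustrative assumptions only.
from itertools import product

# Hypothetical conceptual space: each axis is a design feature.
SPACE = {
    "screen_size_in": [4.7, 5.5, 6.1, 6.7],
    "material": ["glass", "aluminium", "titanium"],
    "form": ["slab", "foldable", "rollable"],
}

def obeys_rules(concept):
    """Rule governing the space: e.g., rollable forms require a large screen."""
    return not (concept["form"] == "rollable" and concept["screen_size_in"] < 6.0)

def explore(space, visited):
    """Exploratory creativity: visit admissible, previously unvisited concepts."""
    keys = list(space)
    for values in product(*space.values()):
        concept = dict(zip(keys, values))
        if values not in visited and obeys_rules(concept):
            visited.add(values)
            yield concept  # a novel concept found *inside* the space

visited = set()
for concept in explore(SPACE, visited):
    print(concept)
# Transformational creativity, by contrast, would alter SPACE or obeys_rules itself.
```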

2.2. Creativity assessment

With awareness of individual differences, creativity assessment measures have received considerable attention. Without an effective scale for measuring creativity, it may be difficult to establish the validity of a particular finding. Self-reporting is an essential method for assessing personal creativity: by summarizing the personality characteristics of creative people, psychometric inventory tools for assessing personal creativity have been developed, such as the Epstein Creativity Competencies Inventory for Individuals (ECCI-i) (Epstein et al., 2008). Press creativity assessment aims to identify how the environment affects creative performance; factors recognized as influences were identified and then verified in instruments such as the Swedish Creative Climate Questionnaire (CCQ) (Ekvall, 1996). In product creativity assessment, there are two principal approaches. Some researchers hold that creativity can be assessed subjectively by appropriate judges; on this basis, the Consensual Assessment Technique (CAT) (Amabile, 1982) requires experts to independently rate generated ideas, artifacts or other forms of creative-thinking products. Other researchers have proposed using a set of metrics such as novelty, value, appropriateness, and infrequency to evaluate product creativity (Yin et al., 2021), as in the Creative Product Semantic Scale (CPSS) (Besemer, 1984; Besemer & O’Quin, 1986). Process creativity is mainly assessed with divergent thinking (DT) tests, such as the Torrance Tests of Creative Thinking (TTCT) (Torrance, 1966). Based on DT, inventory tools have also been developed to assess process creativity, such as the Runco Ideational Behaviour Scale (RIBS) (M. Runco et al., 2001) and Cognitive Processes Associated with Creativity (CPAC) (Miller, 2014). Some researchers have also proposed using the attributes of creativity to assess it; for example, considering that novelty is an important attribute of creativity, some have proposed using novelty levels to represent creativity levels.

These creativity-oriented assessment methods indicate the importance of measuring creativity, which naturally draws researchers’ attention to E-creativity measurement. They can also serve as cues for discovering the indexes of an E-creativity assessment scale.

2.3. Existing E-creativity assessment methods

Existing research has not fully addressed E-creativity assessment methods. Most studies aim to develop creativity stimulation tools based on E-creativity knowledge; some of these studies include a component intended to verify whether the E-creativity-based tools can stimulate creativity.

Dormans and Leijnen (2013) sought to verify whether their E-creativity-based tool (Ludoscope) worked. They conducted two studies. In the first, Ludoscope was asked to generate game content step by step based on different grammars. The second asked Ludoscope to generate lock-and-key mission structures; the resolution step then asked Ludoscope to generate a structure in which each key is associated with at least one lock. Whether the resolution solutions were more novel was used to represent the performance of E-creativity. Liapis et al. (2013) sought to verify whether their E-creativity-based tool ‘DeLeNoX’ worked. The study asked DeLeNoX to perform an iteratively transformed exploration of spaceship designs; the novelty and diversity of the iterative results were ranked to identify whether the E-creativity tool worked. Jennings et al. (2011) assessed whether a technology based on blind variation, selective retention, and principles of E-creativity could generate landscapes. The study asked the technology to generate 25 images and asked 40 participants to assess the quality of the images against six criteria: ‘bright, aesthetic, artistic, captivating, convoluted, and harmonious.’

2.4. Research gap and aims

Although there have been studies assessing E-creativity, most aim to assess the effectiveness of the developed tools rather than E-creativity itself. Human factors are also ignored in these assessments. In addition, although various creativity assessment methods have been proposed, they cannot be applied to E-creativity assessment directly. According to M. A. Boden (1998), E-creativity is a special form of creativity as well as an indication of capability; general creativity assessment criteria therefore cannot report the characteristics of E-creativity accurately. These limitations may obstruct the development of E-creativity research. To fill these gaps, this study attempts to develop an E-creativity assessment scale. The results are expected to serve as a basis for professional and nonprofessional people to assess E-creativity and to assist researchers in further understanding E-creativity.

The scale-based approach was chosen for the following reasons. Since there are no existing E-creativity assessment examples to learn from, inspiration was sought from creativity assessment methods. From the review in Section 2.2, it can be seen that the scale-based approach is the core creativity assessment approach (e.g., ECCI-i, CCQ, CPSS, TTCT, and CPAC); the scale was therefore selected as the assessment frame. In addition, the scale-based approach contributes to assessment in two ways. On the one hand, it allows researchers to assess E-creativity quantitatively (Forgeard, 2022), enabling researchers and practitioners to analyze E-creativity levels, conduct statistical analyses and draw meaningful conclusions from the data. On the other hand, it provides standardization for how to assess E-creativity (Jarosewich et al., 2002), so that different raters apply consistent criteria. The assessment results can thus be more reliable, and the subjective effects of raters can be reduced.

3. Exploratory creativity assessment scale development

Figure 1 shows an overview of the development process, which involves two phases: metrics identification and experimental determination, described in the following sections.

Figure 1. The development phases of E-creativity assessment scale.

3.1. Phase I: metrics identification

Metrics identification has two steps: dimension development and instrument structure build-up. Dimension development identifies the potential dimensions of the E-creativity assessment scale; this is followed by detection of the potential indexes, from which the initial instrument structure can be developed.

3.1.1. Dimension development

The original dimensions of E-creativity can be extracted from its definitions (Horn & Salvendy, 2006). Specifically, we searched E-creativity-related papers in Google Scholar using the keywords ‘E-creativity’ and ‘exploratory creativity’ to identify definitions of E-creativity. Among the retrieved definitions, repeated and paraphrased ones were removed from our list. Eventually, seven of the most relevant and representative definitions were selected (Table 1). Six dimensions – rules, feature, structure, concept space, exploratory process, and results – were then summarized from these definitions.

Table 1. The six dimensions extracted from the definitions in the literature.

Here, we use M. A. Boden’s (2009, p. 25) definition as an example to explain how the dimensions were identified and extracted. Boden defines E-creativity as follows: ‘The person moves through the space, exploring it to find out what’s there (including previously unvisited locations) and, in the most interesting cases, to discover both the potential and the limits of the space in question.’ This definition can be divided into ‘the person moves through the space,’ ‘exploring it to find out what’s there (including previously unvisited locations),’ and ‘discover both the potential and the limits of the space in question.’ Based on the content, ‘the person moves through the space’ is summarized as ‘Conceptual space,’ while ‘exploring it to find out what’s there (including previously unvisited locations)’ and ‘discover both the potential and the limits of the space in question’ are summarized as ‘Exploratory process.’ Boden’s definition therefore includes two dimensions: ‘Conceptual space’ and ‘Exploratory process.’

Rules refer to the requirements governing where E-creativity changes concepts, such as a change of phone size; they indicate where the concepts are expected to change and provide a guideline on what the exploratory process should focus on (Riedl & Young, 2006). Feature is an attribute of a concept that can be detected through the five senses, such as color and shape. Even when focused on the same rule, features may differ; in other words, features detail the changes among the generated concepts, and changes in features can indirectly report what has happened in an exploratory process (Karimi et al., 2020). Structure refers to the composition of the generated concepts; it indicates how the exploratory process builds up relations among the features (Karimi et al., 2020). Concept spaces are structured styles of thought: although many possible thoughts exist within a given conceptual space, only some of them may actually have been thought (M. A. Boden, 2004; Wiggins, 2006). This is the boundary of E-creativity; if concepts exceed this space, creativity changes from E-creativity to T-creativity. Exploratory process refers to reasoning to an inference within the search (Wiggins, 2006) and is the core of E-creativity. Results refer to the E-creativity outcomes. E-creativity needs to be displayed (Hung & Choy, 2013) so that external observers can assess it; the E-creativity results are therefore essential.

The six dimensions can also be explained through the information processing involved in E-creativity assessment (Horn & Salvendy, 2006). When raters are asked to assess E-creativity, the final E-creativity results and the design points are provided to them (Lu & Luh, 2012). Raters first need to understand the exploratory process behind the E-creativity and then assess it (Weiss & Wilhelm, 2020); this understanding covers the rules, features, structure, and concept spaces.

3.1.2. Instrument structure

Based on the six dimensions, we can further build up the instrument for developing the E-creativity assessment scale. Specifically, we searched for the keywords of the six dimensions (‘rule(s),’ ‘feature,’ ‘structure,’ ‘concept space,’ ‘exploratory process,’ and ‘result(s)’) within the papers retrieved in Section 3.1, and then transformed the resulting statements into pairs of antonyms. For example, Hung and Choy (2013) propose an E-creativity definition, so we assumed their paper may include indexes that can be used to assess E-creativity. Searching it with the keyword ‘rules’ returned the statement that ‘Creativity Support Tools (a kind of tools which can be developed based on E-creativity) can be governed by the same set of rules’ (Hung & Choy, 2013, p. 3). This may indicate that one index of E-creativity is ‘governed by the same set of rules.’ Since the study intended to use antonyms to represent the assessment indexes, the expression was transformed into ‘Following the same rules leads to the same results – Following the same rules leads to different results.’

Forty-two related statements were summarized using this method and transformed into forty-two pairs of antonyms. All 42 pairs and their source statements are listed in Appendix 1, and examples of how the pairs were summarized from the statements are given in Table 2. Among the 42 pairs, the number per dimension is: seven in ‘rules,’ five in ‘feature,’ five in ‘structure,’ seven in ‘conceptual space,’ 13 in ‘exploratory process,’ and five in ‘results.’

Table 2. The examples of the initial E-creativity assessment scale. Each dimension takes one indexed pair as an example.

Notably, we extracted antonyms from the statements to reduce users’ bias in understanding the statements and to make the indexes easier to understand (Horn & Salvendy, 2006). This approach has been adopted by various creativity assessment methods that use antonyms as assessment indexes, such as the Creative Product Semantic Scale (Besemer & O’Quin, 1986) and the Product Creativity Measurement Instrument (PCMI; Horn & Salvendy, 2006).

3.2. Phase II: experimental determination

Two studies were conducted to determine the effective indexes. An expert evaluation was first applied to judge all the antonym pairs from a professional perspective. Cases were then collected to verify the effectiveness of the selected pairs in a larger-scale study, in which exploratory factor analysis (EFA) was applied to the results (Groeneveld et al., 2022; Hazeri, 2019).

3.2.1. Experts evaluation

Four experts (aged 26–30; two males, two females) who are professionals in design and creativity were invited to the evaluation. The inclusion criteria were: (i) the experts should have at least five years’ experience in the design field (Kim et al., 2019); (ii) the experts should be knowledgeable about computational creativity; and (iii) the experts should have experience in using creativity assessment scales. Experts needed to meet all three criteria.

Four experts were selected because this task aims to detect whether the proposed criteria can serve as E-creativity assessment criteria at an expert level. Since experts are not an easily accessible group, it is hard to recruit a large number of participants for such an evaluation. The exact number follows Miller (2014), who developed a creativity assessment scale and recruited two experts for the expert evaluation. Recognizing the subjectivity risk of such a small number, we recruited four participants. Nevertheless, the limited number of experts may introduce subjective biases and potentially diminish the study’s reliability.

To ensure all experts understood what E-creativity is, the definition of E-creativity was given and explained: E-creativity is a kind of computational creativity that can generate novel ideas through the exploration of structured conceptual spaces. The experts were then asked to assess the forty-two pairs of antonyms on a 5-point Likert scale, reporting whether they thought each pair could represent E-creativity (a score of one indicates the pair cannot represent E-creativity at all; a score of five indicates complete representation). The evaluation lasted about ten minutes per expert. The results for the forty-two pairs are listed in Appendix 2, ranging from 1.75 to 4.75 (SD = 0.635). Cronbach’s alpha is 0.903, which indicates an excellent level of internal consistency.

An average score above three (exclusive) was set as the threshold for identifying whether an antonym pair can be used to assess E-creativity. This threshold is based on the 5-point Likert scale, on which a score of three represents a moderate, neutral stance with no clear tendency; a score of less than three indicates a negative attitude toward whether the pair can be used to assess E-creativity. After filtering, 28 valid pairs remained, marked with an asterisk in Appendix 2.
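As a minimal sketch of this filtering step (the data layout and the ratings are assumed placeholders, not the authors’ code or data), Cronbach’s alpha and the mean-score threshold can be computed as follows:

```python
import numpy as np

def cronbach_alpha(ratings):
    """ratings: cases x items matrix (here, 42 pairs x 4 experts)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                           # number of "items" (experts)
    item_vars = ratings.var(axis=0, ddof=1).sum()  # sum of per-expert variances
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(42, 4))         # placeholder for real ratings
means = ratings.mean(axis=1)                       # mean score per antonym pair
kept = np.where(means > 3)[0]                      # "above three (exclusive)"
print(f"alpha = {cronbach_alpha(ratings):.3f}, {kept.size} pairs retained")
```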

3.2.2. Design case selection

After the twenty-eight pairs were selected, they were further tested on design cases. Ten suitable E-creativity design cases were first identified for the scale development study using pre-defined criteria: (i) the design point is clear and detailed; (ii) the idea development process can be summarized in a clear, brief, and understandable way; and (iii) the idea development is not driven by mechanical or technical changes. Each design case was described in terms of design area, design point, exploratory process, and a visual design case. The ten examples were collected from different product categories. One of the design cases is shown in Figure 2; all ten are shown in Appendix 3.

Figure 2. A design case for E-creativity.

Nine participants (aged 20–25; three males, six females) with a design background were recruited. They were asked to evaluate whether each design case could serve as a case for E-creativity scale development. The ten design cases were displayed in a random order; participants first scrutinized each case individually and then used a 7-point Likert scale to report whether they thought the case was good enough for E-creativity scale development (one means absolutely unsuitable; seven means a perfect design case). The study lasted about ten minutes per participant. The results for the ten design cases are shown in Table 3; the average scores ranged from 3.56 to 5.56. Cronbach’s alpha is 0.784, indicating an acceptable level of internal consistency. As a result, the top five ranked design cases were selected for the following study.

Table 3. The mean value and SD results of ten design cases.

Notably, existing research includes several design cases. However, the design cases of this study are based on product and graphic design, so the cases from existing research may not be a good fit. For example, the example from Dormans and Leijnen (2013) is based on game content; that of Liapis et al. (2013) on shapes; and that of Jennings et al. (2011) on landscapes. Although games, shapes, and landscapes can be categorized as design cases, our study specifically concentrated on product and graphic design. Additionally, this study tried to select practical designs as examples, while the literature cases were all based on the authors’ own developed tools. For these two reasons, design cases from the literature were not used as references in our study.

3.2.3. Scale development based on design cases

A case-based scale development study was conducted to further test whether the filtered 28 pairs are suitable as E-creativity assessment indexes for the five design cases. A total of 116 participants (57 male, 59 female; aged 18–26) who are nonprofessionals in design and creativity were recruited. They were asked to assess whether the given antonyms could be used to express E-creativity in the given design cases. If they thought an antonym pair could express E-creativity in a given design case, they chose the more appropriate word of the pair and rated its expressiveness on a scale from one to three (one means weak expression, two means moderate expression, three means strong expression). If they thought the pair could not express E-creativity, they selected zero. There were five design cases, each with 28 pairs to be judged. The study lasted about 25 minutes per participant.

Notably, we recruited nonprofessional participants to further identify E-creativity assessment indexes because E-creativity is not exclusive to professionals. Instead, it is also present in the ideation processes of nonprofessionals. To make the E-creativity assessment scale more universally applicable, we included nonprofessionals in determining the ratings.

3.2.4. Results

In the case study, no respondent showed zero variation (SD = 0, i.e., answering exactly the same value for every question in the questionnaire), so the evaluations performed by all 116 raters were treated as valid. The skewness for the entire dataset ranged between −0.913 and 0.185 and the kurtosis between −1.462 and −0.179, indicating that the data are approximately normally distributed (Field, 2013).

Based on an independent samples t-test, the difference between female and male raters is not statistically significant (95% CI [−0.17658, 0.08175], t(114) = −0.727, p = 0.788). Inter-judge reliability was calculated using Cronbach’s coefficient alpha, and the consistency among the raters is shown in Table 4. The overall Cronbach’s coefficient alpha is 0.839, indicating a ‘good’ level of internal consistency.
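The screening and group-comparison steps described above can be sketched as follows; the column labels and data are placeholders, since the raw dataset is not reproduced in this section:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(0, 4, size=(116, 28)))  # 116 raters x 28 pairs
sex = rng.choice(["f", "m"], size=116)                 # hypothetical labels

# 1) Drop straight-lining respondents (zero variance across all answers).
valid = df.std(axis=1, ddof=1) > 0
df, sex = df[valid], sex[valid.to_numpy()]

# 2) Distribution checks of the kind reported in the text.
flat = df.to_numpy().ravel()
print("skew:", stats.skew(flat), "kurtosis:", stats.kurtosis(flat))

# 3) Independent-samples t-test on mean scores, female vs. male raters.
scores = df.mean(axis=1)
t, p = stats.ttest_ind(scores[sex == "f"], scores[sex == "m"])
print(f"t = {t:.3f}, p = {p:.3f}")
```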

Table 4. Inter-judge reliability among five design cases.

The results were analyzed with exploratory factor analysis (EFA). Prior to this, the factorability of the measurement variables was evaluated. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was 0.885, indicating meritorious sampling adequacy, well above the commonly recommended threshold of 0.6 (Hair et al., 2010). Bartlett’s test of sphericity was also significant (χ2(378) = 5723.463, p < 0.001).

After applying the EFA, four factors were suggested based on the scree plot, explaining 47.51% of the variance. An oblique rotation algorithm was employed to interpret the underlying structure of the four factors. All items with absolute loadings smaller than 0.3 were considered insignificant and removed (Horn & Salvendy, 2009). Items with absolute loadings of 0.3 or more on more than one component may relate to several components, making them poor choices as assessment indexes, and were also removed. In total, fifteen pairs were removed due to either low loadings or cross-loadings above the cutoff value of 0.3 (Cropley & Kaufman, 2012). In consequence, eight pairs of antonyms were identified as the scale items for E-creativity. The exact results of the four-factor model extracted from the EFA are shown in Table 5, in which the pairs are represented by index numbers; the correspondence between index numbers and pairs is given in Appendix 1.
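A sketch of this EFA pipeline using the open-source factor_analyzer package (a tooling assumption; the paper does not name its software) might look like the following, with `responses` standing in for the 116 x 28 rating matrix:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(0, 4, size=(116, 28)))  # placeholder data

# Factorability checks: KMO and Bartlett's test of sphericity.
chi2, p = calculate_bartlett_sphericity(responses)
_, kmo_total = calculate_kmo(responses)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.1f}, p = {p:.4f}")

# Four factors with an oblique (oblimin) rotation, as in the text.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_)

# Pruning: keep items loading above 0.3 on exactly one factor; this drops
# both low-loading items and cross-loading items in one pass.
strong = loadings.abs() > 0.3
keep = strong.sum(axis=1) == 1
print("retained items:", list(loadings.index[keep]))
```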

Table 5. The four-factor model extracted from the EFA analysis.

A confirmatory factor analysis was then conducted to check the validity of the four components suggested by the EFA. The results are shown in Figure 3. The covariance between Components 1 and 2 is 0.465 (<0.6), which suggests that Component 1 and Component 2 are independent.

Figure 3. Confirmatory factor analysis results of the four components.

For Component 1, the chi-square test is significant (χ2(65) = 318.254, p < 0.001). The ratio of chi-square to degrees of freedom (CMIN/DF) is 4.896, indicating a reasonable fit. The goodness-of-fit index (GFI) is 0.915, indicating an acceptable fit between the implied component and the observed data. The root mean square error of approximation (RMSEA) is 0.000, indicating a good fit between the implied covariance matrix and the observed data.

For Component 2, the chi-square test is significant (χ2(2) = 10.385, p < 0.001). The CMIN/DF is 5.192; the GFI is 0.991; the CFI is 0.878; and the NFI is 0.861. These results indicate that Component 2 fits at a reasonable level. The RMSEA is 0.085, indicating an acceptable fit between the implied covariance matrix and the observed data.
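For readers wishing to reproduce this kind of CFA check, the following sketch uses the semopy package (again a tooling assumption; the paper’s software is not stated), with an illustrative two-component measurement model and placeholder item names:

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
items = [f"item{i}" for i in range(1, 9)]
data = pd.DataFrame(rng.normal(size=(116, 8)), columns=items)  # placeholder data

# Hypothetical measurement model: item-to-component assignment is illustrative.
desc = """
Comp1 =~ item1 + item2 + item3 + item4 + item5
Comp2 =~ item6 + item7 + item8
"""
model = semopy.Model(desc)
model.fit(data)

stats = semopy.calc_stats(model)   # includes chi2, DoF, GFI, CFI, NFI, RMSEA
chi2 = stats["chi2"].iloc[0]
dof = stats["DoF"].iloc[0]
print(f"CMIN/DF = {chi2 / dof:.2f}")
print(stats[["GFI", "CFI", "NFI", "RMSEA"]])
```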

From the above analysis, the pairs whose covariance between component and index exceeds 0.6 were selected for the E-creativity assessment scale (Hoque et al., 2018). The EFA loadings of the eight indexes are all positive, indicating that the positive and negative sides of the eight indexes are appropriately oriented. The selected eight indexes are shown in Table 6.

Table 6. The eight indexes which can be used to assess E-creativity.

4. Discussion

The expert evaluation and case study identified the indexes that can be used to report the existence of E-creativity. The rationale for the eight identified indexes is as follows. Since E-creativity involves following rules to generate different creative ideas within a relevant conceptual space, Index 1 (related to the rule dimension) and Index 22 (related to the conceptual space dimension) are prerequisites for the existence of E-creativity. Index 23 is related to the conceptual space dimension, but it also combines the exploratory process and creativity, suggesting relations between creative levels and the concepts generated in an exploratory process. As E-creativity is a special form of creativity, it needs to satisfy the basic requirements of creativity, which are reflected by Index 4, Index 10, and Index 15 (Hung & Choy, 2013). Index 33 and Index 34 identify the existence of E-creativity from the perspective of creative results: the changes in ideas originate from E-creativity and can reflect its existence to some degree.

The E-creativity assessment indexes can be used to refine the definition of E-creativity. From the results of the experimental determination phase, E-creativity is a novel concept-changing process during which people can explore various concepts with different features based on the same design point. The results also indicate implicit relations and interactions among the ideas behind the E-creativity process.

Although our studies and analysis provide preliminary evidence for the validity of the scale, there are some statistical limitations. For example, two components include only one criterion each, which limits the analysis of their significance levels and their interactions with other components. In addition, the order of positive and negative phrases in each pair is fixed, which may have influenced participants’ decisions; more investigation is needed to understand this effect. Furthermore, the assessment scale was summarized from experts’ judgments, and there is little prior research on E-creativity assessment criteria to draw on; indeed, filling this gap is the aim of our study. We therefore followed the process suggested by Horn and Salvendy (2006), who obtained assessment criteria by asking experts to judge, based on cases, whether candidate criteria are suitable. In other words, no existing literature can directly confirm the experts’ judgments. Also, although E-creativity can exist in various forms (such as games, shapes, and landscapes), our study used product and graphic design as examples. While we aim for the scale to apply broadly, it was only tested in these two areas, which raises questions about its universality in other E-creativity design contexts. In the future, more areas will be considered.

Moreover, it is notable that the six dimensions all derive from the definitions of E-creativity, a method used by existing creativity assessment scale development processes (Horn & Salvendy, 2006). Dimensions beyond the definitions are not included because their quality cannot be guaranteed. Applying this method, we reviewed existing E-creativity definitions by searching E-creativity-related papers in Google Scholar with the keywords ‘E-creativity’ and ‘exploratory creativity,’ and stopped the search when no new definitions appeared, to ensure a comprehensive collection. We therefore consider the six dimensions encompassing, though we acknowledge that more methods to validate the scope of the dimensions should be considered in the future. Finally, due to the limited research on the factors of E-creativity, the initial forty-two indexes and six dimensions may not be inclusive enough to represent all potential indexes of E-creativity.

5. Empirical study

This empirical study was conducted to compare the performance of professional and nonprofessional raters when using the scale. Specifically, we sought to identify differences in assessment scores and index results between nonprofessional and professional raters, and to compare their internal consistencies when using the scale to assess E-creativity.

5.1. Method

Twenty-three professional raters (fourteen female, nine male) and 37 nonprofessional raters (eighteen female, nineteen male) were recruited. Professional raters have more than five years’ experience in creativity and design, while nonprofessional raters have less than three years’ experience. The empirical study was designed and conducted via an online questionnaire platform.

At the beginning of the questionnaire, participants were given an introduction to E-creativity and the concepts used in the study. They were then asked to assess to what degree the given antonyms could express E-creativity in the given design cases. If they thought an antonym pair could express E-creativity in a given design case, they selected which term of the pair fit better and indicated the degree from one to three (one means weak, three means strong). If they thought the pair could not express the E-creativity of the case, they selected zero. An example case was given to ensure all participants understood the assessment process. The questionnaire contained three design cases (Figure 4), each with the eight pairs listed in Table 6 to be judged. It took each rater around 10 minutes to finish.

Figure 4. Design cases used for empirical study.

6. Results

Each index is a pair of antonyms, and only one side of each pair indicates the existence of E-creativity. For convenience of analysis, degrees on the positive side are marked with a positive sign (1, 2, or 3), while degrees on the negative side are marked with a negative sign (−1, −2, or −3). The differences between nonprofessional and professional raters across the three design cases are shown in Table 7.

Table 7. Results of the differences between nonprofessional and professional raters in the three design cases.
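A minimal sketch of the signed scoring scheme described above (a hypothetical helper, not taken from the paper):

```python
def signed_score(side: str, degree: int = 0) -> int:
    """side: 'positive', 'negative', or 'none'; degree: 1-3 (ignored for 'none')."""
    if side == "none":
        return 0  # the pair does not express E-creativity for this case
    if not 1 <= degree <= 3:
        raise ValueError("degree must be 1, 2, or 3")
    return degree if side == "positive" else -degree

assert signed_score("positive", 2) == 2   # positive side, medium expression
assert signed_score("negative", 3) == -3  # negative side, strong expression
assert signed_score("none") == 0
```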

Cronbach’s alpha is 0.887 for nonprofessional raters and 0.882 for professional raters, indicating that both types of raters show very high internal consistency.

7. Discussion

The ratings given by nonprofessionals and professionals mostly range from −1.5 to 1.5. This small score range may be caused by the zero-centered 7-point Likert scale: participants are less likely to give extreme scores (3 or −3). From the results, the identification of the positive side is at a reliable level, especially for the nonprofessional group. In the professional group, Index 1 and Index 19 were reported on the negative side, opposite to the nonprofessionals. These two indexes are related to the definition of E-creativity, so this difference can be explained by professional and nonprofessional raters having different understandings of E-creativity.

Comparing the results of nonprofessional and professional raters shows that the scores given by nonprofessional raters are often higher than those of professional raters. This may be because nonprofessionals have looser requirements for E-creativity, which affected their ratings. The exception is Index 15, for which professional raters gave a higher score than nonprofessional raters.

Comparing the scores among the eight indexes, Index 23 is the most important index for E-creativity from the nonprofessionals’ perspective, while Index 15 and Index 34 are the most important indexes from the professionals’ perspective.

Although there are some score differences between professional and nonprofessional raters, they are not statistically significant (p = 0.385). This indicates that, when using the developed E-creativity assessment scale, the raters’ professional level may not affect the assessment results to a significant degree. It should also be noted that the empirical study in Section 5 builds on the metrics identification and experimental determination studies in Section 3, so the reliability of the studies in Section 3 may affect the results in Section 5.

8. Conclusion

This research developed an E-creativity assessment scale in which eight indexes were proposed through metrics identification and experimental determination studies. These indexes relate to E-creativity attributes, prerequisites for the existence of E-creativity, relations between the exploratory process and creativity, and E-creativity results. An empirical study was then conducted to examine the differences between nonprofessionals and professionals. The results suggest that the scores given by nonprofessional raters are often higher than those of professional raters, but the difference is not significant.

Apart from proposing an E-creativity assessment scale, the results of this study also enhance the understanding of E-creativity and contribute to refining its definition. The identified indexes can guide systematic training and assessment of E-creativity by humans or machines. Our results reflect the role of E-creativity in a creative process, especially in a product creation process: E-creativity is not simply related to the exploratory process and its concept space; it also encompasses the relationship between the novelty of these processes and the concepts generated from the exploratory process.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/21650349.2024.2319772


Funding

The work was supported by the National Key Research and Development Program of China [2022YFB3303304]; National Natural Science Foundation of China [62207023].

References

  • Amabile, T. M. (1982). Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43(5), 997–1013. https://doi.org/10.1037/0022-3514.43.5.997
  • Beaty, R. E., & Johnson, D. R. (2021). Automating creativity assessment with SemDis: An open platform for computing semantic distance. Behavior Research Methods, 53(2), 757–780. https://doi.org/10.3758/s13428-020-01453-w
  • Besemer, S. (1984). How do you know It’s creative? Gifted Child Today, 7(2), 30–35. https://doi.org/10.1177/107621758400700214
  • Besemer, S., & O’Quin, K. (1986). Analyzing creative products: Refinement and test of a judging instrument. The Journal of Creative Behavior, 20(2), 115–126. https://doi.org/10.1002/j.2162-6057.1986.tb00426.x
  • Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103(1–2), 347–356. https://doi.org/10.1016/S0004-3702(98)00055-1
  • Boden, M. A. (2004). The creative mind: Myths and mechanisms. Routledge.
  • Boden, M. A. (2009). Computer models of creativity. AI Magazine, 30(3), 23–34. https://doi.org/10.1609/aimag.v30i3.2254
  • Childs, P. R. N., Hamilton, T., Morris, R. D., & Johnston, G. (2006). Centre for technology enabled creativity. In DS 38: Proceedings of E&DPE 2006, the 8th International Conference on Engineering and Product Design Education, Salzburg, Austria, (pp. 367–372).
  • Childs, P., Han, J., Chen, L., Jiang, P., Wang, P., Park, D., Yin, Y., Dieckmann, E., & Vilanova, I. (2022). The creativity diamond—A framework to aid creativity. Journal of Intelligence, 10(4), 73. https://doi.org/10.3390/jintelligence10040073
  • Colton, S., & Wiggins, G. A. (2012, August). Computational creativity: The final frontier? ECAI, 12(1), 21–26. https://doi.org/10.3233/978-1-61499-098-7-21
  • Cropley, D., & Cropley, A. (2005). Engineering creativity: A systems concept of functional creativity. In J. C. Kaufman & J. Baer (Eds.), Creativity across domains: Faces of the muse (pp. 169–185). Psychology Press.
  • Cropley, D., & Kaufman, J. C. (2012). Measuring functional creativity: Non-Expert Raters and the creative solution diagnosis scale. The Journal of Creative Behavior, 46(2), 119–137. https://doi.org/10.1002/jocb.9
  • Dormans, J., & Leijnen, S. (2013). Combinatorial and exploratory creativity in procedural content generation. FDG 2013: Proceedings of the fourth workshop on Procedural Content Generation for Games at the Foundations of Digital Games Conference, May 14–17, 2013, Chania, Greece. Workshop proceedings of the 8th International Conference on the Foundations of Digital Games (pp. 1–4).
  • Ekvall, G. (1996). Organizational climate for creativity and innovation. European Journal of Work and Organizational Psychology, 5(1), 105–123. https://doi.org/10.1080/13594329608414845
  • Epstein, R., Schmidt, S. M., & Warfel, R. (2008). Measuring and training creativity competencies: Validation of a new test. Creativity Research Journal, 20(1), 7–12. https://doi.org/10.1080/10400410701839876
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
  • Forgeard, M. (2022). Prosocial motivation and creativity in the arts and sciences: Qualitative and quantitative evidence. Psychology of Aesthetics, Creativity, and the Arts. https://doi.org/10.1037/aca0000435
  • Groeneveld, W., Van den Broeck, L., Vennekens, J., & Aerts, K. (2022, July). Self-assessing creative problem solving for aspiring software developers: A pilot study. In Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education, Dublin, Ireland (Vol. 1, pp. 5–11).
  • Guckelsberger, C., Kantosalo, A., Negrete-Yankelevich, S., & Takala, T. (2021, September). Embodiment and computational creativity. In International Conference on Computational Creativity, Mexico City, Mexico.
  • Hair, J. F., Ortinau, D. J., & Harrison, D. E. (2010). Essentials of marketing research (Vol. 2). McGraw-Hill/Irwin.
  • Hazeri, K. (2019). Development and validation of a product creativity evaluation framework for the assessment of functional consumer products. Imperial College London.
  • Hoque, A. S. M. M., Siddiqui, B. A., Awang, Z. B., & Baharu, S. M. A. T. (2018). Exploratory factor analysis of entrepreneurial orientation in the context of Bangladeshi small and medium enterprises (SMEs). European Journal of Management and Marketing Studies, 3(2), https://doi.org/10.5281/zenodo.1292331
  • Horn, D., & Salvendy, G. (2006). Product creativity: conceptual model, measurement and characteristics. Theoretical Issues in Ergonomics Science, 7(4), 395–412. https://doi.org/10.1080/14639220500078195
  • Horn, D., & Salvendy, G. (2009). Measuring consumer perception of product creativity: Impact on satisfaction and purchasability. Human Factors and Ergonomics in Manufacturing & Service Industries, 19(3), 223–240. https://doi.org/10.1002/hfm.20150
  • Hung, E. C., & Choy, C. S. (2013). Conceptual recombination: A method for producing exploratory and transformational creativity in creative works. Knowledge-Based Systems, 53, 1–12. https://doi.org/10.1016/j.knosys.2013.07.007
  • Iqbal, A. (2022, May). Evidence of the transmutation of creative elements using a computational creativity approach. In 2022 IEEE 12th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang Island, Malaysia (pp. 139–143). IEEE.
  • Jarosewich, T., Pfeiffer, S. I., & Morris, J. (2002). Identifying gifted students using teacher rating scales: A review of existing instruments. Journal of Psychoeducational Assessment, 20(4), 322–336. https://doi.org/10.1177/073428290202000401
  • Jennings, K. E., Keith Simonton, D., & Palmer, S. E. (2011). Understanding exploratory creativity in a visual domain. In Proceedings of the 8th ACM conference on Creativity and cognition (C&C ’11). Association for Computing Machinery, New York, NY, USA, 223–232.
  • Karimi, P. (2019). Studying the impact of an AI model of conceptual shifts in a co-creative sketching tool (Doctoral dissertation, The University of North Carolina at Charlotte).
  • Karimi, P., Rezwana, J., Siddiqui, S., Maher, M. L., & Dehbozorgi, N. (2020, March). Creative sketching partner: An analysis of human-AI co-creativity. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari Italy (pp. 221–230).
  • Kilicay-Ergin, N. H., & Jablokow, K. W. (2012). Problem-solving variability in cognitive architectures. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6), 1231–1242. https://doi.org/10.1109/TSMCC.2012.2201469
  • Kim, S., Choe, I., & Kaufman, J. C. (2019). The development and evaluation of the effect of creative problem-solving program on young children’s creativity and character. Thinking Skills and Creativity, 33, 100590. https://doi.org/10.1016/j.tsc.2019.100590
  • Liapis, A., Martínez, H. P., Togelius, J., & Yannakakis, G. N. (2013). Transforming exploratory creativity with DeLeNoX. In 4th International Conference on Computational Creativity, ICCC 2013, Sydney, Australia (pp. 56–63). Faculty of Architecture, Design and Planning, The University of Sydney.
  • Lu, C. C., & Luh, D. B. (2012). A comparison of assessment methods and raters in product creativity. Creativity Research Journal, 24(4), 331–337. https://doi.org/10.1080/10400419.2012.730327
  • Maiden, N., Jones, S., Karlsen, K., Neill, R., Zachos, K., & Milne, A. (2010, September). Requirements engineering as creative problem solving: A research agenda for idea finding. In 2010 18th IEEE International Requirements Engineering Conference, Sydney, NSW (pp. 57–66). IEEE.
  • Marrone, R., Taddeo, V., & Hill, G. (2022). Creativity and artificial intelligence—A Student perspective. Journal of Intelligence, 10(3), 65. https://doi.org/10.3390/jintelligence10030065
  • Miller, A. L. (2014). A self-report measure of cognitive processes associated with creativity. Creativity Research Journal, 26(2), 203–218. https://doi.org/10.1080/10400419.2014.901088
  • Neihart, M. (1998). Creativity, the arts, and madness. Roeper Review, 21(1), 47–50. https://doi.org/10.1080/02783199809553930
  • Nembhard, I. M., & Lee, Y. S. (2017). Time for more creativity in health care management research and practice. Health Care Management Review, 42(3), 191. https://doi.org/10.1097/HMR.0000000000000171
  • Park, S. H., Kim, K. K., & Hahm, J. (2016). Neuro-scientific studies of creativity. Dementia and Neurocognitive Disorders, 15(4), 110–114. https://doi.org/10.12779/dnd.2016.15.4.110
  • Rayasam, S. (2016). Transformational creativity in requirements goal models (Doctoral dissertation, University of Cincinnati).
  • Rebelo, A. D. P., Inês, G. D. O., & Damion, D. V. (2022). The impact of artificial intelligence on the creativity of videos. ACM Transactions on Multimedia Computing, Communications and Applications, 18(1), 1–27. https://doi.org/10.1145/3462634
  • Rhodes, M. (1961). An analysis of creativity. Phi Delta Kappan, 42, 305–310. https://www.jstor.org/stable/20342603
  • Riedl, M. O., & Young, R. M. (2006). Story planning as exploratory creativity: Techniques for expanding the narrative search space. New Generation Computing, 24(3), 303–323. https://doi.org/10.1007/BF03037337
  • Ritchie, G. (2009). Can computers create humor? AI Magazine, 30(3), 71–81. https://doi.org/10.1609/aimag.v30i3.2251
  • Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92–96. https://doi.org/10.1080/10400419.2012.650092
  • Runco, M., Plucker, J., & Lim, W. (2001). Development and psychometric integrity of a measure of ideational behavior. Creativity Research Journal, 13(3&4), 393–400. https://doi.org/10.1207/S15326934CRJ1334_16
  • Sahu, S., & Mukherjee, A. (2013). Explanations for creativity.
  • Sternberg, R. J., & Karami, S. (2022). An 8P theoretical framework for understanding creativity and theories of creativity. The Journal of Creative Behavior, 56(1), 55–78. https://doi.org/10.1002/jocb.516
  • Torrance, E. P. (1966). Torrance tests of creative thinking: Norms-technical manual. Personnel Press.
  • Van Goch, M. (2018, July). Creativity in liberal education before and after study commencement. In 4th International Conference on Higher Education Advances (HEAD’18), Valencia, Spain (pp. 1475–1483). Editorial Universitat Politècnica de València.
  • Walia, C. (2019). A dynamic definition of creativity. Creativity Research Journal, 31(3), 237–247. https://doi.org/10.1080/10400419.2019.1641787
  • Weiss, S., & Wilhelm, O. (2020). Coda: Creativity in psychological research versus in linguistics–same but different? Cognitive Semiotics, 13(1). https://doi.org/10.1515/cogsem-2020-2029
  • Wiggins, G. A. (2006). A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems, 19(7), 449–458. https://doi.org/10.1016/j.knosys.2006.04.009
  • Yin, Y., Han, J., Huang, S., Zuo, H., & Childs, P. (2021). A study on student: Assessing four creativity assessment methods in product design. Proceedings of the Design Society: International Conference on Engineering Design (ICED21), Gothenburg, Sweden (Vol.1, pp. 263–272).
  • Yin, Y., Qin, K., Liu, H., Childs, P., Sun, L., & Chen, L. (2022, August). A study of the exploratory creativity performance between machine and human designers. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, USA (Vol. 86267, p. V006T06A001). American Society of Mechanical Engineers.