The Design Journal
An International Journal for All Aspects of Design
Volume 27, 2024 - Issue 2
Research Articles

An application of generative AI for knitted textile design in fashion

Xiaopei Wu & Li Li
Pages 270-290 | Received 11 Oct 2023, Accepted 15 Nov 2023, Published online: 02 Feb 2024

Abstract

In recent years, artificial intelligence (AI) in the form of generative deep learning models has proliferated as a tool to facilitate or exhibit creativity across various design fields. In fashion design, existing applications of AI have largely addressed general design elements, such as style, silhouette, colour, and pattern, and paid less attention to the underlying textile attributes. To address this gap, this study explores the effects of applying a generative deep learning model specifically to the textile component of the fashion design process, utilizing a Generative Adversarial Network (GAN) model to generate new images of knitted textile designs, which were then assessed for aesthetic quality in a qualitative survey with over 200 respondents. The results suggest that the GAN-based method can produce new textile designs with creative qualities and practical utility that facilitate the fashion design process.

Introduction

In recent years, artificial intelligence (AI) in the form of generative deep learning models has been widely applied across various design fields. With the ability to learn from vast amounts of data and generate original designs, generative deep learning models have exhibited a revolutionary potential to facilitate creative design processes by expanding or exhibiting creativity (Franceschelli and Musolesi Citation2021; Mazzone and Elgammal Citation2019). In light of this, there has been a growing exploration into the application of generative deep learning AI towards the creative fashion design process (Harreis et al. Citation2023; Luce Citation2019).

However, existing creative fashion design applications of AI have largely focused on general fashion design elements, such as style, silhouette, colour, or pattern, and have neglected the underlying textile attributes, such as fibre composition, textile structure, or finish, which are an integral component of fashion design. To address this gap, this study investigates the effects of applying a generative deep learning model to the textile component of the creative fashion design process, pursued through two research objectives: (1) devising a method that utilizes a generative deep learning model to generate new images of textile designs, which are then (2) evaluated for aesthetic quality in a qualitative survey with over 200 respondents. The results suggest that the generative deep learning (GAN) based method can produce new knitted textile designs with creative qualities and practical utility that facilitate the knitwear fashion design process.

Background

Generative AI models

Given the recent proliferation of 'Generative AI' tools, now commonly associated with the text and image GPT models released by OpenAI, we first define both 'generative AI' as a genre and the 'generative deep learning AI models' referred to throughout this study, so as not to conflate the terms.

The genre of generative AI has a history that far predates the latest Generative Pre-trained Transformer (GPT) models, and encompasses any AI algorithm that can generate new output content (e.g. images, text, videos, and other media) based on a given input prompt (McKinsey & Company Citation2023). It therefore includes programming-based approaches as early as the 1950s–1960s (Fernandez and Vico Citation2014; Victoria and Albert Museum Citation2016), which relied on explicit algorithmic rules to generate specific creative outputs of patterns, images, or music (Horner and Goldberg Citation1991; McCorduck Citation1991). Generative AI expanded further with the advent of machine learning algorithms which, rather than relying on explicitly programmed rules, enabled AI systems to learn intricate patterns from large datasets and generate outputs based on statistical models (Nicholson Citation2019). This breakthrough, which exceeded the capabilities of explicitly programmed systems, brought about more adaptive and data-driven generative AI models capable of generating images, music, and text (McCormack and D'Inverno Citation2012; Spratt Citation2018). A further advancement came with the introduction of deep learning models: a new class of AI empowered by neural networks with multiple layers, enabling them to process large, complex datasets and generate complex, high-fidelity outputs (LeCun, Bengio, and Hinton Citation2015; Szegedy et al. Citation2013), thus manifesting remarkable capabilities in creative content generation. To clarify, the term 'generative deep learning models', used predominantly in this study, refers to deep learning models with generative AI applications, examples of which include Artificial Neural Networks (ANNs) (Auerbach Citation2015), Convolutional Neural Networks (CNNs) (Wu Citation2017), and Generative Adversarial Networks (GANs) (Goodfellow et al. Citation2014).

Generative AI in fashion design

Early generative AI applications in fashion design leveraged Interactive Genetic Algorithms (IGA), a method of evolutionary computing popular for creative tasks at the time, to develop computer-aided fashion systems capable of generating basic representations of garment designs or technical patterns (Cho Citation2002; Kim and Cho Citation2000; Mok et al. Citation2013; Tabatabaei Anaraki Citation2017; Xu et al. Citation2016). More recent studies have explored the creative fashion design potential of deep learning models (Kim, Shin, and Kim Citation2007), notably GANs (Choi et al. Citation2023; Cui et al. Citation2018; Sbai et al. Citation2018; Wu et al. Citation2020; Yang et al. Citation2021; Yuan and Moghaddam Citation2020). However, these studies have limited their focus to general fashion design elements, such as style, silhouette, colour, or pattern. Another popular branch of studies has gravitated towards fashion product or outfit recommendations based on overall style attributes, incorporating generative deep learning methods such as a combined Genetic Algorithm and neural network method (Bulgun et al. Citation2015), ANNs (Wang, De Haan, and Rasheed Citation2016), a combined CNN and Recurrent Neural Network (RNN) method (Li et al. Citation2017), and GANs (Hsiao et al. Citation2019; Liu et al. Citation2019; Yildirim et al. Citation2019). Outside academia, too, explorations of generative deep learning models in fashion design have primarily dealt with silhouette, colour, and print attributes. For example, in Project Muze (a collaboration between Google and online fashion retailer Zalando), a neural network driven 'predictive design engine' harnessed customer data to generate fashion designs (Rietze Citation2016), and artist Robbie Barrat used neural networks trained on images of Balenciaga's runway looks to generate new ones (Schwab Citation2018).
For a comprehensive overview of the reviewed studies on generative AI applications in fashion design, see Appendix A for a Literature Review Matrix which describes each study in terms of the AI model or method adopted, the intended end-use application and the fashion design attributes addressed.

Based on the above review, existing studies of generative AI in fashion design have focused more heavily on general fashion design attributes (such as style, silhouette, pattern, print, and colour), and scarcely addressed the textile attributes (such as yarn quality or textile structure), a crucial component of the fashion design process. Or, as articulated by Yuan and Moghaddam (Citation2020): 'current literature merely focuses on the generative design of "form", disregarding other non-visual aspects associated with its "function" (e.g. architecture, materials, performance)'. The few existing studies that apply AI to the textile component of fashion design include that of Ekárt (Citation2007), which applied the evolutionary behaviour of genetic algorithms to develop knitted textile stitch design variations; that of Karmon et al. (Citation2018), which developed a parametric computer-aided design tool to support the digital visualization, design, and manufacture of knitted textiles; and that of Richards and Ekárt (Citation2010), which proposed a computer-aided design tool modelled on case-based reasoning that facilitates knitwear pattern development down to the stitch level. Although these studies come closest to textile-conscious generative AI applications for the fashion design process, they are intended more for the technical than the creative aspects of that process. Hence there remains a gap in generative deep learning AI applications to the creative fashion design process that address the attributes of its textile component, which this study intends to explore.

Methodology

Generative deep learning method

The first research objective aimed to devise a method that applies a generative deep learning model to generate new images of textiles that can facilitate the creative fashion design process. Specifically, the devised method targeted knitted textile images to serve the knitwear category of the fashion design process. Homing in on a specific fashion product category was necessary in order to adequately address the category-specific nuances of the fashion design process, thereby enabling the 'richer characterization of creativity' that Bown (Citation2012) refers to. Furthermore, the knit product category is characterized by very distinct and dimensional textile attributes, which are integral in shaping the fashion design outcome (Udale Citation2014), making it a good subject for this exploration.

The framework of the applied generative deep learning based method is portrayed in Figure 1.

Figure 1. Flowchart of the applied generative deep learning based method.


The method applied the StyleGAN generative deep learning model (Karras, Laine, and Aila Citation2019), chosen for its demonstrated ability to generate high-quality images, discern intricate image attributes, and enable transfer learning by adapting pretrained models to domain-specific creative tasks, all qualities which make it well suited to creative image generation across artistic and design domains.

Two variations (A and B) of the generative deep learning based methodology were conducted. The breakdown of their setups is as follows:

  1. Generative deep learning method—Variation A: testing the StyleGAN model's ability to generate new images resembling knitted textile designs and to sufficiently pick up distinguishing attributes from the knitted textile image dataset. The first test was conducted with the image dataset in RGB (coloured), and the second with the same dataset converted to grayscale, to explore the impact of eliminating colour (i.e. using 2-dimensional grayscale tensors instead of 3-dimensional RGB tensors) on the qualities of the generated knitted textile design images.

    • Generative deep learning model: StyleGAN (GitHub repository source code: https://github.com/NVlabs/stylegan), applied via an open-source creative machine learning platform ‘RunwayML’ (https://runwayml.com/).

    • Input: request for a new image of a knitted textile design.

    • Training/Test dataset: unsorted dataset of 1687 knitted textile images total (images pre-processed to 256 × 256 RGB JPEG format).

    • Output: new generated images resembling knitted textile designs (1024 × 1024 RGB JPEG format).

  2. Generative deep learning method—Variation B: testing the StyleGAN model’s ability to generate new images resembling knitted textiles with sufficient definition of specific design attributes (in this case, a cable stitch structure) by tailoring the training dataset to represent the desired attribute.

    • Generative deep learning model: StyleGAN (GitHub repository source code: https://github.com/NVlabs/stylegan), applied via an open-source creative machine learning platform ‘RunwayML’ (https://runwayml.com/).

    • Input: request for a new image of a knitted textile design with a cable stitch.

    • Training/Test dataset: filtered dataset of 284 images of knitted textiles with cable stitch structures (images pre-processed to 256 × 256 Grayscale JPEG format).

    • Output: new generated images resembling knitted textile designs with cable stitches (1024 × 1024 Grayscale JPEG format).
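The grayscale conversion mentioned in the two variations (2-dimensional grayscale tensors in place of 3-dimensional RGB tensors) can be sketched minimally. The NumPy snippet below is illustrative only; the paper does not specify its pre-processing code, and the luminance weights are a standard assumption, not the authors' stated choice:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse an H x W x 3 RGB tensor to an H x W grayscale tensor
    using the common ITU-R BT.601 luminance weights (an assumption)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# A stand-in 256 x 256 RGB image, matching the dataset's pre-processed size.
rgb = np.random.rand(256, 256, 3)
gray = to_grayscale(rgb)
print(rgb.shape, gray.shape)  # (256, 256, 3) (256, 256)
```

The dimensionality drop from (256, 256, 3) to (256, 256) is the change in input representation that Variation A's second test probes.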

Figure 2 presents details of the generative deep learning based method's inputs, outputs, and processes, with image reference examples.

Figure 2. Flowchart of applied generative deep learning based method details (variations A and B).


Design of survey questionnaire

To fulfill the second research objective, the generated textile design images of the devised generative deep learning method were evaluated in terms of creative value (meaning their qualitative, aesthetic features, such as creativity, trendiness, innovation, and style) (Boden Citation2009). This evaluation took the form of a qualitative survey questionnaire, the formats and procedures of which are detailed as follows:

  • Format: online form via Google Forms

  • Participants: 211 random participants across the academic and industry networks of the Primary and Co-Investigator

  • Timing: data collection took place between November 2020 and April 2022

  • Procedure: participants were sent the survey URL link (via email, text, presentation slide with scannable QR code) and given the option to complete immediately or at their convenience.

  • Design: The survey begins by requesting the participant's consent, stating expectations and rights, with a link (https://drive.google.com/file/d/17cxOAkbQ12lpHmxUiTmCQc9fZxCF1iGy/view?usp=sharing) to a more detailed information sheet for reference. After consent, the questionnaire requests the participant's profile information, including name, contact email, gender, year of birth, and field of study or work, and then asks questions gauging the participant's knitting experience (theoretical and practical) and interest; both sets of questions were intended to account for any influences or biases on the later responses. The questionnaire then proceeds to five questions (see Figure 3) which require participants to review given images of knitted textiles (as swatches and as garments) and rate their visual appearance in terms of qualitative traits. Each of these questions presents four knitted textile or knitted garment images, two of which are photos of real, existing physical knitted textiles and two of which are images generated by the devised generative deep learning method, shown in random order without indicating which is which; these five questions evaluate the 'creative value' of the AI-generated images versus the real ones. Triangulation measures: (1) to disguise the AI-generated images alongside photos of real knitted textiles and reduce distractions, the presented images were of similar or like colours shown in equal quantities, and were formatted (e.g. cropped, framed, and aligned) uniformly; (2) to catch response biases, some adjective descriptors were repeated within the same question in negated form, to check that the positive and negative versions were answered consistently; (3) Questions 1–3 were based on knitted textile images, whilst Questions 4–5 were based on knitted garment images (knitted textile images projected onto clothed avatars), to see if the mode of presentation of the knitted textile design had an effect.

  • Location (active URL link): https://forms.gle/6viypo5Xjfm9NuwZA

Figure 3. Screenshot of survey questions.


Results and discussion

Results of the generative deep learning method

The devised generative deep learning method generated images recognizable as knitted textile designs, with perceptible visual attributes (such as stitch, gauge, and hand-feel), which furthermore proved adequate as visual reference for factory knitting technicians to translate into corresponding physical knitted swatch samples. Figure 4 shows a selection of the images output by the generative deep learning method (both Variations A and B) which appeared most definitive of knitted textile designs.

Figure 4. Selection of images output by the generative deep learning method (variations A and B).


The majority of the generated images were definitive enough to be interpreted as representing knitted textiles, and captured recognizable knitted textile technical attributes: Figure 5 presents some of the generated images grouped by the knitted textile attribute which they resemble.

Figure 5. Selection of images output by the generative deep learning method (variations A and B), grouped by the knitted textile attribute which they resemble.


Some of the generated images, examples of which are presented in Figure 6, have an indistinct and distorted quality and are overall indiscernible as knitted textile designs.

Figure 6. Selection of images output by the generative deep learning method (variations A and B) considered visually ambiguous.


A selection of the generated images were then translated into physical form: they were sent to industrial knitting factory technicians, who interpreted them by programming a knitting machine to produce corresponding physical knitted textile swatch samples. Figure 7 shows the generated images above their respective physical knitted swatch renditions.

Figure 7. Generated images (top row) above their corresponding physical knitted swatch interpretations (bottom row).


One purpose of conducting Variation B of the generative deep learning method with only grayscale images in the dataset was to investigate whether a colourless dataset would impact (perhaps enhance) other attributes of the generated images. However, the results show little impact; the images generated from Variation B's grayscale dataset did not exhibit any notable difference in detail or definition from those generated from Variation A's RGB dataset. Furthermore, the colours of the images generated from Variation A's RGB dataset appeared muted, sitting like a filter overlay rather than assimilated into the details of the knitted textile design. Images output from Variation B were representative of cable stitches; however, their level of realism varied, with some exhibiting a surreal, albeit interesting, quality.

Results of the survey questionnaire

The responses to the qualitative survey questionnaire, designed to evaluate the aesthetic quality of the generated images, are presented in the following discussion and figures.

The survey acquired a total of 211 participant responses. The collective profile of these 211 participants is summarized in Table 1, showing the distribution of their genders, ages, fields of study or work, years of knit experience, hands-on knitting ability, theoretical knit understanding, and level of interest in knitwear, based on how they responded to the respective profile questions.

Table 1. Distribution of survey participants' profile responses.

Analyzing this profile distribution, most participants were female and in the 18–24 age range. This reflects the fact that a great number of participants were students at the Investigators' fashion school, which is also visible in the field of work/study distribution. This skewed distribution was anticipated and was not believed to be a deterring factor for the results; in fact, a fashion-related background may benefit the integrity of the responses. Knitwear-specific expertise was not predominant, encouraging a more balanced perspective on the images presented.

The responses to Question 1 indicate a consensus on the aesthetic or style of a knitted textile design, with responses weighted towards some swatches over others; there was a particularly strong consensus on Classic as a descriptor. The purpose of Question 1 was to validate, for the subsequent questions, that there is statistical significance in evaluating images of knitted textile designs based on subjective aesthetic or style descriptors. The responses to Questions 2, 4, and 5 are presented in Figure 8, which indicates the percentage of participants that selected each respective response, conditionally formatted with a heat-map colour coding corresponding to the percentage (the higher the percentage, the 'warmer' the colour; the lower the percentage, the 'cooler' the colour) to enable easy interpretation. Note that Questions 2, 4, and 5 were structured in the same way, designed for participants to rate each of the four knitted textile swatch or garment options based on:

  • three positive statements (‘MOST CREATIVE/INTERESTING’, ‘MOST FASHIONABLE/STYLISH’, and ‘MOST LIKELY TO BUY A SWEATER IN THIS’), towards the left side

  • three negative statements (‘LEAST CREATIVE/INTERESTING’, ‘LEAST FASHIONABLE/STYLISH’, and ‘LEAST LIKELY TO BUY A SWEATER IN THIS’), towards the right side

Figure 8. Graphical representation of survey responses (Questions 2, 4, 5).


In Figure 8, labels on the left side indicate which of the knitted textile image options in each question was based on a real swatch ('REAL') or a generative deep learning method generated image ('AI'); this distinction was hidden from the participants. The responses indicate, first of all, that the generated images were judged with the same level of discernment as the real swatch photos.

To further gauge the level of positivity or negativity the generated images received versus the real images, a scoring system was implemented, also displayed in Figure 8, whereby the response percentages of each knitted textile swatch/garment option were treated as integers and totalled by adding the percentages of the positive statements and subtracting the percentages of the negative statements. The totalled 'BALANCED SCORE' for each option is shown in the right-hand columns, colour coded according to the same heat-map key. The amalgamation of all the 'BALANCED SCORE's for Questions 2, 4, and 5 is shown at the bottom of Figure 8, indicating overall how the generated images scored compared to the real images. Under this scoring system, the generated images collectively scored +212 points, while the real images collectively scored −211, a stark contrast and strong validation for the images generated by the generative deep learning method.
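The balanced-score arithmetic described above can be sketched as follows. The percentages here are hypothetical placeholders, not the paper's actual Figure 8 data:

```python
# Hypothetical response percentages for a single swatch/garment option.
positive = {"most_creative": 40, "most_fashionable": 35, "most_likely_buy": 30}
negative = {"least_creative": 10, "least_fashionable": 15, "least_likely_buy": 20}

def balanced_score(pos: dict, neg: dict) -> int:
    """Sum the positive-statement percentages and subtract the
    negative-statement percentages, as in the paper's scoring system."""
    return sum(pos.values()) - sum(neg.values())

print(balanced_score(positive, negative))  # 60
```

Summing these per-option scores across Questions 2, 4, and 5 yields the collective totals reported (+212 for generated images, −211 for real images).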

The responses to Question 2 indicate that the two generative deep learning generated images (Swatch 1 and 3) were most attributed to the three positive statements of ‘MOST CREATIVE/INTERESTING’, ‘MOST FASHIONABLE/STYLISH’, and ‘MOST LIKELY TO BUY A SWEATER IN THIS’. Likewise, the two real knitted textile images were most attributed to the three negative statements of ‘LEAST CREATIVE/INTERESTING’, ‘LEAST FASHIONABLE/STYLISH’, and ‘LEAST LIKELY TO BUY A SWEATER IN THIS’. This suggests a clear potential of the knitted textile design images generated by the generative deep learning method.

The responses to Question 3 showed a clear consensus towards the interpretation of hand feel based on the knitted textile images, indicating that both generated and real images of knitted textiles have sufficient visual dimensionality and finesse to convey the subtlety of their textural elements to a notable extent.

Questions 4 and 5 are the male and female garment counterparts of the same question. In these questions, although the generated images did not receive the most votes for the positive statements, they were among the top selections, while the real knitted textile images received the most negative statements. This overall indicates that the generative deep learning method can produce knitted textile design images comparable to existing real knitted textiles. The data is not sufficient to confirm that the generated designs were better per se, but there is a strong enough indication that they were considered to be in the same territory of aesthetic quality as the real knitted textile images.

Conclusion

This study set out to investigate the effect of applying a generative deep learning based method to the textile component of the creative fashion design process, by carrying out two key objectives: (1) devising a generative deep learning based method that generates new images of textile designs, and (2) evaluating the aesthetic quality of the generated images through a qualitative survey. The outcomes point to the potential creative value and practical utility of incorporating generative deep learning in the textile design component of the fashion design process. More specifically, the outcomes of each research objective include: (1) a generative deep learning (StyleGAN based) method capable of generating new knitted textile design images, with distinguishable physical attributes (such as stitch, gauge, and hand-feel), that proved sufficient as visual reference for factory knitting technicians to interpret into corresponding physical swatch samples, and (2) evaluative survey responses indicating the same level of discernment towards the physical and aesthetic attributes of the generated knitted textile images as towards the real ones. When projected onto garments, those based on the generated knitted textile images were rated overall more creative, fashionable, and buyable than those based on the real knitted textile images.

The outcomes of this study have both practical and theoretical significance. In terms of practical significance, the results validate that if the devised generative deep learning method were applied in a real-world fashion industry context, it could potentially: (a) augment designers' creativity by providing data-driven textile design inspiration and alleviating non-creative tasks, (b) increase operational efficiency by reducing workload, (c) be compatible with an increasingly digital fashion design process, (d) foster sustainability by reducing waste and sampling needs, (e) reduce development lead-times and costs, and (f) inform on customer needs and trends by learning from the dataset. A practical way in which the devised method could manifest in a fashion industry context is as a digital knitted swatch design generator: an assistive creative design tool that supports swatch or sample design and development in the knitwear fashion or knitted products design process, by providing new images of knitted textile designs to fuel design creativity based on a specific request (e.g. a specified stitch or colour).

This potential practical application of generative deep learning for the fashion and textiles industry could greatly benefit creative designers, the design process, and the product. It demonstrates the significance of integrating generative AI at a 'deeper', more pre-emptive, and elemental level of the textile, the yarn, and someday perhaps even the fibre, which could eventually enable data-driven algorithms to inform and hone fashion designs to more accurately and efficiently meet real-time demands, thereby improving profits and minimizing loss of resources. Secondly, the idea of digitally generated knitted textile images (replacing the need to knit physical swatches) is consistent with the fashion industry's growing reliance on digital design and data management tools and the increasing digitization of formerly physical design articles (e.g. samples, patterns, and sketches), and can also be applied to virtual model renderings for visualization. Anywhere the digital can replace the physical, especially in a convoluted and iterative design process such as the knitwear fashion design process (Petre, Sharp, and Johnson Citation2006), it would help reduce workload and waste.

In terms of theoretical significance, this study presents a novel application of a GAN based generative deep learning model towards the uniquely spatial attributes of knitted textiles. This study contributes insight into whether generative deep learning based AI can exhibit, facilitate, or enhance creativity, specifically in the nuanced and specialized knitwear design process.

Limitations of the study

Several key limitations are noted in this study. In terms of the generative deep learning method applied, there is opportunity to investigate the effects of tuning the dataset and model variables; specifically, to further refine computational model performance by optimizing variables such as the size of the dataset, the data pre-processing technique, the type of model, and the model settings (e.g. number of training epochs, activation function). The training dataset contained a total of 1687 images, which were then sorted by different knitted textile attributes, resulting in smaller training datasets per attribute; expanding the size of the training dataset, as well as the number of attribute classes, would therefore be beneficial. A limitation of the StyleGAN model is that its output does not stray far from the learned attributes of the dataset, so the output tends to reflect the dataset and lacks novelty; there is thus an opportunity to refine the adopted model in a way that increases the novelty of the generated output. Also, this study targeted the knit and knitwear category of fashion as the subject for experimentation, so it would be informative to expand the testing towards, for example, woven textiles as a comparison.

While this study performed a qualitative evaluation of the output of the applied generative deep learning method, a quantitative evaluation of the quality of the generated images is lacking, such as one based on computational image evaluation metrics (e.g. Fréchet Inception Distance (FID) or Inception Score (IS)).
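For reference, FID compares the mean and covariance of feature vectors extracted from real versus generated images: FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). The sketch below computes this formula with NumPy on random stand-in features; a real evaluation would use Inception-v3 activations for the two image sets, which this illustration does not include:

```python
import numpy as np

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet Inception Distance between two (n_samples, n_features)
    feature arrays: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 C2)^(1/2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    # Tr((C1 C2)^(1/2)) via the eigenvalues of C1 @ C2, which are real
    # and non-negative for positive semi-definite covariance matrices.
    eigvals = np.linalg.eigvals(c1 @ c2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1) + np.trace(c2) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))  # stand-in "real" features
print(fid(a, a))  # ~0 for identical feature sets
```

Lower FID indicates the generated distribution is closer to the real one; shifting every feature of one set inflates the score through the mean term.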

In terms of the qualitative survey, its method and design could be improved to reduce potential bias. The number of survey questions was relatively limited; increasing the quantity and variety of questions could have offered further insight into reactions to the generated images. The real and generated images presented in the survey still varied slightly in colour (despite the attempt to use images of similar colours per question), which poses a risk of selection bias based on colour preference rather than purely on textile attributes; a more rigorous effort should be made to remove potentially distracting features from the images. The survey results could also be further analyzed for meaningful correlations between participant profile (e.g. age, knitting experience, field of work/study) and responses to the generated versus real knitted textile images. Additionally, to more accurately gauge the practical and creative implications of the devised method, it would be essential to subject it to a real-world creative fashion design practice situation, and to quantify its impact on lead-time, cost, or material consumption.

Ethical approval

The survey questionnaire in this study was conducted in accordance with the approved application for ‘Ethical Review for Teaching/Research Involving Human Subjects’ by the Hong Kong Polytechnic University Institutional Review Board on 25 March 2022 (Ref. # HSEARS20220323006).

Statement of informed consent

Informed consent of the participants was obtained before conducting the survey (via https://forms.gle/6viypo5Xjfm9NuwZA, with the full consent agreement detailed here: https://drive.google.com/file/d/17cxOAkbQ12lpHmxUiTmCQc9fZxCF1iGy/view?usp=sharing), including permission to use their responses towards future research/publications. Any personal details collected from participants are kept strictly confidential by the authors.

Supplemental material

Supplemental Material (MS Word, 65.1 KB)

Acknowledgements

The authors would like to especially thank Ms. Cally Kwong Mei Wan for her continued support of this project.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The authors confirm that the image datasets and survey data that support the findings of this study are available from the authors upon reasonable request.

Additional information

Funding

This work was financially supported by the Hong Kong General Research Fund (Project No. 15602323).

Notes on contributors

Xiaopei Wu

Xiaopei Wu is a PhD graduate and postdoctoral fellow in the School of Fashion & Textiles at The Hong Kong Polytechnic University. Previously, she obtained a BSc in Fiber Science & Apparel Design at Cornell University and has 10 years of professional experience working in the fashion industry, managing knitwear product development and production at global companies. Her current research interests lie in the application of digital tools and methodologies for fashion & textile creative design processes.

Li Li

Li Li is a professor in the School of Fashion & Textiles at The Hong Kong Polytechnic University, an Associate Director of the PolyU Academy for Interdisciplinary Research, and a member of the Board of Directors of The Hong Kong Research Institute of Textiles and Apparel Limited (HKRITA). Prior to her academic career, she acquired many years of practical experience as a senior designer and eventually design director in the knitwear fashion industry. Her research focuses on applying concepts of the creative economy, design thinking, and interdisciplinary design methods towards smart functional textile technologies and advanced manufacturing processes.

References

  • Auerbach, D. 2015. “Do Androids Dream of Electric Bananas?” Slate, Bitwise, July 23. https://slate.com/technology/2015/07/google-deepdream-its-dazzling-creepy-and-tells-us-a-lot-about-the-future-of-a-i.html
  • Boden, M. 2009. “Computers and Creativity Models and Applications.”
  • Bown, O. 2012. “Generative and Adaptive Creativity: A Unified Approach to Creativity in Nature, Humans and Machines.” In Computers and Creativity, edited by J. McCormack and M. d’Inverno, 361–381. Heidelberg, Germany: Springer. https://doi.org/10.1007/978-3-642-31727-9_14
  • Bulgun, E., T. Ince, C. Guzelis, and A. Vuruskan. 2015. “Intelligent Fashion Styling Using Genetic Search and Neural Classification.” International Journal of Clothing Science and Technology 27 (2): 283–301. https://doi.org/10.1108/IJCST-02-2014-0022
  • Chen, H., L. Shen, M. Wang, X. Ren, and X. Zhang. 2021. “Innovative Design of Traditional Calligraphy Costume Patterns Based on Deep Learning.” Journal of Physics: Conference Series 1790 (1): 012029. https://doi.org/10.1088/1742-6596/1790/1/012029
  • Cho, S. B. 2002. “Towards Creative Evolutionary Systems with Interactive Genetic Algorithm.” Applied Intelligence 16 (2): 129–138. https://doi.org/10.1023/A:1013614519179
  • Choi, W., S. Jang, H. Y. Kim, Y. Lee, S. Lee, H. Lee, and S. Park. 2023. “Developing an AI-Based Automated Fashion Design System: Reflecting the Work Process of Fashion Designers.” Fashion and Textiles 10 (1): 39–17. https://doi.org/10.1186/s40691-023-00360-w
  • Cui, Y. R., Q. Liu, C. Y. Gao, and Z. Su. 2018. “FashionGAN: Display Your Fashion Design Using Conditional Generative Adversarial Nets.” Computer Graphics Forum 37 (7): 109–119. https://doi.org/10.1111/cgf.13552
  • Ekárt, A. 2007. “Evolution of Lace Knitting Stitch Patterns by Genetic Programming.” Proceedings of the 9th Annual Conference Companion on Genetic and Evolutionary Computation, London, United Kingdom. https://doi.org/10.1145/1274000.1274010
  • Fan, J., J. Fan, and L. Hunter. 2009. “14 – Applications of Artificial Intelligence in Fabric and Garment Engineering.” In Engineering Apparel Fabrics and Garments, 361–382. Cambridge, UK: Woodhead Publishing. https://doi.org/10.1533/9781845696443.361
  • Fernandez, J. D., and F. Vico. 2014. “AI Methods in Algorithmic Composition: A Comprehensive Survey.” arXiv e-prints, arXiv:1402.0585. Accessed February 1, 2014. https://ui.adsabs.harvard.edu/abs/2014arXiv1402.0585F.
  • Franceschelli, G., and M. Musolesi. 2021. “Creativity and Machine Learning: A Survey.” ArXiv, abs/2104.02726.
  • Goodfellow, I. J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. 2014. “Generative Adversarial Networks.” ArXiv, abs/1406.2661.
  • Harreis, H., T. Koullias, R. Roberts, and K. Te. 2023. Generative AI: Unlocking the Future of Fashion. Düsseldorf, Germany: McKinsey & Company.
  • Horner, A., and D. E. Goldberg. 1991. Genetic Algorithms and Computer-Assisted Music Composition. Vol. 51. Ann Arbor, MI: Michigan Publishing, University of Michigan Library.
  • Hsiao, W., I. Katsman, C. Wu, D. Parikh, and K. Grauman. 2019. “Fashion++: Minimal Edits for Outfit Improvement.” 2019 IEEE/CVF International Conference on Computer Vision (ICCV). https://doi.org/10.1109/ICCV.2019.00515
  • Isola, P., J.-Y. Zhu, T. Zhou, and A. A. Efros. 2017. “Image-to-Image Translation with Conditional Adversarial Networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Karmon, A., Y. Sterman, T. Shaked, E. Sheffer, and S. Nir. 2018. “KNITIT: A Computational Tool for Design, Simulation, and Fabrication of Multiple Structured Knits.” Proceedings of the 2nd ACM Symposium on Computational Fabrication, Cambridge, MA, USA.
  • Karras, T., S. Laine, and T. Aila. 2019. “A Style-Based Generator Architecture for Generative Adversarial Networks.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4396–4405. https://doi.org/10.48550/arxiv.1812.04948
  • Karras, T., T. Aila, S. Laine, and J. Lehtinen. 2018. “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” Cornell University Library, arXiv.org. http://arxiv.org/abs/1710.10196.
  • Karras, T., S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. 2020. “Analyzing and Improving the Image Quality of StyleGAN.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  • Kaspar, A., T.-H. Oh, L. Makatura, P. Kellnhofer, J. Aslarus, and W. Matusik. 2019. Neural Inverse Knitting: From Images to Manufacturing Instructions. San Diego, CA: ICML.
  • Kato, N., H. Osone, K. Oomori, C. Ooi, and Y. Ochiai. 2019. “GANs-based Clothes Design: Pattern Maker Is All You Need to Design Clothing.” AH2019 ACM International Conference Proceeding Series, New York, NY, USA.
  • Kim, H. S., and S. B. Cho. 2000. “Application of Interactive Genetic Algorithm to Fashion Design.” Engineering Applications of Artificial Intelligence 13 (6): 635–644. https://doi.org/10.1016/S0952-1976(00)00045-2
  • Kim, N. Y., Y. Shin, and E. Y. Kim. 2007. “Emotion-Based Textile Indexing Using Neural Networks.” Human-Computer Interaction. HCI Intelligent Multimodal Interaction Environments, Berlin, Heidelberg, Germany.
  • Li, Y., L. Cao, J. Zhu, and J. Luo. 2017. “Mining Fashion Outfit Composition Using an End-to-End Deep Learning Approach on Set Data.” IEEE Transactions on Multimedia 19 (8): 1946–1955. https://doi.org/10.1109/TMM.2017.2690144
  • Liu, L., H. Zhang, Y. Ji, and Q. M. Jonathan Wu. 2019. “Toward AI Fashion Design: An Attribute-GAN Model for Clothing Match.” Neurocomputing 341: 156–167. https://doi.org/10.1016/j.neucom.2019.03.011
  • LeCun, Y., Y. Bengio, and G. Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–444. https://doi.org/10.1038/nature14539
  • Lomov, I., and I. Makarov. 2019. “Generative Models for Fashion Industry Using Deep Neural Networks.” 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). https://doi.org/10.1109/CAIS.2019.8769486
  • Luce, L. 2019. Artificial Intelligence for Fashion: How AI is Revolutionizing the Fashion Industry. New York, NY: Apress.
  • Mao, X., Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. 2019. “On the Effectiveness of Least Squares Generative Adversarial Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (12): 2947–2960. https://doi.org/10.1109/TPAMI.2018.2872043
  • Mazzone, M., and A. Elgammal. 2019. “Art, Creativity, and the Potential of Artificial Intelligence.” Arts 8 (1): 26. https://doi.org/10.3390/arts8010026
  • McCorduck, P. 1991. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York, NY: W.H. Freeman.
  • McCormack, J., and M. D'Inverno. 2012. Computers and Creativity. Heidelberg, Germany: Springer. https://doi.org/10.1007/978-3-642-31727-9
  • McKinsey & Company. 2023. What Is Generative AI? San Francisco, CA: McKinsey & Company.
  • Mok, P. Y., J. Xu, X. X. Wang, J. T. Fan, Y. L. Kwok, and J. H. Xin. 2013. “An IGA-Based Design Support System for Realistic and Practical Fashion Designs.” Computer-Aided Design 45 (11): 1442–1458. https://doi.org/10.1016/j.cad.2013.06.014
  • Nicholson, C. 2019. “Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning.” Pathmind Inc. Accessed April 2, 2020. https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning.
  • Petre, M., H. Sharp, and J. Johnson. 2006. “Complexity through Combination: An account of Knitwear Design.” Design Studies 27 (2): 183–222. https://doi.org/10.1016/j.destud.2005.07.003
  • Richards, P., and A. Ekárt. 2010. “Hierarchical Case-Based Reasoning to Support Knitwear Design.” CIRP Journal of Manufacturing Science and Technology 2 (4): 299–309. https://doi.org/10.1016/j.cirpj.2010.06.002
  • Rietze, A. 2016. “Project Muze: Fashion Inspired by You, Designed by Code.” The Keyword | Google blog. https://blog.google/around-the-globe/google-europe/project-muze-fashion-inspired-by-you/.
  • Särmäkari, N., and A. Vänskä. 2021. “‘Just Hit a Button!’ – Fashion 4.0 Designers as Cyborgs, Experimenting and Designing with Generative Algorithms.” International Journal of Fashion Design, Technology and Education 15 (2): 211–220. https://doi.org/10.1080/17543266.2021.1991005
  • Sbai, O., M. Elhoseiny, A. Bordes, Y. LeCun, and C. Couprie. 2018. “DeSIGN: Design Inspiration from Generative Networks.” ArXiv, abs/1804.00921.
  • Schwab, K. 2018. “This AI Designs Balenciaga Better than Balenciaga.” Fast Company. https://www.fastcompany.com/90223486/this-ai-designs-balenciaga-better-than-balenciaga.
  • Spratt, E. L. 2018. “Computers and Art in the Age of Machine Learning.” XRDS: Crossroads, the ACM Magazine for Students 24 (3): 8–20. https://doi.org/10.1145/3186697
  • Szegedy, C., W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. 2013. “Intriguing Properties of Neural Networks.” arXiv:1312.6199. Accessed December 1, 2013. https://ui.adsabs.harvard.edu/abs/2013arXiv1312.6199S.
  • Tabatabaei Anaraki, N. A. 2017. “Fashion Design Aid System with Application of Interactive Genetic Algorithms.” In Computational Intelligence in Music, Sound, Art and Design: Proceedings of the 6th International Conference EvoMUSART 2017, Amsterdam, The Netherlands. Cham, Switzerland: Springer International Publishing.
  • Udale, J. 2014. Fashion Knitwear. Laurence King Publishing. http://ebookcentral.proquest.com/lib/polyu-ebooks/detail.action?docID=1876180.
  • Victoria and Albert Museum. 2016. A History of Computer Art. London: Victoria and Albert Museum. Accessed April 1, 2020. http://www.vam.ac.uk/content/articles/a/computer-art-history/.
  • Wang, H., J. De Haan, and K. Rasheed. 2016. “Style-Me – An Experimental AI Fashion Stylist.” In Trends in Applied Knowledge-Based Systems and Data Science. IEA/AIE 2016. Lecture Notes in Computer Science, Vol. 9799, 553–561. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-42007-3_48
  • Wang, X., M.-X. Tang, and J. Frazer. 2001. “Creative Stimulator: An Interface to Enhance Creativity in Pattern Design.” Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15 (5): 433–440. https://doi.org/10.1017/S089006040115506X
  • Wu, J. 2017. “Introduction to Convolutional Neural Networks.” In National Key Lab for Novel Software Technology. Vol. 5, 23. Nanjing: Nanjing University.
  • Wu, Q., B. Zhu, B. Yong, Y. Wei, X. Jiang, R. Zhou, and Q. Zhou. 2020. “ClothGAN: Generation of Fashionable Dunhuang Clothes Using Generative Adversarial Networks.” Connection Science 33 (2): 341–358. https://doi.org/10.1080/09540091.2020.1822780
  • Xiao, Z., X. Liu, J. Wu, L. Geng, Y. Sun, F. Zhang, and J. Tong. 2018. “Knitted Fabric Structure Recognition Based on Deep Learning.” The Journal of the Textile Institute 109 (9): 1217–1223. https://doi.org/10.1080/00405000.2017.1422309
  • Xu, J., P. Y. Mok, C. W. M. Yuen, and R. W. Y. Yee. 2016. “A Web-Based Design Support System for Fashion Technical Sketches.” International Journal of Clothing Science and Technology 28 (1): 130–160. https://doi.org/10.1108/IJCST-03-2015-0042
  • Xu, Q., L. Hubin, Y. Liu, and S. Wu. 2021. “Innovative Design of Intangible Cultural Heritage Elements in Fashion Design Based on Interactive Evolutionary Computation.” Mathematical Problems in Engineering 2021: 1–11. https://doi.org/10.1155/2021/9913161
  • Yan, H., H. Zhang, L. Liu, D. Zhou, X. Xu, Z. Zhang, and S. Yan. 2023. “Toward Intelligent Design: An AI-Based Fashion Designer Using Generative Adversarial Networks Aided by Sketch and Rendering Generators.” IEEE Transactions on Multimedia 25: 2323–2338. https://doi.org/10.1109/TMM.2022.3146010
  • Yang, C., Y. Zhou, B. Zhu, C. Yu, and L. Wu. 2021. “Emotionally Intelligent Fashion Design Using CNN and GAN.” Computer-Aided Design & Applications.
  • Yildirim, G., N. Jetchev, R. Vollgraf, and U. Bergmann. 2019. “Generating High-Resolution Fashion Model Images Wearing Custom Outfits.” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 3161–3164. https://doi.org/10.1109/ICCVW.2019.00389
  • Ying, W., and L. Zhengdong. 2019. “Intelligent Creative Design of Textile Patterns Based on Convolutional Neural Network.” In Advances in Intelligent, Interactive Systems and Applications. IISA 2018: Advances in Intelligent Systems and Computing, Vol. 885, 210–215. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-030-02804-6_28
  • Yuan, C., and M. Moghaddam. 2020. “Attribute-Aware Generative Design with Generative Adversarial Networks.” IEEE Access 8: 190710–190721. https://doi.org/10.1109/ACCESS.2020.3032280