Research Article

Testing Computational Assessment of Idea Novelty in Crowdsourcing

Received 25 Apr 2022, Published online: 13 Mar 2023
 

ABSTRACT

On crowdsourcing ideation websites, companies can easily collect large numbers of ideas. Screening such a volume of ideas is costly and challenging, which makes automatic approaches necessary. Automatically evaluating idea novelty would be particularly useful, since companies commonly seek novel ideas. Four computational approaches were tested, based on Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), term frequency–inverse document frequency (TF-IDF), and Global Vectors for Word Representation (GloVe), respectively. These approaches were applied to three sets of ideas, and the computed idea novelty scores, along with crowd evaluations, were compared with human expert evaluations. The computational methods do not differ significantly in their correlation coefficients with expert ratings, although the TF-IDF-based measure achieved a correlation above 0.40 in two of the three tasks. Crowd evaluation outperforms all the computational methods. Overall, our results show that the tested computational approaches do not match human judgment well enough to replace it.
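The abstract does not spell out how novelty is computed from the text representations. A common operationalization, used here purely as an illustration, treats an idea as novel when its vector lies far from every other idea in the same task; the sketch below applies this to the TF-IDF representation and correlates the resulting scores with expert ratings. The idea texts and expert ratings are hypothetical placeholders, and the paper's exact formulation may differ.

```python
# Minimal sketch of a TF-IDF-based idea novelty score (an assumed
# formulation, not necessarily the one used in the paper): embed each
# idea as a TF-IDF vector and score novelty as one minus the maximum
# cosine similarity to any other idea in the same task.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_novelty(ideas):
    """Return a novelty score in [0, 1] for each idea text."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(ideas)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, -np.inf)   # ignore self-similarity
    return 1.0 - sim.max(axis=1)     # far from all other ideas = novel

# Hypothetical ideas: the first two are near-duplicates, the third is distinct.
ideas = [
    "a solar-powered phone charger built into a backpack",
    "a backpack with built-in solar charging for phones",
    "a community tool library run out of the local post office",
]
scores = tfidf_novelty(ideas)

# Compare computed scores against (hypothetical) expert novelty ratings,
# mirroring the correlation analysis described in the abstract.
expert_ratings = [2.0, 2.5, 4.5]
r, p = pearsonr(scores, expert_ratings)
print(f"novelty scores: {np.round(scores, 2)}, r = {r:.2f}")
```

The same distance-based scoring could be swapped onto LSA, LDA, or GloVe representations by replacing the TF-IDF vectors with the corresponding document embeddings.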

Plain Language Summary

Creative ideas are crucial to the success of organizations and society in general. Information and communication technologies have made it possible to collect a large number of ideas in a short time. However, evaluating and selecting among many ideas is demanding and time-consuming. Computational methods have been developed for processing textual documents, yet there has been insufficient comparison of these methods for evaluating the novelty of ideas generated through crowdsourcing. This study compared four computational methods for evaluating the novelty of ideas on three distinct topics. Computationally generated novelty scores were compared with human evaluations. Our results show that the computational scores were typically positively related to human judgment. However, the computational methods did not match human evaluation closely enough to replace it. Possible future developments are discussed.

Acknowledgments

The authors acknowledge the support of internal research funding from Kean University.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The work was supported by Kean University [Internal Research Grant].

