
Procedural content generation method for creating 3D graphic assets in Digital Twin

Article: 2216859 | Received 26 Jan 2023, Accepted 18 May 2023, Published online: 06 Jun 2023

Abstract

The concept of Digital Twin was invented not only as a static virtual mimic of reality, but also as an evolving digital copy which stays consistent with its physical counterpart. The life-cycle management of a product or a system cannot be completed without this dynamic feature, yet conventional methods to create graphic assets, especially 3D models, for Digital Twins are still limited. In this paper, Procedural Content Generation is introduced as a potential technique which could reduce the difficulty of creating and modifying the 3D graphic assets of Digital Twins. A trial project and two validation cases are executed to verify the proposed method.

1. Introduction

1.1. Research motivation

As a concept which arose in 2003 for product life-cycle management (Grieves, Citation2014), Digital Twin has become quite prevalent in recent years. Although there are many definitions from both academia and industry (Sørensen et al., Citation2022), some characteristics are still widely accepted as its critical features, including a digitalized copy which couples with and synchronizes to an asset that existed or exists physically, for monitoring, simulating, testing, and forecasting (Jeršov, Citation2020; Glaessgen & Stargel, Citation2012), while the latter could be either a product or a system. In many cases, this virtual mimic consists of several 3D graphic assets. They are the essential foundation of the entire Digital Twin as the visual representation of a complex product or system (Naserentin et al., Citation2022). Therefore, a precise, even ultra-realistic 3D model is necessary (Glaessgen & Stargel, Citation2012).

Unfortunately, a critical issue is still rarely reported: how to create 3D models in a Digital Twin that can fulfill this requirement? Although a production process has been proposed in previous studies in which 3D data are converted from layered 2D information delivered by GIS services (Aheleroff et al., Citation2021), survey results still show that industrial users prefer 3D models as the foundation of display for Digital Twin. Compared with dashboards of data, 3D representations are more intuitive and “visually stimulating”. It is also acknowledged that 3D graphics is a sensible way to lay out and convey large amounts of information in Digital Twins (Dertien & McMahon, Citation2022).

According to the development sequence of a real asset and its virtual version, conventional solutions can be categorized into two types: the Digital Twin Instance, which is built based on an existing object, and the Digital Twin Prototype, which is prepared before or without a physical twin (Sørensen et al., Citation2022). Figures 1 and 2 show their production processes.

Figure 1. Development of a Digital Twin Instance

Figure 2. Development of a Digital Twin Prototype

The creation of a Digital Twin Instance starts with data collection. For visualization, this step usually includes photo shooting and/or 3D scanning with certain devices. Figure 1 demonstrates a production procedure that begins with photos. Many photos were taken of the target statue from different perspectives and then imported into a photogrammetry tool as 3D scanning references to generate a high-resolution 3D mesh. The generated 3D mesh is re-topologized into a low-resolution 3D model and mapped with painted 2D textures as the Digital Twin of the original statue.

The creation of a Digital Twin Prototype begins virtually. Data such as CAD files of the corresponding physical counterpart are essential. However, this type of digital copy is not suitable for direct application as a Digital Twin, because polygon meshes generated by CAD tools include too much information and post-processing is needed, as pointed out in previous studies (Eyre & Freeman, Citation2018). CAD software is also not a viable choice for Digital Twin visualization due to its lack of capacity for real-time data feeding (Eyre & Freeman, Citation2018).

As a result, rebuilding 3D models based on these design files is necessary as the second step. Once a new model is completed and polished with colorful textures, it is ready to be exported to and imported into platforms which support the development of real-time interactions. 3D game engines such as Unity and Unreal Engine are used by researchers due to their performance in presenting realistic visuals and their potential for the creation of various features (Sørensen et al., Citation2022; Jeršov, Citation2020; Eyre & Freeman, Citation2018). As free tools for educators and students (Unity Technologies, Citation2023; Epic Games, Citation2023), their capability of multi-platform deployment is also helpful (Epic Games, Citation2023; Unity Technologies, Citation2023). The example in Figure 2 selected Unity as its development platform.

Current methods to create 3D models for a Digital Twin can be simplified into a linear workflow, as illustrated in Figure 3.

Figure 3. Conventional methods to develop Digital Twin

Data such as 3D scanning results and CAD files are collected as references, based on which simplified 3D models are built and transferred to real-time development platforms, where more behaviors are attached to this set of graphic assets via programming. Then all graphics and code are compiled and distributed to the devices of the target users of this Digital Twin.

After the development of various use cases, the disadvantages of this workflow have been experienced and pointed out by different researchers.

The first issue is related to development cost. Creating a high-quality Digital Twin depends heavily on time, labor and knowledge in specific domains such as 3D modelling (Naserentin et al., Citation2022; Sørensen et al., Citation2022). However, researchers’ major tasks are supposed to be running multiple tests with this Digital Twin, to simulate what-if scenarios and then provide constructive suggestions (Azfar et al., Citation2022).

The second concern is the maintenance of Digital Twins after they are created. As mentioned in the definition of this concept, life-cycle management of a product or a system is a potential application of Digital Twins (Grieves, Citation2014), which highlights the importance of synchronization between digitalization and reality. Therefore, modifications to 3D models may be needed quite frequently in application, while it might be hard to make these changes due to a lack of time and specific knowledge.

Last but not least, the Digital Twins created in this way usually handle data communication through interactivity development and model deployment, by displaying and updating data around the 3D models. A typical example is the Digital Twin Prototype demonstrated in Figure 2. It recorded the construction progress of a residential building by hiding and showing 3D models floor by floor, item by item. When to display which mesh is controlled by the program, and the appearance of this virtual tower is synchronized with the construction progress updated by the engineering team of this building in reality. Users cannot adjust any 3D mesh after the deployment of this Digital Twin, even though this kind of modification is sometimes necessary, e.g., when there is a shortage of specific fit-out props such as curtain wall mullions and new items of different designs are supplied. Programmers cannot tweak 3D meshes in this case either, although it might be a common situation in reality. As a result, data communication between virtuality and reality is limited in this legacy production workflow, which potentially breaks synchronization between physical assets and their digital replicas.

To sum up, with the current production procedure it is hard for 3D models in a Digital Twin to stay as dynamic as their physical counterparts, especially when the latter change frequently. The deficiency of the current solution can be summarized as: how to keep these 3D meshes in Digital Twins alive?

1.2. Applications of PCG

Procedural Content Generation, or PCG for short, has a long history in the computer game industry due to its low-cost, high-output nature. It is viewed as a breakthrough tool for both the hardware bottlenecks of gaming devices, such as small hard disks or limited RAM, and the financial constraints of development budgets, which demand more output with less workforce in a shorter duration.

Early applications of PCG in this domain focused on 2D mazes and planar dungeons (Lipinski et al., Citation2019). Platform games such as Super Mario Bros are hot topics among PCG researchers, with many papers contributing different algorithms to improve their procedural generation (Gao et al., Citation2022).

The creation of 3D content has been investigated increasingly during the first two decades of the 21st century, following the upgrading of computer hardware in this period. A prevalent approach is the generation of vast terrains (Latif et al., Citation2022) or even complex urban environments with roads and buildings (Azfar et al., Citation2022), as part of a racing game or an open-world adventure in virtual space (Gao et al., Citation2022). No Man’s Sky, a game released in 2016, featured PCG at its core and generated 18 quintillion planets with different ecosystems in a digital universe (Murray, Citation2014).

PCG also has some applications in Digital Twin development. In 2018, several researchers combined this technique with a flood hydraulic model by generating random urban environments, based on a set of configurable parameters, as the simulation area for flooding (Mustafa et al., Citation2018).

2. Methods

To address concerns caused by conventional methods to create 3D models for Digital Twin, a few new requirements have been raised, such as:

  1. To save workforce on maintenance, automation or semi-automation should be enabled for modifications.

  2. To save time, it should be easy to change then fast to iterate.

  3. To save development cost, it is crucial to reuse existing graphic assets. Consequently, the new method should also be compatible with current tools.

As a technique which has been applied in the game development industry since the 1980s (Lipinski et al., Citation2019), Procedural Content Generation (PCG) is a potential solution.

PCG also has multiple versions of definition in academia and industry. Some researchers described it as a technique (Cook, Citation2022; Lipinski et al., Citation2019), while some argued that it should be a system (Kim et al., Citation2019). Some common features are shared among these definitions, including:

  1. automation or semi-automation enabled. In contrast to manual creation, PCG works based on pre-defined rules or algorithms and can respond autonomously to users’ inputs (Gao et al., Citation2022).

  2. parameters/attributes configuration. These parameters are part of the pre-defined rules or algorithms, inviting users to provide inputs to complete a loop of content generation (Mustafa et al., Citation2018). They could be a random number within a specific range, such as an integer between 1 and 100, a string of descriptive words, or a rasterized image. Compared with the time and knowledge needed to update an existing Digital Twin in traditional ways, modification via PCG costs less time.

  3. consuming less resources while generating more. Compared with traditional graphic assets, PCG files are usually of smaller size because they are just a collection of rules and constraints.

  4. randomness and unpredictability. Building on all the features mentioned above, the assets generated by PCG are often surprising, and the variation of their outputs has been applied in game development to increase replay-ability (Cook, Citation2022).
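
The following minimal sketch (in Python, with hypothetical names; it is not taken from any cited tool) illustrates how these four features combine: a short rule, a few exposed parameters, and a seed that makes the randomness reproducible.

```python
# Minimal illustrative sketch (not from the paper): a rule-based generator
# that maps a small set of user-facing parameters to content, showing the
# four features above: automation, configurable parameters, compact rules,
# and seeded (reproducible) randomness. All names are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class ScatterParams:
    count: int = 50          # total items to place (e.g., trees)
    area: float = 100.0      # side length of a square placement area, in metres
    min_scale: float = 0.8   # lower bound of per-item uniform scale
    max_scale: float = 1.2   # upper bound of per-item uniform scale
    seed: int = 42           # same seed + same parameters => same output

def generate(params: ScatterParams) -> list[dict]:
    """Pre-defined rule: scatter `count` items uniformly inside `area`,
    each with a random scale in [min_scale, max_scale]."""
    rng = random.Random(params.seed)
    items = []
    for i in range(params.count):
        items.append({
            "id": i,
            "x": rng.uniform(0.0, params.area),
            "y": rng.uniform(0.0, params.area),
            "scale": rng.uniform(params.min_scale, params.max_scale),
        })
    return items

# Changing one exposed parameter regenerates the whole layout without any
# manual re-modelling; the rule itself stays a few lines long.
layout_a = generate(ScatterParams(count=50, seed=7))
layout_b = generate(ScatterParams(count=200, seed=7))   # denser variant
```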

2.1. New workflow with PCG

With PCG tools to create and manage graphic assets, the Digital Twin production workflow could be updated as shown in Figure 4:

Figure 4. Method with PCG to create Digital Twin

All references and graphic assets created in legacy ways are reused as resources to feed the PCG tools, which work as a Data Hub in the new workflow. After rules or algorithms are defined properly with these graphic assets, a live link is built up to connect this Data Hub with a Development Hub for data synchronization. Developers can then spend more time in this Development Hub, adding more real-time interactivity for end users while modifying graphic assets via parameters that are attached to them and exposed by the Data Hub for fast iterations. They can deploy this Digital Twin on end devices afterward.

Moreover, those parameters from Data Hub could be exposed to end users via coding in Development Hub, which enable users’ direct modification to graphic assets. PCG re-generates output automatically based on these new inputs according to rules and algorithms predefined and embedded in Data Hub, then run behaviors pre-setup in Development Hub, e.g., multi-system simulation. Finally, a new result is demonstrated visually.
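
As a rough sketch of this live link (an assumption about how it could be wired, not a description of any specific tool’s API), the Data Hub can be modelled as an object that owns the rules and exposed parameters, while the Development Hub only writes parameter values and receives regenerated assets through a callback:

```python
# Hypothetical sketch of the Data Hub / Development Hub live link described
# above. Class and method names are illustrative, not an actual tool API.
from typing import Callable

class DataHub:
    def __init__(self, rule: Callable[[dict], object], defaults: dict):
        self._rule = rule                 # pre-defined generation rule
        self._params = dict(defaults)     # exposed, user-editable parameters
        self._listeners: list[Callable[[object], None]] = []

    def expose(self) -> dict:
        """Parameters the Development Hub (or an end user) may tweak."""
        return dict(self._params)

    def on_regenerate(self, callback: Callable[[object], None]) -> None:
        self._listeners.append(callback)

    def set_param(self, name: str, value) -> None:
        """Any change re-runs the rule and pushes the result downstream."""
        self._params[name] = value
        asset = self._rule(self._params)
        for notify in self._listeners:
            notify(asset)

# Development Hub side: subscribe once, then iterate by changing parameters.
hub = DataHub(rule=lambda p: f"building with {p['floors']} floors",
              defaults={"floors": 10})
hub.on_regenerate(lambda asset: print("updated scene with:", asset))
hub.set_param("floors", 12)   # prints: updated scene with: building with 12 floors
```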

3. A trial project developed with PCG

To test this new method, a trial project is designed and executed. The campus map of the University of Hong Kong (Estates Office, Citation2023) is used as its raw reference, and the final output expected is a simplified 3D scene generated via PCG tools.

The aim of this project is to verify whether the procedure suggested above is:

  1. with both automation and manual input enabled;

  2. with assets’ parameters modularized and reconfigurable;

  3. compatible with current tools and assets;

In this project, three different levels of PCG are tested:

  1. Completely procedural for mountains, with the user’s mouse/tablet input as paint strokes which define the density and scale of the mountains located behind the campus;

  2. Semi-procedural for roads, ground and buildings, with rasterized custom floor-plan images to define their shapes;

  3. Procedural-manual for trees, with a rasterized floorplan image to define their area and a prefabricated 3D model of a tree to instantiate and place randomly inside this area.

For the software and hardware applied in this trial project, please refer to Table 1 below:

Table 1. Software applied with version info and computer hardware specification

The entire production procedure is summarized in Figure 5.

Figure 5. Trial project production workflow

The original campus map is split into four layers for the generation of four types of graphic assets, each processed with a different PCG technique in the Data Hub.

The first layer is for roads and ground. The 3D model of the roads is extruded based on the non-transparent part of a rasterized image of 1024 × 1024 pixels, while the ground is built upon its transparent part. With this image as a custom manual input, PCG runs the following tasks autonomously: the creation of 3D meshes, their UV unwrapping, and the assignment of different materials to the various meshes, such as roads, curbs, and ground.
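
A simplified Python sketch of this alpha-mask rule is shown below. It only illustrates the idea under assumed conventions (one extruded cell per opaque pixel); the actual PCG tool additionally merges faces, builds curbs and unwraps UVs.

```python
# Illustrative sketch (assumed logic, not the authors' exact tool) of the
# first layer: an RGBA image is split by its alpha channel, non-transparent
# pixels drive the road mesh and transparent pixels drive the ground mesh.
import numpy as np

def build_road_and_ground(rgba: np.ndarray, cell_size: float = 1.0,
                          road_height: float = 0.1):
    """rgba: (H, W, 4) uint8 array. Returns two lists of quads, each quad a
    tuple (x, y, z, size) in world units."""
    alpha = rgba[..., 3]
    road_cells, ground_cells = [], []
    for row, col in np.ndindex(alpha.shape):
        quad = (col * cell_size, row * cell_size,
                road_height if alpha[row, col] > 0 else 0.0, cell_size)
        (road_cells if alpha[row, col] > 0 else ground_cells).append(quad)
    return road_cells, ground_cells

# A tiny 4 x 4 stand-in for the 1024 x 1024 campus floor-plan image:
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, :, 3] = 255                      # a horizontal "road" strip
roads, ground = build_road_and_ground(mask)
print(len(roads), "road quads,", len(ground), "ground quads")  # 8 and 8
```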

The second layer is for green zones, where the trees’ 3D models are instantiated. Like the first layer, green zones are created based on the non-transparent pixels of a rasterized image drawn according to the campus map. Randomized points are generated on top of these meshes as location references for the tree’s 3D model, which is prepared beforehand with the traditional 3D content creation tools included in the conventional production workflow.
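
The scattering rule for this layer could look like the sketch below, where a global seed and a forced total count (the two parameters exposed later in the Development Hub) fully determine the placements; the names here are hypothetical.

```python
# Hedged sketch of the second layer: scatter pre-made tree prefabs only on
# the non-transparent pixels of the green-zone image.
import random
import numpy as np

def scatter_trees(green_rgba: np.ndarray, total_count: int, global_seed: int,
                  cell_size: float = 1.0) -> list[tuple[float, float, float]]:
    """Returns (x, y, yaw) placements for a prefab tree model."""
    rng = random.Random(global_seed)
    rows, cols = np.nonzero(green_rgba[..., 3] > 0)   # candidate cells
    candidates = list(zip(rows.tolist(), cols.tolist()))
    placements = []
    for _ in range(total_count):
        row, col = rng.choice(candidates)
        placements.append((
            (col + rng.random()) * cell_size,   # jitter inside the cell
            (row + rng.random()) * cell_size,
            rng.uniform(0.0, 360.0),            # random yaw for variety
        ))
    return placements

# Same seed reproduces the same forest; raising total_count densifies it.
zone = np.zeros((8, 8, 4), dtype=np.uint8)
zone[2:6, 2:6, 3] = 255
trees = scatter_trees(zone, total_count=20, global_seed=3)
```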

The third layer is for buildings. A low-poly 3D model of a generic building façade is created via the PCG tool, and its layout reference points are generated according to each building’s shape, which is defined by the input floorplan image, together with the number of floors of each building, which is a user-input parameter.
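
The underlying rule can be sketched as stacking identical floor modules and always placing the roof module on top, so that the roof relocates automatically whenever the exposed floor-count parameter changes (as used in the renovation scenario below). The module names and floor height are assumptions for illustration.

```python
# Assumed sketch of the third layer's rule: a building is a stack of
# identical facade floor modules plus a roof module that always sits on top.
from dataclasses import dataclass

FLOOR_HEIGHT = 3.2   # metres per storey, an assumed default

@dataclass
class Module:
    kind: str         # "floor" or "roof"
    elevation: float  # world-space height of the module's base

def build_tower(num_floors: int, floor_height: float = FLOOR_HEIGHT) -> list[Module]:
    modules = [Module("floor", i * floor_height) for i in range(num_floors)]
    modules.append(Module("roof", num_floors * floor_height))  # roof follows the floors
    return modules

before = build_tower(10)   # roof base at 32.0 m
after = build_tower(12)    # same rule, roof automatically moves to 38.4 m
```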

The fourth layer is for mountains. Different from the three layers above, the original campus map carries little information about the terrain shape, so there is plenty of space for users’ free-style input via mouse or tablet. Information such as default values for the height, scale and density of each mountain is pre-defined, with specific ranges to keep the user’s customization reasonable.
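
A hedged sketch of this stroke-driven rule follows: the paint stroke is treated as a list of 2D points, and mountains are scattered along it with height, scale and density clamped to pre-defined ranges. The ranges and parameter names are illustrative assumptions.

```python
# Illustrative sketch of the fourth layer: mountains scattered along a
# user-painted stroke, with parameters clamped to keep free-hand input sane.
import random

HEIGHT_RANGE = (20.0, 120.0)    # metres
SCALE_RANGE = (0.5, 2.0)        # multiplier on a base mountain footprint
DENSITY_RANGE = (0.1, 2.0)      # placement probability per stroke point

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def mountains_along_stroke(stroke: list[tuple[float, float]],
                           height: float, scale: float, density: float,
                           seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    density = clamp(density, *DENSITY_RANGE)
    peaks = []
    for x, y in stroke:
        if rng.random() < density:            # densities >= 1 always place one
            peaks.append({
                "x": x, "y": y,
                "height": clamp(height * rng.uniform(0.8, 1.2), *HEIGHT_RANGE),
                "scale": clamp(scale * rng.uniform(0.8, 1.2), *SCALE_RANGE),
            })
    return peaks

ridge = mountains_along_stroke([(i * 10.0, 0.0) for i in range(30)],
                               height=80.0, scale=1.5, density=0.7)
```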

All four layers of assets are merged in the Development Hub, with multiple parameters exposed by the Data Hub for developers’ ad-hoc modification without switching back to the tools used in the Data Hub.

For the first layer, the roads-and-ground model, UV tiling parameters for roads, curbs and ground are exposed for quick adjustment of seamless texture mapping on these meshes.

For the second layer, the green zones with trees, because tree locations are generated randomly, parameters such as forced total count and global seed are exposed for better control over the randomized results.

For the third layer with building models, parameters such as the number of floors and the façade materials of each building are exposed due to practical concerns. Because this campus was built on a slope and has been in operation for over 100 years, many buildings displayed in this trial are old enough for renovation projects, which would probably change the number of their floors and the materials applied on their façades. The rooftop mesh of each building is relocated on top of the floors automatically after any update of the floor number, thanks to the pre-defined rules in the Data Hub.

For the fourth layer of mountains, which invites developers to paint the base of procedurally generated terrains, many parameters are exposed for full user manipulation, such as stroke scale, scattering density along a paint curve, and custom terrain textures applied to different areas (e.g., base, rock, and grass) with their corresponding UV tiling controls.

All parameters mentioned above could be exposed to end users via further development, as briefly described at the end of the previous section. After that, end users could tweak these parameters to directly influence the 3D meshes used in the Digital Twin, which enables a backward loop from reality to its corresponding cyber copy. If this feedback routine completes within an affordable duration, there will be a living model which can learn and even grow with its physical twin, as expected of a Digital Twin (Parris, Citation2020).

This downstream-upstream loop between Digital Twin developers and users can be explained with the third layer of this trial project, the one for buildings, as a detailed example. Suppose a building on campus needs to be renovated, and a project is planned and executed by the university Estates Office. If the construction plan is visualized via this Digital Twin, end users such as architects, engineers and construction workers could create, modify and update their designs, plans and progress frequently by changing the parameters exposed to them, such as the number of floors, the different materials to be applied to the façade, or even the construction design of the façade, since the 3D model of this component is also created in a procedural way. The appearance of this building will be re-calculated and rendered out on display devices every time an end user makes a modification to it.

This type of data synchronization may start from the design phase, last throughout the entire renovation project, and continue even after it ends. The latest Digital Twin model could be handed over to the university Estates Office and kept updated by the latter, to monitor the building’s daily usage continuously as part of its life-cycle management.

In this case, the Digital Twin plays a vivid role consistently across the entire production line of an architecture-construction project. Its openness to 3D models, its flexibility and its capability of rapid re-configuration via exposed parameters are brought out by PCG techniques.

4. Two validation cases

To examine this new method in more practical contexts, two validation case studies are planned and implemented.

4.1. Validation case 1

The first case is to re-create a Digital Twin Instance for the statue displayed in Figure 1, but with the new production procedure and PCG tools. The entire process is shown in Figure 6:

Figure 6. Development of a Digital Twin Instance with new production procedure

For the software packages applied in this case, please refer to Table 2:

Table 2. Validation case 1: software applied with version info

The first two steps are the same in both figures, as they concern the collection of information (the Resource stage). The main difference concentrates on the last two steps.

In Figure 1, the conventional production procedure is completely manual, restoring as many details as possible both geometrically and graphically. It took 33 working days to finish the entire work, and the final output is of photo-realistic quality.

Compared with the result demonstrated in Figure 1, the graphic quality of the final deliverable generated by the new method in Figure 6 is lower, but only 4–5 working days were spent on its re-creation, including the time spent on the development of custom PCG tools to re-topologize the scanned meshes and assign materials, which are also generated procedurally.

In this validation case, the proposed new method partially sacrificed graphic fidelity to gain a much shorter production duration. This implies that in practical Digital Twin projects, the new method could be applied to generic and/or numerous items which are viewed from a flexible distance, such as graphic assets for the environment and background.

4.2. Validation case 2

Case 2 is about the creation of a Digital Twin Instance of a twisted camphor tree located on the main campus of the University of Hong Kong. Different from stable artifacts such as statues and buildings, plants are more dynamic, with more individualized features such as the shapes of their trunks, branches, twigs and surrounding vines. These randomized characteristics are obstacles for the creation of trees with legacy methods, but leave room for the new method with PCG tools to perform.

The production procedure of a Digital Twin Instance for this special tree is summarized in Figure 7.

Figure 7. New production procedure with PCG tools for a digital tree

For the tools and platforms applied in this validation case, please refer to Table 3.

Table 3. Validation case 2: software applied with version info

In this case, the resources are photos shot around the camphor tree. All images are sorted based on their distance to the target: close-ups of bark and leaves are categorized as texturing references and sent to a 2D PCG tool for the generation of seamless textures, while wide shots and medium shots are labeled as overall references for the generation of the tree’s 3D model in a 3D PCG tool, because they provide more detail on the shape of the split trunk and the big branches.

In the final output, both of them are combined in the Development Hub with a wind zone which interacts with branches and leaves to generate subtle animation of these tiny details.

In the development of this case, PCG tools showed their advantage in processing a great number of randomized items at different levels. Due to the self-similarity in the shapes of plants along their growth (Prusinkiewicz & Lindenmayer, Citation2004), a tree can be viewed as a hierarchical structure of several mathematical functions which iterate conditionally. The production process of a virtual tree with PCG tools in the new method is quite similar to how nature works, and consequently generated a satisfying result within 5 working days, including the time spent on resource collection.
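
The self-similarity idea can be illustrated with a classic L-system in the spirit of Prusinkiewicz and Lindenmayer; the rule set below is a generic textbook example, not the specific grammar used in this validation case.

```python
# A compact L-system sketch: a tree as a small set of rewriting rules
# iterated conditionally, rather than a hand-modelled mesh.
def expand(axiom: str, rules: dict[str, str], iterations: int) -> str:
    """Rewrite every symbol that has a rule; leave other symbols untouched."""
    current = axiom
    for _ in range(iterations):
        current = "".join(rules.get(symbol, symbol) for symbol in current)
    return current

# F = draw a branch segment, + / - = turn, [ ] = push/pop the branching state.
branching_rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
skeleton = expand("F", branching_rules, iterations=3)
print(len(skeleton))   # the symbolic skeleton grows rapidly with each pass
# A turtle-graphics or 3D interpreter would then turn this string into trunk,
# branch and twig geometry, with leaf cards instanced at the tips.
```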

5. Discussions

Based on what has been achieved in the trial project and the two validation cases, a comparison between the conventional method and the proposed method is shown in Table 4.

Table 4. Comparison of current solution and solution with PCG

PCG’s benefit, especially its improvement of efficiency, is evident in the development process of all graphic assets demonstrated in the earlier sections of this paper. Through various modifications with the exposed parameters, the proposed approach can iterate the creation process of graphic assets repeatedly until an ideal output is generated, without touching any tool or software applied in previous production steps. This ease and speed can be passed on from developers to end users even after the development of a specific Digital Twin ends, which addresses the issues caused by conventional production methods.

For further research, there is a phenomenon which appeared in the later half of both the trial project and the validation cases, and might become problematic as project complexity grows. It is found that it takes longer to generate a new asset as the number of exposed parameters rises or the complexity of the rules increases. To ensure real-time performance, PCG parameters need to be exposed prudently. More suggestions on how to optimize generation rules are necessary.
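
One possible (untested) mitigation is to cache generated results keyed by the exposed parameter values, so that repeated tweaks which land on previously seen settings do not pay the full regeneration cost again; the sketch below uses Python’s standard memoization decorator for illustration.

```python
# Assumed mitigation sketch, not evaluated in the paper: memoize expensive
# procedural rebuilds keyed by the exposed parameter values.
from functools import lru_cache

@lru_cache(maxsize=128)
def generate_asset(floors: int, facade_material: str, seed: int) -> str:
    # Stand-in for an expensive procedural rebuild of one building.
    return f"building(floors={floors}, facade={facade_material}, seed={seed})"

generate_asset(12, "brick", 7)   # computed
generate_asset(12, "brick", 7)   # returned from cache, no regeneration
```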

Another potential research topic is to compare this PCG-based production procedure with other state-of-the-art methods via quantitative performance indicators such as computation time, to work out a more detailed guideline on the selection of technical solutions for various Digital Twin projects. Different research teams have diverse expectations of how fast and dynamic their ideal Digital Twin should be. Some might prefer instant synchronization with rough 3D models, while others may choose fine graphics with a lower update frequency or a longer duration to render the latest 3D models accurately. A comparison among different production methods would provide a clear roadmap for all potential developers of Digital Twins in the future.

As a solution featuring efficiency, the new method with PCG is capable of updating 3D graphics quickly and dynamically. This characteristic satisfies potential needs for sustainability and resilience, meeting Industry 5.0 goals (Müller, Julian; European Commission, Directorate-General for Research and Innovation, Citation2020). More practical applications could be developed based on this new procedure.

Previous research on PCG’s applications also mentioned several potential issues.

First of all, more control over its random output is required, because repetitive results might be generated and reduce the replay-ability of games (Latif et al., Citation2022; Lipinski et al., Citation2019; Mustafa et al., Citation2018).

Secondly, an automatic correctness check is essential to ensure consistency between virtuality and reality (Latif et al., Citation2022), particularly for PCG tools applied in development based on trustworthy references from reality, such as terrains used in urban planning and simulation projects.

Last but not least, how to utilize free open resources with free PCG tools to develop Digital Twins is crucial for researchers from academia (Azfar et al., Citation2022). Some trials have been initiated, such as Simon Verstraete’s demonstration of city building with layered data from OpenStreetMap (OSM) (Verstraete, Citation2020), and the procedure suggested by a research team from the Department of Civil Engineering, University of Texas at El Paso, for the creation of a digitalized campus which enabled vehicle autopiloting and traffic simulation with Google Maps and OSM (Azfar et al., Citation2022). For more academic work related to Digital Twins in the future, more free tools, open resources, and shared procedures like these are indispensable and deserve more investigation.

6. Conclusion

The contributions of this paper are summarized as follows.

Firstly, it introduced the Procedural Content Generation (PCG) concept and related tools as a compatible supplement to the conventional production procedure for 3D graphic assets in Digital Twins. It could change the current workflow from a linear process into an iterative loop. With this updated procedure, live synchronization and even growth of the Digital Twin might be achieved, just as its physical partner does in reality.

Secondly, this new procedure is examined in the development of multiple Digital Twins for verification. Both advantages expected and potential issues have been explained with practical suggestions.

For further studies, two directions deserve more research: the generation of more dynamic Digital Twins with PCG tools, such as animals and humanoids, and the optimization of PCG algorithms for better performance.

For the first topic, the creation of dynamic 3D assets, especially humanoids, the randomization of behaviors and interactions among these virtual characters might be massive and computationally heavy, and thus difficult to synchronize. This emphasizes the importance of the second topic, the optimization of PCG algorithms and even of the entire Digital Twin system which applies PCG techniques. More theoretical investigation and implementation of practical cases are necessary.

Acknowledgments

The authors would like to thank the Environment and Conservation Fund (ECF) (128/2021), Public Policy Research Funding (2022.A8.129.22C), ITF project (PRP/068/20LI), RGC Research Impact Fund (R7036-22) and RGC Theme-based Research Scheme (T32-707-22-N). Special acknowledgement is given to Chun Wo Development Holdings Limited.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Yaqi Dai

Yaqi Dai is a PhD student in The Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong. Before starting her PhD studies, she was a 3D technical artist who had been working on multiple projects developed with Virtual Reality, Augmented Reality and Mixed Reality technologies. Her current research interest is also firmly related to these project experiences, about real-time rendering of computer graphics based on modern game engines such as Unity and Unreal Engine.

Ray Y. Zhong

Ray Y. Zhong is an assistant professor in The Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong. He was a lecturer in The Department of Mechanical Engineering, University of Auckland, New Zealand, from 2016-2019. His research interests include Internet of Things (IoT)-enabled manufacturing, Big Data in manufacturing & SCM and data-driven APS. He has published over 160 papers in international journals and conferences. Ray was ranked by Clarivate Analytics in the top 1% worldwide by citations in 2020 and 2021. He is a member of HKIE, ASME (USA), IET (UK), IEEE (USA) and LSCM HK.

Henry Y.K. Lau

Henry Y.K. Lau is currently the Head of Cybernetics and Lead Technologist at RACE, UK Atomic Energy Authority. He is an honorary associate professor, and was the Associate Dean (Innovation) in Engineering and Warden of University Hall, at the University of Hong Kong. He graduated from the University of Oxford with a BA in Engineering Science and DPhil in Robotics. He was a Croucher Foundation Fellow working in the Oxford Robotics Research Group. He is also a college lecturer at Brasenose College, Oxford.

References