Research Article

Estimating residential buildings’ energy usage utilising a combination of Teaching–Learning–Based Optimization (TLBO) method with conventional prediction techniques

Article: 2276347 | Received 07 Aug 2023, Accepted 23 Oct 2023, Published online: 31 Oct 2023

Abstract

Improving energy efficiency in residential and non-residential buildings is among the most significant motivations for estimating energy consumption and cooling load. A structure's characteristics must be considered when estimating its heating and cooling demand. To design and develop energy-efficient buildings, it is helpful to study the characteristics of related structures, such as the kinds of cooling and heating systems needed to ensure suitable interior air quality. Although the envelope of large buildings accounts for an important share of building energy consumption and demand, the assessment of its cooling load conditions is not yet comprehensively understood. In the present paper, a new conceptual system is developed to predict cooling load in the residential building sector. The paper also briefly describes the major models of the developed system to maintain continuity and concentrate on the cooling-load prediction model. To predict cooling load, the authors modelled an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), each in conjunction with teaching-learning-based optimization (TLBO). This article aims to illustrate how artificial intelligence (AI) approaches play an essential role in addressing this need and help estimate the optimal design parameters for various stations. The coefficient of multiple determination (R2) is 0.96446 and 0.97585 for TLBO-MLP and TLBO-ANFIS in the training stage, and 0.95855 and 0.9721 in the testing stage on an unseen dataset, respectively, which is acceptable. The corresponding RMSE values for TLBO-MLP and TLBO-ANFIS are 0.0685 and 0.11176 in training and 0.07074 and 0.12035 in testing, respectively.
The low RMSE and high R2 values indicate the favourable accuracy of the TLBO-MLP technique. Given the high R2 (97%) and low RMSE, TLBO-MLP can predict residential buildings' cooling load.

1. Introduction

Across the globe, which continues to experience industrial expansion and development, energy is one of the most crucial issues (Tao, Aldlemy, et al., Citation2023). In this regard, research into energy sustainability, low-energy structures, and building efficiency has expanded in recent years, notably in the wake of the energy crisis of the 1970s (Chaiyapinunt et al., Citation2005; Dalamagkidis et al., Citation2007; Farhanieh & Sattari, Citation2006; Marks, Citation1997; Ochoa & Capeluto, Citation2009; Pedersen et al., Citation2008; Reppel & Edmonds, Citation1998; Sayigh & Marafia, Citation1998; Singh et al., Citation2009; Synnefa et al., Citation2007; Tzikopoulos et al., Citation2005; Yang et al., Citation2000). Efficient use of energy resources is therefore a crucial problem, given the rising energy demand brought on by developing technology and growing human requirements (Deng et al., Citation2023). In Turkey, a significant component of total energy consumption, about 40%, comes from buildings, particularly residential ones (Aksoy & Inalli, Citation2006). Achieving energy efficiency in homes is therefore a pressing need (Li et al., Citation2023). Otherwise, utilities such as heating, cooling, and lighting systems, which provide a significant portion of the comfort conditions of interior areas (Khedher et al., Citation2023), will consume excessive energy, contributing to global warming, fuel consumption, air pollution, and a significant burden on the national economy and consumers.

Traditionally, two main approaches have been used to gauge a building's thermal comfort (Tao, Alawi, et al., Citation2023): adaptive models derived from field observations, and the thermal balance model backed by laboratory investigations (Yao et al., Citation2009). Artificial intelligence approaches are often used in conjunction with these strategies. The two primary categories of building cooling-load (CL) forecasting technologies are physical simulation and data-driven methods. In the physical simulation approach, software such as TRNSYS (Al-Saadi & Zhai, Citation2015) and ENERGY PLUS (Anđelković et al., Citation2016) is primarily used to anticipate the cooling demand. However, the data-driven approach is better suited for CL forecasting in buildings, since using the software mentioned above requires a certain level of competence from the operator (Nazari et al., Citation2023). In older structures, weather conditions, tenant activities, and the intricate interactions among the building's systems strongly affect calculation and simulation software (Qiang et al., Citation2015). The data-driven strategy is instead based on the building's previous operational data (Ansari Manesh et al., Citation2023). Most recent research on cooling-load forecasting builds intricate nonlinear correlations between input parameters and cooling loads using artificial neural networks (Deb et al., Citation2016; Shirvani et al., Citation2023) and support vector machines (Koschwitz et al., Citation2018).

Utilising a machine learning strategy, Luo et al. (Citation2020) created a multi-objective method for multiple energy uses in new buildings. Three models, namely support vector machines, long short-term memory neural networks, and artificial neural networks, were used to forecast the building's energy usage. The forecast outcomes showed that the ANN-based methods for building energy consumption had the lowest mean absolute percentage errors (Adnan et al., Citation2023). Li and Yao (Citation2020) used artificial neural networks, support vector machines, and linear regression to create five machine-learning models for load forecasting. The findings demonstrated that the models' predicted cooling load had a normalised mean absolute error and a normalised mean squared error below 4%. Kim et al. (Citation2020) constructed four backpropagation neural networks to examine how input variables such as building occupancy and environmental conditions affect building energy usage. When the performance of the four models was compared, the ANN trained with the Levenberg-Marquardt method was found to produce the most accurate predictions. As indicated above, researchers have built cooling-load prediction models based on support vector machines and artificial neural networks, retaining the input parameters most strongly linked to the target after correlation analysis. Artificial neural networks, however, struggle with local minima and slow convergence speed in real-world applications (Shi, Citation2023), and the downside of support vector machines in developing a CL forecast model is that they process data slowly. Researchers have therefore optimised the model structure to address these issues and increase prediction accuracy (Zhu et al., Citation2023).

Huang and Li (Citation2021) employed the ant colony technique to enhance a neural network and create a load prediction model; the modified model's mean absolute percentage error was decreased by 73.28%. Moayedi et al. (Citation2020) improved the artificial neural network using the elephant herding optimisation approach; the outcomes demonstrated that the EHO-MLP method can replace the conventional model to forecast building cooling demand. Zhou et al. (Citation2020) optimised the neural network using the particle swarm and artificial bee colony techniques, respectively. The accuracy of the cooling-load prediction models was estimated using the coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE). The results demonstrated that both the particle swarm and artificial bee colony algorithms can increase the accuracy of the cooling-load forecast model.

Additionally, the PSO method outperformed the ABC algorithm regarding the prediction model's performance. The accuracy of a load prediction model may also be somewhat increased by calibration techniques used to adjust it. Qiang et al. (Citation2015) used an enhanced multivariate linear regression model to forecast the typical daily cooling demand of an office building. To calibrate the initial load forecast against a reference day, Sun et al. (Citation2013) identified the most closely related meteorological data and used its hourly projections to construct a simple online cooling-load prediction model. Finally, using the errors from the previous two forecasts, the calibrated load prediction model's accuracy was increased.

ANFIS is often utilised in several technical fields in the literature. Using a model with a mean absolute error below 2.2%, Mellit et al. (Citation2009) predicted daily solar radiation and the mean monthly clearness index in remote places. Subasi et al. (Citation2009) introduced a novel ANFIS-based method for anticipating the crucial factor that leads to concrete cracking in the early stages of cement hydration, with satisfactory results. Alasha’ary et al. (Citation2009) illustrated the accuracy of ANFIS by using a neuro-fuzzy technique to forecast the temperature of four distinct rooms built with the various construction components used in Australian residential structures. Ying and Pan (Citation2008) used ANFIS to anticipate regional electricity demand and compared the outcomes with those from other approaches, finding that ANFIS produced more accurate results. With a mean absolute error of 0.03%, Singh et al. (Citation2007) determined that ANFIS was the best prediction approach among several neural networks with varied training functions. Das and Kishor (Citation2009) created an ANFIS model to forecast the heat transfer coefficient of distilled water boiling in a pool. Ayata et al. (Citation2007) used simulated data from a package programme to forecast indoor maximum and average air velocities with an ANFIS model. Jassar et al. (Citation2009) created an inferential sensor model employing ANFIS modelling to estimate the average air temperature in space-heating systems. The preceding cases make clear that a variety of artificial intelligence (AI) models have been used to make predictions regarding the energy performance of buildings (EPB). Although hybrid and fuzzy-logic-based models are still in development, how they might be applied to simulate the heating loads (HLs) and cooling loads (CLs) of housing constructions remains unknown.
Furthermore, it is still difficult to find comprehensive research comparing current soft computing techniques, and the statistical analysis of the data generated by the models has not yet undergone a thorough evaluation.

Economically and environmentally, creating a realistic method for thermal load modelling is useful. In light of the above, this research aims to provide architects and design engineers with information regarding the cooling loads of energy-efficient buildings, to predict the cooling and heating load using metaheuristic algorithms, and to determine the accuracy of these algorithms. Metaheuristic-optimised models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), each in conjunction with teaching-learning-based optimization (TLBO), are assessed to discover whether they can aid in determining the CL. The best approach is identified after the methods are compared. Building cooling energy was calculated for the training and test datasets using a finite-difference, transient-state, one-dimensional heat conduction model. This study provides a customised technique based on the teaching-learning-based optimization (TLBO) learning paradigm to forecast the cooling load. For this study, the characteristics of 768 buildings were gathered. The data are then trained using TLBO-ANFIS and TLBO-ANN. Three performance criteria are used to evaluate the results of these methods, and they demonstrate how well this approach predicts the cooling demand of residential structures.

The remaining portions of the article are structured as follows: Section 2 discusses the dataset and case study. Section 3 describes the strategies and procedures utilised. Section 4 presents the simulation and numerical outcomes, and the final section concludes the work.

2. Established database

Tsanas and Xifara (Citation2012) produced the dataset utilised in this study. The factors that distinguish one structure from another include the glazing area, glazing area distribution, and orientation. Eighteen prototype cubes with equivalent materials were used to replicate each structure. To ensure that the materials used for each of the 18 elements were equivalent across the various types of construction, the most recent and most popular components in the building industry were chosen. Four different glazing variants were employed in the design procedure (Figure 1), with glazing percentages of the floor area of 10%, 25%, and 40%, as well as a case without glazing.

Figure 1. The preparation of data with a graphical view.


Furthermore, it was assumed that the structures were located in Athens, Greece. The data consist of 768 specimens, each with eight characteristics (x1, x2, … , x8) that serve as decision factors and one output (y1), as given in Table 1 (Le et al., Citation2019; Tsanas & Xifara, Citation2012). This study uses these properties as decision variables to predict y1, the cooling demand. Even though the dataset was created through simulation, it is noteworthy that the suggested approaches also work with real-world datasets.

Table 1. Input and output data of the research.

Figure 2 shows the current database's bar chart and the normalised data range of the input variables. Bar charts are a common form of data visualisation for categorical data: each bar represents a specific category or group, and its length or height represents the quantity or value associated with that category. Bar charts are useful for comparing the values of different categories, making them an effective way to communicate trends or patterns in data, and they are widely used in scientific research, business, finance, and many other fields. According to this figure, overall height spans a wide range of data [−1, 1], and the smallest range [−4, 0.2] is related to wall area. Accordingly, roof area [−0.6, 1], glazing area [−0.4, 1], orientation [−0.6, 0.6], surface area [−0.4, 0.6], glazing area distribution [−0.4, 0.6], relative compactness [−0.7, 0.2], and wall area [−0.4, 2] have the highest to lowest ranges, respectively. Figure 3 demonstrates the histogram and variation of the input variables. Histograms display the distribution of a continuous variable and are commonly used in data analysis and statistics to visualise the frequency distribution of a dataset: each bar represents a range of values of the measured variable, its height represents the frequency or count of observations that fall within that range, and the bars are drawn adjacent to each other, with no gaps, to emphasise the continuity of the data. Histograms can be created using various software tools or programming languages, such as Excel, Python, R, or MATLAB. The target values (cooling load) are classified into three classes: class I ranges between 12.38 and 23.3, class II between 23.3 and 35.66, and class III between 35.67 and 48.04.
Figure 2(a) shows the variation of surface area, relative compactness, wall area, and roof area, and Figure 2(b) shows the overall height, orientation, glazing area, and glazing area distribution variables. Figure 4 shows the variation of the input variables two by two.

Figure 2. The normalised variables range with a graphical view. (a) Relative compactness, Surface area, Wall area, and Roof area; (b) Overall height, Orientation, Glazing area, and Glazing area distribution.


Figure 3. The variation of input variables.


Figure 4. The variation of the input variables two by two.


Figure 5 illustrates the Andrews plot of the input layers and output. The Andrews plot represents each multi-dimensional data point as a finite Fourier series of its coordinates. Points that are close in some metric have similar Fourier curves and therefore tend to cluster together in the Andrews plot, which makes it an informative graphical tool for clustering and other data-analytic tasks. The Andrews plot's weakness is that the coordinates assigned to the lowest frequencies dominate the curve and may give misleading impressions. Visualising multivariate data is a hard but interesting problem. Scatterplots let us see data in two or three dimensions, but visualising multivariate data in more than three dimensions is more difficult. Wegman and Shen (Citation1993) discuss various tools for multivariate visualisation; two of the most interesting are the Andrews plot, presented by Andrews (Citation1972), and the grand tour, presented by Asimov (Citation1985).

Figure 5. The input layers and output’s Andrews plot description.


3. Methodology

Modelling and forecasting tasks were completed using ANN and machine learning, which are effective data-mining tools (Haykin, Citation2009; Moradzadeh & Khaffafi, Citation2017). To anticipate the load/energy, a nonlinear mapping between the building characteristics and the building's CL was created in this study using MLP, ANFIS, and TLBO as three applications of these algorithms. Each of the suggested techniques is briefly discussed in the sections that follow.

3.1. K-Fold cross-validation

Cross-validation is an approach for estimating the performance of a classifier on new task examples. One repetition of cross-validation divides a data sample into two separate subsets: the classifier is trained on one subset (the training set), and the trained classifier's performance is then tested on the other subset (the testing set).

In k-fold cross-validation, the original sample is partitioned randomly into k subsets. Of the k subsets, a single subset is retained as the validation data to test the classifier, and the remaining k−1 subsets are used as training data. The cross-validation procedure is then iterated k times, with each of the k subsets used exactly once as the testing dataset. The k fold outcomes are averaged to produce a single performance estimate.

Cross-validation is the topic of various studies; three interesting and related outcomes are presented below:

  • Repeating the cross-validation iterations asymptotically converges to an accurate evaluation of classifier performance (Stone, Citation1977);

  • Ten-fold cross-validation is better than leave-one-out validation for model selection, and it is also better than other choices of k (Kohavi, Citation1995);

  • K-fold cross-validation tends to underestimate the classifier's performance (Kohavi, Citation1995).
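The k-fold procedure above can be sketched in a few lines of pure Python. The function names and the trivial training/scoring callbacks in the example are illustrative, not from the paper:

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, k, train_fn, score_fn):
    """Average the test-fold score over k train/test rotations."""
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i in range(k):
        test_idx = set(folds[i])
        # the remaining k-1 folds form the training data
        train = [(samples[j], labels[j]) for j in range(len(samples)) if j not in test_idx]
        test = [(samples[j], labels[j]) for j in folds[i]]
        model = train_fn(train)
        scores.append(score_fn(model, test))
    return sum(scores) / k
```

Each index appears in exactly one fold, so every specimen is used once for testing and k−1 times for training.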

3.2. Artificial intelligence methods

Multilayer Perceptron (MLP) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) are powerful computational techniques that have demonstrated state-of-the-art capabilities in solving complex optimisation problems. Here are some of their key capabilities:

  • Nonlinear modelling: MLPs and ANFIS can capture nonlinear relationships between input variables and output responses. This allows them to effectively model and optimise complex systems exhibiting nonlinear behaviour, often in real-world problems.

  • Universal approximation: MLPs can approximate any continuous function to arbitrary accuracy given a sufficient number of neurons and appropriate training. This property makes MLPs versatile and capable of modelling a wide range of complex optimisation problems.

  • Adaptive learning: MLPs and ANFIS can automatically adjust their internal parameters, such as weights and biases, through the learning process. This adaptive learning capability allows them to continuously refine their models to improve performance and optimise the objective function.

  • Parallel processing: MLPs and ANFIS can be trained and executed in parallel, taking advantage of modern parallel computing architectures and technologies. This enables efficient processing of large-scale optimisation problems and accelerates the optimisation process.

  • Robustness to noise and uncertainty: MLPs and ANFIS can handle noisy and uncertain data by learning from the available information and generalising patterns. They can effectively deal with incomplete or imperfect data, making them suitable for optimisation problems where the input data may have inherent uncertainties.

  • Multi-objective optimisation: MLPs and ANFIS can be extended to handle multi-objective optimisation problems, where multiple conflicting objectives must be simultaneously optimised. Various techniques, such as incorporating multiple outputs or incorporating evolutionary algorithms, can be used to tackle multi-objective optimisation tasks.

  • Feature extraction and reduction: MLPs and ANFIS can automatically extract relevant features from complex input data, reducing the problem's dimensionality. This feature extraction capability helps identify important patterns and reduce the optimisation task's computational complexity.

  • Online and real-time optimisation: MLPs and ANFIS can be trained and deployed in online or real-time optimisation scenarios, where the optimisation process continuously adapts to changing conditions and dynamic environments. This makes them suitable for applications that require adaptive and responsive optimisation.

It's important to note that applying MLPs and ANFIS in solving complex optimisation problems depends on the specific problem domain and the availability of appropriate training data. Proper model design, training, and validation procedures are critical to ensuring their effectiveness and accuracy in solving state-of-the-art optimisation problems.

3.2.1. Multilayer perceptron (MLP)

An artificial neural network (ANN) is a modelling solution for complicated systems in estimation problems in fields such as engineering, medicine, and finance (Moayedi & Jahed Armaghani, Citation2018; Nguyen et al., Citation2020; Shariati et al., Citation2021; Yan et al., Citation2019; Zandi et al., Citation2018; Zhao et al., Citation2020; Zhao et al., Citation2021). The ANN is a data-processing system that resembles the structure and function of the human brain (Wang et al., Citation2022). It is a densely connected multilayer structure comprising various neurons (Cui et al., Citation2022). Such networks can identify similarities, especially when presented with new input parameters after accurately predicting the proposed output pattern (Luo et al., Citation2022). The ANN can substitute for complicated statistical analysis approaches, e.g. multivariable regression, trigonometric, autocorrelation, and linear regression (Meng et al., Citation2022; Shariat et al., Citation2018).

Each neuron in a layer of the MLP is linked to every neuron in the layers above and below it (Dai et al., Citation2023). Figure 6 depicts the architecture of the MLP structure and highlights the nonlinear mapping between the input and output vectors (Moradzadeh & Pourhossein, Citation2019). Weights link the neurons, and a nonlinear transfer function produces the output signals (Seo & Eo, Citation2019): (1) Y = f(b + Σ_{i=1}^{N} w_i x_i) (1) where x and Y are the input and output signals, respectively, f is the nonlinear transfer function, and b and w are the bias and weight vectors. Given that the MLP learns from data, a database with known input and output vectors is needed to train the weight vector, which is then adjusted according to the output signals (Thimm & Fiesler, Citation1997).
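Eq. (1) can be traced directly in code. The following is a minimal forward-pass sketch; the sigmoid transfer function and the hand-set weights are illustrative assumptions, since the paper does not specify them:

```python
import math

def neuron_output(weights, bias, inputs):
    """Eq. (1): y = f(b + sum_i w_i * x_i), here with a sigmoid transfer function f."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def mlp_forward(layers, inputs):
    """Propagate inputs through the network; each layer is a list of (weights, bias) pairs."""
    signal = inputs
    for layer in layers:
        signal = [neuron_output(w, b, signal) for w, b in layer]
    return signal
```

With zero weights and bias the weighted sum is 0, so each sigmoid neuron outputs 0.5, which is a quick sanity check on the implementation.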

Figure 6. Structure of multilayer perceptron (MLP).


3.2.2. The adaptive neuro-fuzzy inference system (ANFIS)

Jang developed ANFIS, which combines the most beneficial aspects of fuzzy systems and neural networks (Jang, Citation1992). The ANFIS structure is made up of if-then rules, fuzzy input-output data pairs, and neural network learning algorithms. ANFIS is an approach for simulating complicated nonlinear mappings using neural network learning and fuzzy inference techniques (Inan et al., Citation2007). Because it merges fuzzy logic and ANN methods, the ANFIS system can function in uncertain, noisy, and unreliable environments (Liu & Ling, Citation2003). The ANFIS method tunes the membership functions and their associated parameters to fit the target databases using the neural network training process (Wu et al., Citation2009). Because it can use expert judgment, it generates more exact findings than the mean-square-error criterion alone. The learning algorithm that ANFIS uses is a hybrid one, merging the least-squares method with the back-propagation algorithm. A model with two inputs and one output is considered to simplify the procedure. The ANFIS structure is created using five layers, whose roles are summarised in the following list:

  1. Layer 1: The nodes of this layer output the membership values obtained by applying the employed membership functions to the input signals (Rashidi et al., Citation2022). To simplify the system, x and y are the inputs, A and B are the linguistic labels, and μAi and μBi are the membership functions: (2) O_i^1 = μ_Ai(x) for i = 1, 2 and O_i^1 = μ_B(i−2)(y) for i = 3, 4 (2) Most often, the membership functions μAi and μBi are assumed to have bell-shaped distributions with maximum and minimum values of 1 and 0, respectively: (3) μ(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i)) (3) where a_i, b_i, and c_i are the premise parameters, with c_i the centre of the bell-shaped membership function (Çaydaş et al., Citation2009).

  2. Layer 2: Each rule's firing strength is determined in this layer by mathematical multiplication: (4) O_i^2 = ω_i = μ_Ai(x) · μ_Bi(y) for i = 1, 2 (4)

  3. Layer 3: The normalisation of the firing strengths is carried out in this layer. Each node computes the ratio of the ith rule's firing strength to the sum of all rules' firing strengths: (5) O_i^3 = ω̄_i = ω_i / (ω_1 + ω_2) for i = 1, 2 (5)

  4. Layer 4: Each node in this layer outputs the normalised firing strength multiplied by a first-order polynomial. The outputs are expressed as in Eq. (6), where f1 and f2 are the consequents of the if-then rules stated below.

    • Rule 1: If x is A1 and y is B1, then f1 = p1·x + q1·y + r1

    • Rule 2: If x is A2 and y is B2, then f2 = p2·x + q2·y + r2 (6) O_i^4 = ω̄_i f_i = ω̄_i (p_i x + q_i y + r_i) (6) where p_i, q_i, and r_i are the linear consequent parameters (Übeyli, Citation2008).

  5. Layer 5: This node adds up all of the signals from the fourth layer to determine the ANFIS's total output: (7) O^5 = Σ_i ω̄_i f_i = (Σ_i ω_i f_i) / (Σ_i ω_i) (7)

The ANFIS’s output is stated as follows: (8) f_out = ω̄_1 f_1 + ω̄_2 f_2 = (ω_1/(ω_1 + ω_2)) f_1 + (ω_2/(ω_1 + ω_2)) f_2 = (ω̄_1 x)p_1 + (ω̄_1 y)q_1 + ω̄_1 r_1 + (ω̄_2 x)p_2 + (ω̄_2 y)q_2 + ω̄_2 r_2 (8)
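The five layers above can be traced in a short script. The following is a minimal sketch of a two-input, two-rule Sugeno ANFIS forward pass using the bell membership function of Eq. (3); all parameter values in the test usage are hypothetical, not fitted values from the paper:

```python
def bell_mf(x, a, b, c):
    """Generalised bell membership function, Eq. (3)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """Forward pass of a two-input Sugeno ANFIS, Eqs. (2)-(8).

    premise: per rule, the tuple (aA, bA, cA, aB, bB, cB) for the two inputs.
    consequent: per rule, the linear parameters (p, q, r).
    """
    # Layers 1-2: membership degrees combined by the product T-norm -> firing strengths
    w = [bell_mf(x, pa, pb, pc) * bell_mf(y, qa, qb, qc)
         for (pa, pb, pc, qa, qb, qc) in premise]
    # Layer 3: normalised firing strengths
    total = sum(w)
    wbar = [wi / total for wi in w]
    # Layer 4: first-order rule consequents f_i = p*x + q*y + r
    f = [p * x + q * y + r for (p, q, r) in consequent]
    # Layer 5: weighted sum gives the overall output f_out
    return sum(wb * fi for wb, fi in zip(wbar, f))
```

If both rules share the same constant consequent, the normalised weights sum to one and the output equals that constant, which is a convenient correctness check.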

3.2.3. ANN and ANFIS parameter selection

The selection of parameters in Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) typically involves a process known as training or tuning. Overall, parameter selection involves trial and error, experimentation, and a good understanding of the problem domain; automated approaches such as hyperparameter optimisation can also help find optimal parameter values. Because the choice of parameters can significantly affect the model's performance, investing time and effort in this phase of model development is crucial. Here is an overview of how parameters are selected for each of these models:

  • Artificial Neural Networks (ANNs): (i) Architecture Design: Initially, decide on the architecture of the neural network, including the number of layers, the number of neurons in each layer, and the activation function to be used; this is often based on prior knowledge, domain expertise, or experimentation. (ii) Initialisation: The initial values of the weights and biases in the neural network need to be set; common methods include random initialisation or Xavier/Glorot initialisation. (iii) Training Algorithm: Select an optimisation algorithm like gradient descent, stochastic gradient descent (SGD), Adam, or others to update the weights and biases during training. (iv) Loss Function: Choose an appropriate loss function that quantifies the difference between the predicted outputs and target values; common loss functions include Mean Squared Error (MSE) for regression tasks and Cross-Entropy for classification tasks. (v) Hyperparameter Tuning: Parameters like the learning rate, batch size, and number of training epochs need to be tuned through grid search, random search, or Bayesian optimisation techniques to optimise the model's performance. (vi) Regularisation: Techniques like L1 or L2 regularisation can be applied to prevent overfitting; the regularisation strength (lambda) is another parameter to be chosen. (vii) Validation: A validation dataset is used to monitor the model's performance during training and to make decisions about early stopping. (viii) Cross-Validation: For robust parameter selection and model evaluation, k-fold cross-validation can be employed.

  • Adaptive Neuro-Fuzzy Inference Systems (ANFIS), noting that ANFIS is a hybrid model combining fuzzy logic and neural networks: (i) Fuzzy Rule Base: Define the fuzzy rule base, including the number of rules, the linguistic terms, and their associated membership functions; this often requires domain knowledge or expert input. (ii) Initialisation: Initialise the parameters of the membership functions and rule antecedents; commonly used methods include grid partitioning or clustering the data. (iii) Training Algorithm: ANFIS typically employs a hybrid learning algorithm that combines gradient-based methods with recursive least squares (RLS) or similar techniques to update the parameters. (iv) Loss Function: As with ANNs, choose an appropriate loss function that quantifies the error between the ANFIS output and the target values. (v) Hyperparameter Tuning: Parameters related to the hybrid learning algorithm, such as learning rates or forgetting factors, may need to be tuned. (vi) Validation: Validate the ANFIS model's performance using a validation dataset to ensure it generalises well.
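The hyperparameter-tuning steps mentioned for both models can be sketched as a simple exhaustive grid search. This is a generic illustration with a made-up scoring function; the paper does not state which search strategy was used:

```python
import itertools

def grid_search(param_grid, evaluate):
    """Score every combination in param_grid; return the best (params, score) pair.

    param_grid: dict mapping parameter name -> list of candidate values.
    evaluate: callable taking a params dict and returning a score (higher is better),
              e.g. a mean k-fold validation score.
    """
    names = list(param_grid)
    best = None
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if best is None or score > best[1]:
            best = (params, score)
    return best
```

Random search or Bayesian optimisation would replace only the candidate-enumeration loop; the evaluate callback (typically cross-validated) stays the same.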

3.3. Teaching-learning-based optimization (TLBO)

All swarm-intelligence-based and evolutionary optimisation algorithms require common control parameters, such as the number of generations, the population size, and the elite size. In addition to these common control parameters, various algorithms need their own specific settings. For instance, the GA uses the mutation probability, crossover probability, and selection operator; the PSO method uses the inertia weight and the cognitive and social variables; the ABC algorithm uses the number of bees and the limit; and NSGA-II needs the mutation probability, crossover probability, and the distribution index. The proper tuning of these algorithm-specific variables greatly influences the algorithms' performance. Adjusting algorithm-specific parameters incorrectly results in either increased computational effort or convergence to a local optimum. The workload is also increased because the common control parameters must be tuned along with the algorithm-specific parameters. The TLBO method was created in response to the need for an algorithm that does not require algorithm-specific parameters.

The TLBO method was developed by Rao and co-workers (Rao et al., Citation2011; Rao, Savsani, and Balic, Citation2012; Rao, Savsani, and Vakharia, Citation2012; Rao & Savsani, Citation2012) and is based on a teacher's influence on learners' performance in a class. The algorithm outlines two fundamental modes of learning: (1) learning from a teacher (the teacher step) and (2) learning through interaction with other learners (the learner step). The algorithm considers a population of learners, and the various subjects offered to them are treated as the different design parameters of the optimisation problem. A learner's results are analogous to the 'fitness' value of the optimisation problem, and the teacher is regarded as the best overall solution in the population. The design variables are the parameters that make up the objective function of the given optimisation problem, and the best solution is the one with the best objective function value.

The ‘Teacher step’ and the ‘Learner step’ are the two steps in which TLBO operates. The operation of both stages is described below.

3.3.1. Teacher step

The first step of the algorithm is where the teacher instructs the learners. During this phase, the teacher attempts to raise the class's mean score in the subject being taught, according to his or her capability. Consider that at any iteration i there are 'm' subjects (i.e. design variables) and 'n' learners (i.e. population size, k = 1, 2, …, n), and that M_j,i is the mean result of the learners in subject 'j' (j = 1, 2, …, m). The result of the best learner, kbest, considering all subjects together, may be taken as the best overall result X_total-kbest,i. The TLBO algorithm identifies the best learner as the teacher, since the teacher is generally regarded as a highly learned person who trains learners to obtain better results. The difference between the current mean result for each subject and the teacher's corresponding result for that subject is given by
(9) Difference_Mean_j,k,i = r_i (X_j,kbest,i − T_F M_j,i)
where X_j,kbest,i is the result of the best learner in subject j, T_F is the teaching factor that decides how much the mean value is to be changed, and r_i is a random number in the range [0, 1]. The value of T_F can be either 1 or 2, and is chosen randomly with equal probability as
(10) T_F = round[1 + rand(0, 1){2 − 1}]
T_F is not a parameter of the TLBO algorithm: it is not supplied as an input but is determined randomly using Eq. (10). After several experiments on many benchmark functions, it was found that the algorithm performs well for T_F values between 1 and 2, and performs much better when T_F is exactly 1 or 2. Hence, to simplify the algorithm, T_F is suggested to take the value 1 or 2 according to the rounding criterion of Eq. (10).
In the teacher step, the existing solution is updated according to Difference_Mean_j,k,i as
(11) X′_j,k,i = X_j,k,i + Difference_Mean_j,k,i
where X′_j,k,i is the updated value of X_j,k,i. X′_j,k,i is accepted if it gives a better function value. All the function values accepted at the end of the teacher step are maintained, and these values become the input to the learner step. The learner step thus depends upon the teacher step.
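The teacher step, Eqs. (9)-(11), can be sketched for a minimisation problem as follows. This is a minimal NumPy sketch with illustrative names, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_phase(X, fitness):
    """One TLBO teacher step for a minimisation problem.

    X: (n, m) population of learners; fitness: callable on one row of X.
    """
    scores = np.array([fitness(x) for x in X])
    teacher = X[np.argmin(scores)].copy()   # best learner acts as the teacher
    mean = X.mean(axis=0)                   # M_j,i: subject-wise class mean
    X_new = X.copy()
    for k in range(X.shape[0]):
        TF = rng.integers(1, 3)             # teaching factor, 1 or 2 (Eq. 10)
        r = rng.random(X.shape[1])          # r_i in [0, 1]
        diff = r * (teacher - TF * mean)    # Difference_Mean (Eq. 9)
        cand = X[k] + diff                  # candidate update (Eq. 11)
        if fitness(cand) < scores[k]:       # greedy acceptance
            X_new[k] = cand
    return X_new
```

Because of the greedy acceptance, no learner's fitness can worsen in this step.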

3.3.2. Learner step

In the algorithm’s second stage, students engage with one another to expand their collective knowledge. To advance knowledge, a learner engages in random interactions with other learners. If the other student is more knowledgeable than the learner, the learner gains new information. The learning phenomena of this stage are described below using the ‘n’ population size as a reference.

Randomly select two learners P and Q such that X′_total-P,i ≠ X′_total-Q,i (where X′_total-P,i and X′_total-Q,i are the updated function values of P and Q, respectively, at the end of the teacher step):
(12) X″_j,P,i = X′_j,P,i + r_i (X′_j,P,i − X′_j,Q,i), if X′_total-P,i < X′_total-Q,i
(13) X″_j,P,i = X′_j,P,i + r_i (X′_j,Q,i − X′_j,P,i), if X′_total-Q,i < X′_total-P,i
X″_j,P,i is accepted if it gives a better function value.

Equations (12) and (13) deal with minimisation problems; for maximisation problems, Eqs. (14) and (15) are used instead:
(14) X″_j,P,i = X′_j,P,i + r_i (X′_j,P,i − X′_j,Q,i), if X′_total-P,i > X′_total-Q,i
(15) X″_j,P,i = X′_j,P,i + r_i (X′_j,Q,i − X′_j,P,i), if X′_total-Q,i > X′_total-P,i
Teaching-learning-based optimization (TLBO) is thus a population-based algorithm that replicates the teaching-learning process in a classroom. It needs no algorithm-specific control variables; it requires only common control variables such as the population size and the number of generations.
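The learner step, Eqs. (12)-(13), admits a similar sketch for minimisation (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def learner_phase(X, fitness):
    """One TLBO learner step (minimisation): each learner P interacts with a
    random partner Q and moves relative to the better of the two."""
    n, m = X.shape
    scores = np.array([fitness(x) for x in X])
    X_new = X.copy()
    for P in range(n):
        Q = rng.integers(n - 1)
        if Q >= P:                           # random partner with Q != P
            Q += 1
        r = rng.random(m)
        if scores[P] < scores[Q]:            # P is better: move away from Q (Eq. 12)
            cand = X[P] + r * (X[P] - X[Q])
        else:                                # Q is better: move toward Q (Eq. 13)
            cand = X[P] + r * (X[Q] - X[P])
        if fitness(cand) < scores[P]:        # greedy acceptance
            X_new[P] = cand
    return X_new
```

As in the teacher step, the greedy acceptance guarantees that no learner's fitness deteriorates.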

In summary, Teaching-Learning-Based Optimization is a population-based optimisation technique that simulates the teaching and learning processes in a classroom to improve solutions to optimisation problems iteratively. It's a relatively simple but effective approach and can be used in various domains where optimisation is required. TLBO has been applied to various optimisation problems, including mathematical optimisation, engineering design, and machine learning model tuning. It's known for its simplicity and ability to converge to good solutions, especially for continuous optimisation problems. However, it may not always outperform more advanced optimisation algorithms on complex problems.

As stated, the TLBO is a population-based optimisation algorithm inspired by the teaching and learning processes observed in a classroom. It proceeds through the following steps:

  1. Initialisation: Start with an initial population of potential solutions (individuals or learners). These solutions are treated as students in a classroom.

  2. Teaching phase: Each student evaluates their fitness, or performance, with respect to the problem being solved, and the best-performing student (the teacher) is identified. The teacher guides and influences the other students to improve their understanding (solutions) by sharing its knowledge.

  3. Learning phase: Students other than the teacher update their solutions based on a combination of their own understanding (previous solution) and the guidance provided by the teacher, much as students learn from the best performers in a classroom. The learning process incorporates randomness, allowing exploration of different solutions.

  4. Population update: After the teaching and learning phases, the population is updated with the new solutions, and the best-performing solution found so far is retained.

  5. Termination criteria: The teaching and learning phases are repeated for a number of iterations or until a termination condition is met (e.g. a satisfactory solution is found).

  6. Output: The final solution obtained when the algorithm terminates is taken as the optimised solution to the problem.
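The steps above can be combined into a compact, self-contained sketch of the whole algorithm. This is a minimal illustration on an assumed box-constrained minimisation problem, not the authors' implementation; all names and defaults are illustrative.

```python
import numpy as np

def tlbo(fitness, bounds, n_pop=20, n_iter=100, seed=0):
    """Compact TLBO sketch for minimisation. `bounds` is an (m, 2) array of
    [low, high] limits per design variable."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    m = low.size
    # Step 1: initialise a random population of learners
    X = low + rng.random((n_pop, m)) * (high - low)
    f = np.array([fitness(x) for x in X])

    def accept(k, cand):
        # Greedy acceptance: keep the candidate only if it improves learner k
        cand = np.clip(cand, low, high)
        fc = fitness(cand)
        if fc < f[k]:
            X[k], f[k] = cand, fc

    for _ in range(n_iter):
        # Steps 2-3: teacher phase (best learner pulls the class mean)
        teacher, mean = X[np.argmin(f)].copy(), X.mean(axis=0)
        for k in range(n_pop):
            TF = rng.integers(1, 3)  # teaching factor, 1 or 2
            accept(k, X[k] + rng.random(m) * (teacher - TF * mean))
        # Step 4: learner phase (interaction with a random partner Q != P)
        for P in range(n_pop):
            Q = (P + rng.integers(1, n_pop)) % n_pop
            d = X[P] - X[Q] if f[P] < f[Q] else X[Q] - X[P]
            accept(P, X[P] + rng.random(m) * d)
    # Steps 5-6: return the best retained solution
    k = int(np.argmin(f))
    return X[k], float(f[k])
```

For example, on the three-dimensional sphere function with bounds [−5, 5] per variable, this sketch converges to a near-zero objective value within the default number of iterations.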

4. Results and discussion

The cooling and heating demand may be predicted using the MLP, ANFIS, and TLBO networks. A dataset was used as the training input for each network. In the first step, each network needed a preliminary design to establish the number of neurons in the hidden layer and the network coefficients. The amounts of training and test data are fixed once each network is designed. To validate the training phase of each network, 80% of the samples in this study were used as training data (4 folds) and 20% as test data (1 fold).
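The 80/20 fold assignment described above can be sketched as follows (a minimal sketch; the function name and seed are illustrative):

```python
import numpy as np

def five_fold_split(n_samples, seed=0):
    """Shuffle sample indices and split them into 5 folds: 4 folds (80%)
    for training and 1 held-out fold (20%) for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    test_idx = folds[-1]                    # the untouched test fold
    train_idx = np.concatenate(folds[:-1])  # remaining 4 folds for training
    return train_idx, test_idx
```

During cross-validation, any one of the four training folds can additionally be rotated out as a validation set without touching the test fold.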

4.1. Accuracy indicators

The outcomes of any ANN algorithm need to be assessed after training and testing. To achieve this, statistical performance indicators such as the coefficient of determination (R2) and the root mean square error (RMSE) may be used. Each index is computed with the following formulae (Choubin et al., Citation2016):
(16) RMSE = sqrt[ (1/U) Σ_{i=1..U} (S_i^observed − S_i^predicted)^2 ]
(17) R2 = 1 − [ Σ_{i=1..U} (S_i^predicted − S_i^observed)^2 ] / [ Σ_{i=1..U} (S_i^observed − S̄^observed)^2 ]
where S_i^observed and S_i^predicted represent, respectively, the actual and predicted CL values of the green residential building, U denotes the total number of observations, and S̄^observed is the mean of the actual CL values. Using the enhanced dataset, machine-learning models were built in the Weka software environment. The results of this procedure are given in the following section.
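Eqs. (16) and (17) translate directly into code; the following is a minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def rmse(observed, predicted):
    # Eq. (16): root mean square error over U observations
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def r_squared(observed, predicted):
    # Eq. (17): coefficient of determination
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((predicted - observed) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction gives RMSE = 0 and R2 = 1; values of R2 close to 1 and RMSE close to 0 therefore indicate a better fit.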

4.2. Incorporated FIS and MLP with TLBO optimizer

The fitted ANFIS and MLP mathematical models were presented to the TLBO as the underlying problem. This part assesses how the sizes of the validation and training datasets were chosen for the cross-validation procedure. New validation and training sets are picked randomly from the 4 folds of the initial training set before each validation pass, while the fifth fold (the testing dataset) is left unmodified and used to evaluate the prediction performance of the various methods. The population sizes examined were 50, 100, 150, 200, 250, 300, 350, 400, 450, and 500. To give each network a fair chance of reducing its error, each network was run for 1000 iterations. The outcome of this procedure is the ten convergence curves shown in Figure . The choice of predictor variables and the model construction remain the same, but the latest validation and training sets replace the initial ones in each pass. Figure  displays the prediction effectiveness of the models, based on the MSE value, for training and validation sets with various sample sizes. The graph shows that the TLBO-MLP method yields the most accurate results, since it has the lowest MSE value.
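The population-size sweep described above can be sketched as follows. Here `evaluate` is a hypothetical stand-in for one full TLBO-ANFIS or TLBO-MLP training run returning a validation MSE; the function name is illustrative.

```python
def sweep_population_sizes(evaluate,
                           sizes=(50, 100, 150, 200, 250, 300, 350, 400, 450, 500)):
    """Run the optimiser once per candidate population size and keep the
    configuration with the lowest validation MSE."""
    results = {n: evaluate(n) for n in sizes}  # MSE per population size
    best = min(results, key=results.get)       # size with the lowest MSE
    return best, results
```

In the paper's experiments this selection yields 250 for TLBO-ANFIS and 350 for TLBO-MLP.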

Figure 7. Mean squared error variation versus iterations for the (a) TLBOANFIS, (b) TLBOMLP.


Figure 8. The accuracy of the training data performance of TLBOANFIS. (a) TLBOANFIS train Np = 50; (b) TLBOANFIS train Np = 100; (c) TLBOANFIS train Np = 150; (d) TLBOANFIS train Np = 200; (e) TLBOANFIS train Np = 250; (f) TLBOANFIS train Np = 300; (g) TLBOANFIS train Np = 350; (h) TLBOANFIS train Np = 400; (i) TLBOANFIS train Np = 450; (j) TLBOANFIS train Np = 500.

Figure 9. The accuracy of the testing data performance of TLBOANFIS. (a) TLBOANFIS test Np = 50; (b) TLBOANFIS test Np = 100; (c) TLBOANFIS test Np = 150; (d) TLBOANFIS test Np = 200; (e) TLBOANFIS test Np = 250; (f) TLBOANFIS test Np = 300; (g) TLBOANFIS test Np = 350; (h) TLBOANFIS test Np = 400; (i) TLBOANFIS test Np = 450; (j) TLBOANFIS test Np = 500.

Figure 10. The accuracy of the training data performance of TLBOMLP. (a) TLBOMLP train Np = 50; (b) TLBOMLP train Np = 100; (c) TLBOMLP train Np = 150; (d) TLBOMLP train Np = 200; (e) TLBOMLP train Np = 250; (f) TLBOMLP train Np = 300; (g) TLBOMLP train Np = 350; (h) TLBOMLP train Np = 400; (i) TLBOMLP train Np = 450; (j) TLBOMLP train Np = 500.

Figure 11. The accuracy of the testing data performance of TLBOMLP. (a) TLBOMLP test Np = 50; (b) TLBOMLP test Np = 100; (c) TLBOMLP test Np = 150; (d) TLBOMLP test Np = 200; (e) TLBOMLP test Np = 250; (f) TLBOMLP test Np = 300; (g) TLBOMLP test Np = 350; (h) TLBOMLP test Np = 400; (i) TLBOMLP test Np = 450; (j) TLBOMLP test Np = 500.

Figure 12. Minimum error value in TLBOANFIS-250 best-fit structure. (a) Training phase; (b) Testing phase.


Figure 13. Minimum error value in TLBOMLP-350 best-fit structure. (a) Training phase; (b) Testing phase.


The performance metrics of the TLBO-ANFIS and TLBO-MLP models with ten population sizes for forecasting building cooling loads, using the training and test data, are shown in Tables  and . These models produced consistent results, with R2 values between 0.96 and 0.97 and RMSE values between 0.06 and 0.12. With R2 values of 0.97585 and 0.9721 and RMSE values of 0.11176 and 0.12035 in the training and testing stages, respectively, 250 is the ideal population size for the TLBO-ANFIS. With a population size of 350, the TLBO-MLP method attains its highest R2 (0.96446 and 0.95855) and lowest RMSE (0.0685 and 0.07074) in the training and testing stages. The findings demonstrate that the TLBO-MLP method, with the lower RMSE value, performs better and is more accurate at estimating the cooling demand.

Table 2. The results of the network for the TLBOANFIS with different population sizes.

Table 3. The results of the network for the TLBOMLP with different population sizes.

Figures  display the excellent correlation between the true and predicted values in the TLBO-ANFIS network's training and testing phases. For the TLBO-MLP network in the training stage, Figure  likewise indicates a high coefficient of determination between the true and predicted values.

Given the high correlation between the target data and each network's output (as indicated in Figures ), it is evident that each of these networks has completed the training stage. To estimate how much CL each building needs, every network must learn to recognise the underlying patterns in the data and forecast unseen data by applying the learned patterns. With this training, each network can estimate the CLs from the test phase's input data. After training, each network is verified using the initial testing data, which serves as a test of the network's internal training.

Figures  and  for the TLBO-ANFIS and TLBO-MLP networks show the forecast error in the training and testing phases, a crucial metric in assessing the outcomes. The lowest and highest forecast errors are given by the extremes of the error histogram. This indicates that each trained network may incur an error in forecasting the CLs for the test dataset of the magnitude shown in these figures. The error ranges are [−0.0009788, 0.068569] and [−0.0033812, 0.090939] for the TLBO-MLP method, and [−0.0003098752, 0.11193] and [−0.00072661, 0.12081] for the TLBO-ANFIS method, in the training and testing phases, respectively. Also, the training MAEs of 0.078967 and 0.050435 and testing MAEs of 0.08107 and 0.051696 demonstrate the TLBO-MLP method's higher accuracy owing to its lower error.
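The error ranges and MAE values quoted above can be obtained from the per-sample errors as follows (an illustrative helper, not the authors' code):

```python
import numpy as np

def error_summary(observed, predicted):
    """Per-sample forecast errors plus the MAE and the min/max error
    bounds of the error histogram."""
    err = np.asarray(predicted, float) - np.asarray(observed, float)
    return {"mae": float(np.mean(np.abs(err))),
            "min_error": float(err.min()),
            "max_error": float(err.max())}
```

Applying this to each network's training and testing predictions reproduces the bracketed error intervals and MAE figures reported in the text.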

Table  provides a performance assessment of the suggested approaches in terms of R2 and RMSE. Its findings show that the TLBO-MLP approach, which had the greatest R2 value (0.96446) and the lowest error in terms of RMSE (0.0685), gave the best cooling load prediction. The TLBO-ANFIS technique was, by contrast, associated with a higher RMSE in the cooling load prediction.

Table 4. The network results for the TLBOANFIS and TLBOMLP.

Figure  shows the Taylor diagram for the current database. Taylor diagrams (Taylor, Citation2001) present a graphical summary of how closely a pattern (or set of patterns) matches observations. The resemblance between two patterns is quantified in terms of their centred root-mean-square difference, their correlation, and the amplitude of their variations (represented by standard deviations). Taylor diagrams help appraise multiple features of complex approaches or gauge the comparative skill of various methods (Smithson, Citation2002).

Figure 14. Taylor diagram (a) TLBOANFIS training 250, (b) TLBOANFIS testing 250, (c) TLBOMLP training 350, (d) TLBOMLP testing 350. (a) TLBOANFIS training 250; (b) TLBOANFIS testing 250; (c) TLBOMLP training 350; (d) TLBOMLP testing 350.


The statistical variables are presented in a Taylor diagram, a fundamental graphical tool for comparatively evaluating the TLBO-ANFIS and TLBO-MLP methods against the actual database. The Taylor diagram summarises the statistical analysis, including the standard deviation between the predicted and original values obtained by the TLBO-ANFIS and TLBO-MLP methods. Based on Figure , the TLBO-ANFIS and TLBO-MLP methods achieve R2 values of 0.97585 and 0.96446 in training and 0.9721 and 0.95855 in testing, respectively. This is also supported by statistical parameters such as the overall RMSE, which is 0.11176 and 0.12035 for the TLBO-ANFIS model and 0.0685 and 0.07074 for the TLBO-MLP model in the training and testing phases, respectively.
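The quantities plotted on a Taylor diagram, namely the standard deviations, the correlation coefficient, and the centred root-mean-square difference (Taylor, Citation2001), can be computed as in this sketch (names are illustrative):

```python
import numpy as np

def taylor_stats(reference, model):
    """Statistics summarised by a Taylor diagram: standard deviations of the
    reference and model series, their correlation, and the centred RMSD."""
    r = np.asarray(reference, float)
    m = np.asarray(model, float)
    sigma_r, sigma_m = r.std(), m.std()
    corr = float(np.corrcoef(r, m)[0, 1])
    # centred RMSD: RMS of the anomaly differences; it satisfies
    # crmsd^2 = sigma_r^2 + sigma_m^2 - 2 * sigma_r * sigma_m * corr
    crmsd = float(np.sqrt(np.mean(((m - m.mean()) - (r - r.mean())) ** 2)))
    return sigma_r, sigma_m, corr, crmsd
```

On the diagram, each model is placed by its standard deviation (radius) and correlation (angle); its distance from the reference point then equals the centred RMSD.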

4.3. Discussion

Eight criteria determine a residential green building's cooling load: relative compactness, overall height, surface area, roof area, wall area, orientation, glazing area, and glazing area distribution. This research used two distinct methods, TLBO-ANFIS and TLBO-MLP, to forecast the cooling load intensities of residential structures. More evolutionary techniques will be used for predicting building energy in the future, as machine learning is a rapidly growing research field. The annual residential cooling load intensity database created in the current study will serve as a relevant domestic energy data source for subsequent research on other strategies. The building-space CL intensity data used in this investigation were produced using a MATLAB simulation. The suggested method is, nevertheless, equally applicable to the cooling demand of building operations. Although sufficient building operating energy consumption information is currently difficult to obtain, the growing usage of smart metres and the Internet of Things (IoT) will make it possible to gather precise building energy usage data at a fine scale.

Based on the size and features of the structure, the established technique can provide an accurate estimate of the required cooling load for a prospective building project. The simulations could be helpful to engineers and building owners when designing HVAC systems. Modifying structural design and architecture depending on the input parameters is another early-stage support use for reconstruction projects. As a result, it is also possible to analyse each input parameter's impact separately to understand the thermal load's behaviour. The TLBO-MLP accurately predicts the trend even though the trend itself is neither regular nor consistent. Consequently, this method can produce accurate models of real-world structures.

The TLBO-ANFIS and TLBO-MLP are two optimisation algorithms combined with machine learning techniques commonly used for predicting cooling loads in smart buildings. While these approaches have their advantages, they also have certain limitations. Here are some limitations of TLBO-ANFIS and TLBO-MLP in predicting cooling loads:

  1. Data availability and quality: The performance of both TLBO-ANFIS and TLBO-MLP heavily relies on the availability and quality of training data. Insufficient or inaccurate data can lead to suboptimal predictions and reduced accuracy.

  2. Sensitivity to input features: The performance of ANFIS and MLP models depends on the selection and representation of input features. Inaccurate or irrelevant features can negatively impact the prediction accuracy.

  3. Overfitting: ANFIS and MLP models are prone to overfitting, especially when dealing with complex datasets. Overfitting occurs when the model becomes too specialised to the training data, leading to poor generalisation of unseen data.

  4. Computational complexity: Training ANFIS and MLP models using TLBO optimisation can be computationally expensive, particularly when dealing with large datasets or complex architectures. The training process may require significant computational resources and time.

  5. Lack of interpretability: ANFIS and MLP models are black-box models, meaning it can be challenging to interpret and understand the underlying relationships between input features and cooling load predictions. This lack of interpretability can limit their usefulness in scenarios requiring explainability.

  6. Limited generalisation: Although ANFIS and MLP models can achieve high accuracy during training, they may struggle to generalise well to unseen data or new building conditions. Changes in building characteristics, operational parameters, or climate conditions not present in the training data may lead to reduced prediction performance.

  7. Model complexity and hyperparameter tuning: The performance of ANFIS and MLP models heavily depends on the selection and tuning of various hyperparameters, such as the number of layers, nodes, and learning rates. Determining the optimal configuration can be challenging, requiring expertise and iterative experimentation.

  8. Lack of adaptability: TLBO-ANFIS and TLBO-MLP models may have limited adaptability to real-time changes in building conditions or dynamic operating scenarios. They are typically trained on historical data and may be unable to adapt quickly to sudden changes or unforeseen events.

It's important to note that while TLBO-ANFIS and TLBO-MLP have these limitations, they can still be valuable tools for cooling load prediction in smart buildings. However, it's crucial to be aware of their constraints and consider alternative approaches or techniques when necessary.

5. Conclusions

Forecasting buildings' heating and cooling loads has become more difficult owing to the significance of energy conservation and its management. To boost the accuracy of CL predictions, researchers in this field have proposed a variety of methodologies and models. This article suggested TLBO-ANFIS and TLBO-MLP approaches for forecasting the cooling load of a residential structure using machine-learning standards. The major goal of these techniques was to improve prediction accuracy by establishing a mapping between the input and output parameters. During the creation of each suggested model, the technical specifications of a residential building were used as inputs, and the cooling load was employed as the output variable of each network within the training phase. The trained networks were then tested, and cooling load predictions were made using new, unseen data. Finally, each trained network was able to provide accurate cooling load projections. The TLBO-MLP predicted the cooling load with the greatest R2, i.e. 0.96446 and 0.95855, and the lowest RMSE, i.e. 0.0685 and 0.07074, showing the best performance in predicting the cooling load. The TLBO-ANFIS approach, with R2 of 0.97585 and 0.9721 and RMSE of 0.11176 and 0.12035, also shows a good accuracy level. Despite some shortcomings, TLBO-ANFIS and TLBO-MLP can be useful tools for predicting the cooling load in smart buildings. In light of the limitations of the research, potential ideas for future work were also presented, including data improvement and future project selection, optimising building characteristics using the model, and contrasting the model with improved time-saving techniques. The TLBO-MLP methodology was recommended for use in real-world scenarios.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Project No. 52008115).

Disclosure statement

No potential conflict of interest was reported by the author(s).


References

  • Adnan, R. M., Mostafa, R. R., Dai, H.-L., Heddam, S., Kuriqi, A., & Kisi, O. (2023). Pan evaporation estimation by relevance vector machine tuned with new metaheuristic algorithms using limited climatic data. Engineering Applications of Computational Fluid Mechanics, 17(1), 2192258. https://doi.org/10.1080/19942060.2023.2192258
  • Aksoy, U. T., & Inalli, M. (2006). Impacts of some building passive design parameters on heating demand for a cold region. Building and Environment, 41(12), 1742–1754. https://doi.org/10.1016/j.buildenv.2005.07.011
  • Alasha’ary, H., Moghtaderi, B., Page, A., & Sugo, H. (2009). A neuro–fuzzy model for prediction of the indoor temperature in typical Australian residential buildings. Energy and Buildings, 41(7), 703–710. https://doi.org/10.1016/j.enbuild.2009.02.002
  • Al-Saadi, S. N., & Zhai, Z. J. (2015). A new validated TRNSYS module for simulating latent heat storage walls. Energy and Buildings, 109, 274–290. https://doi.org/10.1016/j.enbuild.2015.10.013
  • Anđelković, A. S., Mujan, I., & Dakić, S. (2016). Experimental validation of a EnergyPlus model: Application of a multi-storey naturally ventilated double skin façade. Energy and Buildings, 118, 27–36. https://doi.org/10.1016/j.enbuild.2016.02.045
  • Andrews, D. F. (1972). Plots of high-dimensional data. Biometrics, 28, 125–136. https://doi.org/10.2307/2528964
  • Ansari Manesh, M., Sarkardehee, E., & Jafarian, S. (2023). Effective environmental factors on efficiency of office buildings staff in a cold semi-arid climate (case study: Kermanshah). Renewable Energy Research and Applications, 4(1), 103–111. https://doi.org/10.22044/rera.2022.11758.1108
  • Asimov, D. (1985). The grand tour: A tool for viewing multidimensional data. SIAM Journal on Scientific and Statistical Computing, 6(1), 128–143. https://doi.org/10.1137/0906011
  • Ayata, T., Çam, E., & Yıldız, O. (2007). Adaptive neuro-fuzzy inference systems (ANFIS) application to investigate potential use of natural ventilation in new building designs in Turkey. Energy Conversion and Management, 48(5), 1472–1479. https://doi.org/10.1016/j.enconman.2006.12.008
  • Çaydaş, U., Hasçalık, A., & Ekici, S. (2009). An adaptive neuro-fuzzy inference system (ANFIS) model for wire-EDM. Expert Systems with Applications, 36(3), 6135–6139. https://doi.org/10.1016/j.eswa.2008.07.019
  • Chaiyapinunt, S., Phueakphongsuriya, B., Mongkornsaksit, K., & Khomporn, N. (2005). Performance rating of glass windows and glass windows with films in aspect of thermal comfort and heat transmission. Energy and Buildings, 37(7), 725–738. https://doi.org/10.1016/j.enbuild.2004.10.008
  • Choubin, B., Khalighi-Sigaroodi, S., Malekian, A., & Kişi, Ö. (2016). Multiple linear regression, multi-layer perceptron network and adaptive neuro-fuzzy inference system for forecasting precipitation based on large-scale climate signals. Hydrological Sciences Journal, 61(6), 1001–1009. https://doi.org/10.1080/02626667.2014.966721
  • Cui, W., Li, X., Li, X., Si, T., Lu, L., Ma, T., & Wang, Q. (2022). Thermal performance of modified melamine foam/graphene/paraffin wax composite phase change materials for solar-thermal energy conversion and storage. Journal of Cleaner Production, 367, 133031. https://doi.org/10.1016/j.jclepro.2022.133031
  • Dai, Z., Li, T., Xiang, Z.-R., Zhang, W., & Zhang, J. (2023). Aerodynamic multi-objective optimization on train nose shape using feedforward neural network and sample expansion strategy. Engineering Applications of Computational Fluid Mechanics, 17(1), 2226187. https://doi.org/10.1080/19942060.2023.2226187
  • Dalamagkidis, K., Kolokotsa, D., Kalaitzakis, K., & Stavrakakis, G. S. (2007). Reinforcement learning for energy conservation and comfort in buildings. Building and Environment, 42(7), 2686–2698. https://doi.org/10.1016/j.buildenv.2006.07.010
  • Das, M. K., & Kishor, N. (2009). Adaptive fuzzy model identification to predict the heat transfer coefficient in pool boiling of distilled water. Expert Systems with Applications, 36(2), 1142–1154. https://doi.org/10.1016/j.eswa.2007.10.044
  • Deb, C., Eang, L. S., Yang, J., & Santamouris, M. (2016). Forecasting diurnal cooling energy load for institutional buildings using artificial neural networks. Energy and Buildings, 121, 284–297. https://doi.org/10.1016/j.enbuild.2015.12.050
  • Deng, T., Chen, Z., Fu, J.-Y., & Li, Y. (2023). An improved inflow turbulence generator for large eddy simulation evaluation of wind effects on tall buildings. Engineering Applications of Computational Fluid Mechanics, 17(1), e2155704. https://doi.org/10.1080/19942060.2022.2155704
  • Farhanieh, B., & Sattari, S. (2006). Simulation of energy saving in Iranian buildings using integrative modelling for insulation. Renewable Energy, 31(4), 417–425. https://doi.org/10.1016/j.renene.2005.04.004
  • Haykin, S. (2009). Neural networks and learning machines, 3/E. Pearson Education India.
  • Huang, Y., & Li, C. (2021). Accurate heating, ventilation and air conditioning system load prediction for residential buildings using improved ant colony optimization and wavelet neural network. Journal of Building Engineering, 35, 101972. https://doi.org/10.1016/j.jobe.2020.101972
  • Inan, G., Göktepe, A., Ramyar, K., & Sezer, A. (2007). Prediction of sulfate expansion of PC mortar using adaptive neuro-fuzzy methodology. Building and Environment, 42(3), 1264–1269. https://doi.org/10.1016/j.buildenv.2005.11.029
  • Jang, J.-S. R. (1992). Self-learning fuzzy controllers based on temporal backpropagation. IEEE Transactions on Neural Networks, 3(5), 714–723. https://doi.org/10.1109/72.159060
  • Jassar, S., Liao, Z., & Zhao, L. (2009). Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems. Building and Environment, 44(8), 1609–1616. https://doi.org/10.1016/j.buildenv.2008.10.002
  • Khedher, N. B., Mukhtar, A., Md Yasir, A. S. H., Khalilpoor, N., Foong, L. K., Nguyen Le, B., & Yildizhan, H. (2023). Approximating heat loss in smart buildings through large scale experimental and computational intelligence solutions. Engineering Applications of Computational Fluid Mechanics, 17(1), 2226725. https://doi.org/10.1080/19942060.2023.2226725
  • Kim, M. K., Kim, Y.-S., & Srebric, J. (2020). Impact of correlation of plug load data, occupancy rates and local weather conditions on electricity consumption in a building using four back-propagation neural network models. Sustainable Cities and Society, 62, 102321. https://doi.org/10.1016/j.scs.2020.102321
  • Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. Ijcai.
  • Koschwitz, D., Frisch, J., & Van Treeck, C. (2018). Data-driven heating and cooling load predictions for non-residential buildings based on support vector machine regression and NARX recurrent neural network: A comparative study on district scale. Energy, 165, 134–142. https://doi.org/10.1016/j.energy.2018.09.068
  • Le, L. T., Nguyen, H., Dou, J., & Zhou, J. (2019). A comparative study of PSO-ANN, GA-ANN, ICA-ANN, and ABC-ANN in estimating the heating load of buildings’ energy efficiency for smart city planning. Applied Sciences, 9(13), 2630. https://doi.org/10.3390/app9132630
  • Li, F., Yan, J., Yan, H., Tao, T., & Duan, H.-F. (2023). 2D modelling and energy analysis of entrapped air-pocket propagation and spring-like geysering in the drainage pipeline system. Engineering Applications of Computational Fluid Mechanics, 17(1), 2227662. https://doi.org/10.1080/19942060.2023.2227662
  • Li, X., & Yao, R. (2020). A machine-learning-based approach to predict residential annual space heating and cooling loads considering occupant behaviour. Energy, 212, 118676. https://doi.org/10.1016/j.energy.2020.118676
  • Liu, M., & Ling, Y. Y. (2003). Using fuzzy neural network approach to estimate contractors’ markup. Building and Environment, 38(11), 1303–1308. https://doi.org/10.1016/S0360-1323(03)00135-5
  • Luo, X., Oyedele, L. O., Ajayi, A. O., & Akinade, O. O. (2020). Comparative study of machine learning-based multi-objective prediction framework for multiple building energy loads. Sustainable Cities and Society, 61, 102283. https://doi.org/10.1016/j.scs.2020.102283
  • Luo, Z., Wang, H., & Li, S. (2022). Prediction of international roughness index based on stacking fusion model. Sustainability, 14(12), 6949. https://doi.org/10.3390/su14126949
  • Marks, W. (1997). Multicriteria optimisation of shape of energy-saving buildings. Building and Environment, 32(4), 331–339. https://doi.org/10.1016/S0360-1323(96)00065-0
  • Mellit, A., Kalogirou, S. A., Hontoria, L., & Shaari, S. (2009). Artificial intelligence techniques for sizing photovoltaic systems: A review. Renewable and Sustainable Energy Reviews, 13(2), 406–419. https://doi.org/10.1016/j.rser.2008.01.006
  • Meng, Q., Lai, X., Yan, Z., Su, C.-Y., & Wu, M. (2022). Motion planning and adaptive neural tracking control of an uncertain two-link rigid-flexible manipulator with vibration amplitude constraint. IEEE Transactions on Neural Networks and Learning Systems, 33(8), 3814–3828. https://doi.org/10.1109/TNNLS.2021.3054611
  • Moayedi, H., & Jahed Armaghani, D. (2018). Optimizing an ANN model with ICA for estimating bearing capacity of driven pile in cohesionless soil. Engineering with Computers, 34(2), 347–356. https://doi.org/10.1007/s00366-017-0545-7
  • Moayedi, H., Mu'azu, M. A., & Foong, L. K. (2020). Novel swarm-based approach for predicting the cooling load of residential buildings based on social behavior of elephant herds. Energy and Buildings, 206, 109579. https://doi.org/10.1016/j.enbuild.2019.109579
  • Moradzadeh, A., & Khaffafi, K. (2017). Comparison and evaluation of the performance of various types of neural networks for planning issues related to optimal management of charging and discharging electric cars in intelligent power grids. Emerging Science Journal, 1(4), 201–207.
  • Moradzadeh, A., & Pourhossein, K. (2019, August 27–29). Early detection of turn-to-turn faults in power transformer winding: An experimental study. 2019 International aegean conference on electrical machines and power electronics (ACEMP) & 2019 international conference on optimization of electrical and electronic equipment (OPTIM), Istanbul, Turkey.
  • Nazari, M. A., Rungamornrat, J., Prokop, L., Blazek, V., Misak, S., Al-Bahrani, M., & Ahmadi, M. H. (2023). An updated review on integration of solar photovoltaic modules and heat pumps towards decarbonization of buildings. Energy for Sustainable Development, 72, 230–242. https://doi.org/10.1016/j.esd.2022.12.018
  • Nguyen, H., Moayedi, H., Foong, L. K., Al Najjar, H. A. H., Jusoh, W. A. W., Rashid, A. S. A., & Jamali, J. (2020). Optimizing ANN models with PSO for predicting short building seismic response. Engineering with Computers, 36(3), 823–837. https://doi.org/10.1007/s00366-019-00733-0
  • Ochoa, C. E., & Capeluto, I. G. (2009). Advice tool for early design stages of intelligent facades based on energy and visual comfort approach. Energy and Buildings, 41(5), 480–488. https://doi.org/10.1016/j.enbuild.2008.11.015
  • Pedersen, L., Stang, J., & Ulseth, R. (2008). Load prediction method for heat and electricity demand in buildings for the purpose of planning for mixed energy distribution systems. Energy and Buildings, 40(7), 1124–1134. https://doi.org/10.1016/j.enbuild.2007.10.014
  • Qiang, G., Zhe, T., Yan, D., & Neng, Z. (2015). An improved office building cooling load prediction model based on multivariable linear regression. Energy and Buildings, 107, 445–455. https://doi.org/10.1016/j.enbuild.2015.08.041
  • Rao, R. V., Savsani, V., & Balic, J. (2012). Teaching–learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Engineering Optimization, 44(12), 1447–1462. https://doi.org/10.1080/0305215X.2011.652103
  • Rao, R. V., & Savsani, V. J. (2012). Mechanical design optimization using advanced optimization techniques. Springer.
  • Rao, R. V., Savsani, V. J., & Vakharia, D. (2011). Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-aided Design, 43(3), 303–315. https://doi.org/10.1016/j.cad.2010.12.015
  • Rao, R. V., Savsani, V. J., & Vakharia, D. (2012). Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Information Sciences, 183(1), 1–15. https://doi.org/10.1016/j.ins.2011.08.006
  • Rashidi, M., Alhuyi Nazari, M., Mahariq, I., & Ali, N. (2022). Modeling and sensitivity analysis of thermal conductivity of ethylene glycol-water based nanofluids with alumina nanoparticles. Experimental Techniques, 47, 83–90. https://doi.org/10.1007/s40799-022-00567-4
  • Reppel, J., & Edmonds, I. (1998). Angle-selective glazing for radiant heat control in buildings: Theory. Solar Energy, 62(3), 245–253. https://doi.org/10.1016/S0038-092X(98)00006-1
  • Sayigh, A., & Marafia, A. H. (1998). Thermal comfort and the development of bioclimatic concept in building design. Renewable and Sustainable Energy Reviews, 2(1–2), 3–24. https://doi.org/10.1016/S1364-0321(98)00009-4
  • Seo, D. K., & Eo, Y. D. (2019). Multilayer perceptron-based phenological and radiometric normalization for high-resolution satellite imagery. Applied Sciences, 9(21), 4543. https://doi.org/10.3390/app9214543
  • Shariat, M., Shariati, M., Madadi, A., & Wakil, K. (2018). Computational Lagrangian multiplier method by using for optimization and sensitivity analysis of rectangular reinforced concrete beams. Steel and Composite Structures, 29(2), 243–256.
  • Shariati, M., Davoodnabi, S. M., Toghroli, A., Kong, Z., & Shariati, A. (2021). Hybridization of metaheuristic algorithms with adaptive neuro-fuzzy inference system to predict load-slip behavior of angle shear connectors at elevated temperatures. Composite Structures, 278, 114524. https://doi.org/10.1016/j.compstruct.2021.114524
  • Shi, R. (2023). Numerical simulation of inertial microfluidics: A review. Engineering Applications of Computational Fluid Mechanics, 17(1), 2177350. https://doi.org/10.1080/19942060.2023.2177350
  • Shirvani, A., Nili-Ahmadabadi, M., & Ha, M. Y. (2023). Machine learning-accelerated aerodynamic inverse design. Engineering Applications of Computational Fluid Mechanics, 17(1), 2237611. https://doi.org/10.1080/19942060.2023.2237611
  • Singh, M. K., Mahapatra, S., & Atreya, S. (2009). Bioclimatism and vernacular architecture of north-east India. Building and Environment, 44(5), 878–888. https://doi.org/10.1016/j.buildenv.2008.06.008
  • Singh, T., Sinha, S., & Singh, V. (2007). Prediction of thermal conductivity of rock through physico-mechanical properties. Building and Environment, 42(1), 146–155. https://doi.org/10.1016/j.buildenv.2005.08.022
  • Smithson, P. A. (2002). IPCC, 2001: Climate change 2001: The scientific basis. In J. T. Houghton, Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, X. Dai, K. Maskell, & C. A. Johnson (Eds.), Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (p. 881). Cambridge University Press.
  • Stone, M. (1977). Asymptotics for and against cross-validation. Biometrika, 64, 29–35. https://doi.org/10.1093/biomet/64.1.29
  • Subasi, A., Yilmaz, A. S., & Binici, H. (2009). Prediction of early heat of hydration of plain and blended cements using neuro-fuzzy modelling techniques. Expert Systems with Applications, 36(3), 4940–4950. https://doi.org/10.1016/j.eswa.2008.06.015
  • Sun, Y., Wang, S., & Xiao, F. (2013). Development and validation of a simplified online cooling load prediction strategy for a super high-rise building in Hong Kong. Energy Conversion and Management, 68, 20–27. https://doi.org/10.1016/j.enconman.2013.01.002
  • Synnefa, A., Santamouris, M., & Akbari, H. (2007). Estimating the effect of using cool coatings on energy loads and thermal comfort in residential buildings in various climatic conditions. Energy and Buildings, 39(11), 1167–1174. https://doi.org/10.1016/j.enbuild.2007.01.004
  • Tao, H., Alawi, O. A., Hussein, O. A., Ahmed, W., Eltaweel, M., Homod, R. Z., Abdelrazek, A. H., Falah, M. W., Al-Ansari, N., & Yaseen, Z. M. (2023). Influence of water based binary composite nanofluids on thermal performance of solar thermal technologies: Sustainability assessments. Engineering Applications of Computational Fluid Mechanics, 17(1), 2159881. https://doi.org/10.1080/19942060.2022.2159881
  • Tao, H., Aldlemy, M. S., Alawi, O. A., Kamar, H. M., Homod, R. Z., Mohammed, H. A., Mohammed, M. K. A., Mallah, A. R., Al-Ansari, N., & Yaseen, Z. M. (2023). Energy and cost management of different mixing ratios and morphologies on mono and hybrid nanofluids in collector technologies. Engineering Applications of Computational Fluid Mechanics, 17(1), 2164620. https://doi.org/10.1080/19942060.2022.2164620
  • Taylor, K. E. (2001). Summarizing multiple aspects of model performance in a single diagram. Journal of Geophysical Research: Atmospheres, 106(D7), 7183–7192. https://doi.org/10.1029/2000JD900719
  • Thimm, G., & Fiesler, E. (1997). High-order and multilayer perceptron initialization. IEEE Transactions on Neural Networks, 8(2), 349–359. https://doi.org/10.1109/72.557673
  • Tsanas, A., & Xifara, A. (2012). Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools. Energy and Buildings, 49, 560–567. https://doi.org/10.1016/j.enbuild.2012.03.003
  • Tzikopoulos, A., Karatza, M., & Paravantis, J. (2005). Modeling energy efficiency of bioclimatic buildings. Energy and Buildings, 37(5), 529–544. https://doi.org/10.1016/j.enbuild.2004.09.002
  • Übeyli, E. D. (2008). Adaptive neuro-fuzzy inference system employing wavelet coefficients for detection of ophthalmic arterial disorders. Expert Systems with Applications, 34(3), 2201–2209. https://doi.org/10.1016/j.eswa.2007.02.020
  • Wang, J., Liang, F., Zhou, H., Yang, M., & Wang, Q. (2022). Analysis of position, pose and force decoupling characteristics of a 4-UPS/1-RPS parallel grinding robot. Symmetry, 14(4), 825. https://doi.org/10.3390/sym14040825
  • Wegman, E. J., & Shen, J. (1993). Three-dimensional andrews plots and the grand tour. Computing Science and Statistics, 25, 284–288. https://doi.org/10.1007/978-1-4612-2856-1
  • Wu, J.-D., Hsu, C.-C., & Chen, H.-C. (2009). An expert system of price forecasting for used cars using adaptive neuro-fuzzy inference. Expert Systems with Applications, 36(4), 7809–7817. https://doi.org/10.1016/j.eswa.2008.11.019
  • Yan, B., Ma, C., Zhao, Y., Hu, N., & Guo, L. (2019). Geometrically enabled soft electroactuators via laser cutting. Advanced Engineering Materials, 21(11), 1900664. https://doi.org/10.1002/adem.201900664
  • Yang, H., Zhu, Z., & Burnett, J. (2000). Simulation of the behaviour of transparent insulation materials in buildings in northern China. Applied Energy, 67(3), 293–306. https://doi.org/10.1016/S0306-2619(00)00022-2
  • Yao, R., Li, B., & Liu, J. (2009). A theoretical adaptive model of thermal comfort–adaptive predicted mean vote (aPMV). Building and Environment, 44(10), 2089–2096. https://doi.org/10.1016/j.buildenv.2009.02.014
  • Ying, L.-C., & Pan, M.-C. (2008). Using adaptive network based fuzzy inference system to forecast regional electricity loads. Energy Conversion and Management, 49(2), 205–211. https://doi.org/10.1016/j.enconman.2007.06.015
  • Zandi, Y., Shariati, M., Marto, A., Wei, X., Karaca, Z., Dao, D. K., … Wakil, K. (2018). Computational investigation of the comparative analysis of cylindrical barns subjected to earthquake. Steel and Composite Structures, 28(4), 439–447.
  • Zhao, Y., Hu, H., Bai, L., Tang, M., Chen, H., & Su, D. (2021). Fragility analyses of bridge structures using the logarithmic piecewise function-based probabilistic seismic demand model. Sustainability, 13(14), 7814. https://doi.org/10.3390/su13147814
  • Zhao, Y., Joseph, A. J. J. M., Zhang, Z., Ma, C., Gul, D., Schellenberg, A., & Hu, N. (2020). Deterministic snap-through buckling and energy trapping in axially-loaded notched strips for compliant building blocks. Smart Materials and Structures, 29(2), 02LT03. https://doi.org/10.1088/1361-665X/ab6486
  • Zhou, G., Moayedi, H., Bahiraei, M., & Lyu, Z. (2020). Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings. Journal of Cleaner Production, 254, 120082. https://doi.org/10.1016/j.jclepro.2020.120082
  • Zhu, H., Sun, Q., Tao, J., Sun, H., Chen, Z., Zeng, X., & Soulat, D. (2023). Fluid-structure interaction simulation for performance prediction and design optimization of parafoils. Engineering Applications of Computational Fluid Mechanics, 17(1), 2194359. https://doi.org/10.1080/19942060.2023.2194359