Structural Optimization of a Multi-Story Frame Structure Based on a Pre-Trained Physics-Informed Neural Network (PINN) Surrogate Model
1 Research Center for Wind Engineering and Engineering Vibration, Guangzhou University, Guangzhou, China
2 Central Research Institute of Building and Construction Co., Ltd., MCC Group, Beijing, China
3 China Construction Seventh Engineering Division Corp., Ltd., Zhengzhou, China
4 College of Urban Transportation and Logistics, Shenzhen Technology University, Shenzhen, China
* Corresponding Author: Jiyang Fu. Email:
Computer Modeling in Engineering & Sciences 2026, 147(1), 9 https://doi.org/10.32604/cmes.2026.079375
Received 20 January 2026; Accepted 23 March 2026; Issue published 27 April 2026
Abstract
In structural optimization, data-driven surrogate models are often explored as alternatives to finite element analysis to reduce computational cost. However, conventional neural networks usually fail to capture key structural characteristics: they are typically limited to predicting global responses (e.g., top displacement) and rarely achieve accurate internal force predictions with conventional training data volumes. As a result, most existing studies involving surrogate models do not consider internal force constraints. To address this issue, this study proposes a structural optimization framework based on a pre-trained Physics-Informed Neural Network (PINN) surrogate model. By embedding the static equilibrium equations into the loss function, the model achieves higher predictive accuracy, particularly for internal forces, while pre-training accelerates convergence and enhances stability. Combined with an improved multi-swarm particle swarm optimization (MPSO) algorithm, the framework enables efficient optimization of multi-story frame structures under internal force and multiple other constraints. The application to a six-story frame structure validates its effectiveness: compared with a DNN-based model, the PINN-based model improves the coefficient of determination for internal force prediction from 0.8874 to 0.9937. These results demonstrate that the proposed method offers a promising approach for the efficient optimization of multi-story frame structures.
1 Introduction
In structural design, optimization has become a key approach to enhancing performance and resource utilization efficiency, attracting increasing attention in recent years. Among various methods, deterministic optimization approaches are widely applied in structural sizing and shape optimization due to their advantages such as fast convergence, high computational efficiency, and reproducible results [1–3]. The Optimality Criteria Method (OC), as a representative technique, achieves objective function minimization by constructing the Lagrangian function and iteratively updating design variables under constraints [4–6]. However, due to its dependence on explicit and differentiable expressions of the objective and constraint functions, it faces significant limitations when dealing with implicit constraints, discrete variables, or complex material behaviors.
To overcome these limitations, researchers have introduced intelligent optimization algorithms such as Particle Swarm Optimization (PSO). Inspired by swarm intelligence in nature, these algorithms do not rely on gradient information and exhibit strong global search capabilities, as well as the ability to handle complex nonlinear constraints. PSO has been widely applied in the optimization of truss structures, wind-resistant design of high-rise buildings, and other fields [7–10]. PSO was originally proposed by Kennedy and Eberhart in 1995. Its core concept is to simulate particles updating their velocities and positions based on individual and collective experience in the search space to find the optimal solution [11].
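The velocity/position update described above can be sketched as follows. This is a minimal illustration of the standard PSO rule, not the MPSO variant used later in the paper; the coefficient values are typical defaults, assumed here for demonstration.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of standard PSO [11].

    x, v      : (n_particles, n_dims) current positions and velocities
    pbest     : (n_particles, n_dims) best position found by each particle
    gbest     : (n_dims,) best position found by the whole swarm
    w, c1, c2 : inertia and acceleration coefficients (assumed typical values)
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # pull each particle toward its own best and the swarm's best
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is the fixed point the swarm converges toward.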
In recent years, PSO has been continuously improved to enhance its performance in complex structural optimization. For instance, Perez and Behdinan (2007) proposed a modified PSO suitable for constrained structural design and validated its effectiveness [12]; Gao and Qu (2011) introduced a simulated annealing mechanism and proposed the multi-objective perturbed particle swarm optimization (MPSOD) to improve population diversity and global exploration ability [13]; Zhou et al. (2011) incorporated perturbation and jumping mechanisms to propose the RPPSO algorithm for enhanced search stability [14]; Jansen and Perez (2011) integrated the augmented Lagrangian method into PSO to form ALPSO, targeting complex constraint handling [15]. Building upon these advances, multi-swarm strategies have been employed to further mitigate the premature convergence tendency of standard PSO. In particular, Zhao et al. proposed a dynamic multi-swarm PSO (DMS-PSO), which partitions the overall swarm into multiple small sub-swarms and enables inter-swarm information exchange through periodic regrouping, thereby maintaining population diversity while improving the search behavior and convergence characteristics on complex multimodal problems; this mechanism provides methodological support for the multi-swarm PSO (MPSO) framework and the inter-swarm interaction strategy adopted in this study [16]. More recently, Shan et al. (2024) proposed a two-stage automatic design method by combining CAD drawing recognition and structural parameter modeling, and integrated expert knowledge into MPSO for multi-objective optimization of steel frame structure weight and safety [17].
Despite the strong capabilities of intelligent optimization algorithms, their structural response evaluations still rely heavily on the finite element method (FEM). In large-scale structures or problems with complex constraints, frequent FEM calls significantly increase computational cost, and the optimization process may take hours or even days, greatly limiting engineering efficiency. To address this issue, surrogate models have been widely introduced to replace FEM for fast structural response prediction, thereby significantly reducing computational expense and improving overall efficiency.
Among various surrogate modeling techniques, neural networks have become mainstream tools for structural response prediction due to their powerful nonlinear fitting ability and end-to-end learning mechanism [18–20]. By learning the mapping between design parameters and structural responses, neural networks can achieve millisecond-level prediction. Existing studies have applied artificial neural networks (ANNs) to predict natural frequencies [21], inter-story drift ratios [22,23], as well as nodal displacements and stresses [24]; convolutional neural networks (CNNs) have been used to predict material properties [25] and compliance in topology optimization [26]; deep neural networks (DNNs) have been utilized to predict wind load distribution [27], nodal displacements [28], and optimal cross-sections [29]; backpropagation neural networks (BPNNs) have been adopted to predict static displacements of structures [30]. In addition, support vector machines (SVMs) have also been applied to model hysteretic responses of reinforced concrete columns under seismic loading [31]. In addition to approximating structural responses in a regression manner, surrogate models in structural optimization are also commonly formulated as feasibility classifiers to distinguish feasible from infeasible designs. By combining boundary identification with virtual sampling, they can efficiently delineate the feasible-region boundary, thereby reducing the number of expensive FEM evaluations [32]. Moreover, to address the accuracy degradation commonly observed in high-dimensional structural optimization, recent advances have introduced graph neural network (GNN) surrogates that explicitly incorporate structural topological features and variable correlations, showing improved performance compared with conventional ML surrogates [33].
It is worth noting that most existing data-driven surrogate models, such as DNNs, rarely focus on the prediction of internal forces of structural members. This is mainly because internal forces are typically derived from nodal displacements and stiffness matrices, and even small errors in displacement prediction can be significantly amplified in internal force calculations. This issue will be illustrated in detail through a numerical example later in this paper. Due to limited accuracy in displacement prediction, data-driven models like DNNs struggle to achieve reliable internal force estimation.
To address this deficiency, Physics-Informed Neural Networks (PINNs) have emerged in recent years. By embedding governing equations and boundary conditions into the loss function, PINNs unify structural response prediction and physical consistency. They have been widely applied in solid mechanics, fluid dynamics, and wind field modeling, enabling high-accuracy modeling of displacements, stresses, velocities, and pressure fields [34–37]. Recent studies have shown the successful application of PINNs in tasks such as geotechnical parameter inversion [38], microscale stress field reconstruction [39], and parameter identification of beam structures [40].
To tackle issues such as slow convergence and boundary condition difficulties in PINNs, Xiong et al. (2025) proposed integrating PINNs with the finite element method (FEM), developing a Deep Finite Element Method (DFEM) to enhance training stability and engineering adaptability [41]. Although PINNs were originally developed to solve partial differential equations (PDEs), they are now demonstrating promising potential for constructing efficient surrogate models in structural optimization. For example, Zhu et al. (2025) combined PINNs with multi-objective evolutionary algorithms to build a physics-consistent surrogate framework for the prestress optimization of cantilever dome structures, significantly improving optimization efficiency while maintaining physical fidelity [42]. Nevertheless, the application of PINN-based surrogate models in structural optimization remains relatively limited. Furthermore, when trained from scratch, PINNs may suffer from slow and unstable convergence due to the competing objectives of data fitting and physics enforcement, which poses a barrier to practical use.
To address the aforementioned challenges, this paper proposes a structural design optimization method for a multi-story frame structure based on a pre-trained PINN surrogate model. Through a case study of a multi-story frame structure, the PINN-based surrogate model demonstrated significantly higher accuracy in predicting internal forces compared to the DNN-based model. Additionally, the incorporation of a pre-training strategy substantially improved the convergence speed of the PINN model, resulting in both high prediction accuracy and computational efficiency. Furthermore, the structure was optimized using the multi-swarm particle swarm optimization (MPSO) method, verifying the feasibility and effectiveness of the proposed approach.
2 Introduction to the Optimization Problem
To enhance readability, this section presents a detailed case study of a multi-story frame structure, elaborating on the design variables, objective function, and constraints of the optimization problem.
A six-story frame structure is considered as the engineering case, as shown in Fig. 1. The building includes two typical story types: stories 1–3 form the first typical story, and stories 4–6 form the second. Each story has a height of 3000 mm. There are three spans in both the X and Y directions, with a span length of 5000 mm in the X direction and 4000 mm in the Y direction.

Figure 1: Three-dimensional finite element model.
The structure is assumed to be constructed using C25 concrete. The density of concrete is taken as 2500 kg/m3, and its elastic modulus is 25.5 GPa. The gravity load (RG) includes a dead load of 5 kN/m2 and a live load of 2 kN/m2, both converted into equivalent nodal loads. The seismic load is applied as horizontal nodal forces (EX) using the equivalent base shear method. To better reflect practical design conditions, the most unfavorable structural response under load combinations is considered according to the design code requirements. Specifically, the load combination used is:
According to the Code for Design of Concrete Structures (GB 50010-2010) [43] and the Code for Seismic Design of Buildings (GB 50011-2010) [44], the optimization problem is defined as a discrete variable optimization problem. The optimization process considers seismic performance, ultimate bearing capacity, and material efficiency. Multiple constraints are imposed at both the structural and component levels to ensure the overall safety and rationality of the design. The design variables, objective function, and constraints are elaborated as follows.
In the optimization design of a multi-story frame structure, the primary load-bearing components include beams and columns. To ensure uniform modeling and facilitate construction, all structural members in this study adopt rectangular cross-sections. The beams and columns are grouped into several section groups. Members within the same group share identical cross-sectional dimensions during the optimization process. As shown in Fig. 2, each typical story includes edge beams (gray members in Fig. 2), interior beams (green), corner columns (red), side columns (purple), and interior columns (blue), resulting in five section groups per typical story and ten section groups in total for the entire structure. Each section group includes two design variables: section height and section width. Therefore, the optimization problem involves a total of 20 design variables.

Figure 2: Component layout of a typical story.
The heights and widths of beam and column sections are both restricted to values between 200 and 800 mm, with an increment of 50 mm, as shown in Table 1. This discretization approach ensures the practical implementability of the optimization problem and better aligns with the standardized component selection logic commonly used in engineering design processes.
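In practice, a continuous value proposed by the optimizer must be mapped onto this 200–800 mm grid with 50 mm increments. A minimal sketch of such a snapping step (the function name is ours, not from the paper):

```python
def snap_to_grid(value_mm, lo=200, hi=800, step=50):
    """Round a continuous candidate dimension (mm) to the nearest admissible
    section size on the discrete 200-800 mm grid with 50 mm increments."""
    v = lo + round((value_mm - lo) / step) * step
    return min(max(v, lo), hi)   # clamp to the admissible range
```

This keeps every candidate design on the standardized component grid regardless of the continuous search dynamics of the optimizer.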

For beam members, in order to meet construction standards and architectural functional requirements, a height-to-width ratio constraint is imposed for all beams in the structure, as shown in Eq. (1).
where
Table 2 lists all the structural constraints considered in the optimization process of this study. These constraints cover both the global structural performance (structure-level) and the local component performance (member-level), ensuring that the optimized results satisfy the requirements for load-bearing capacity and stability, while also exhibiting good seismic performance and constructability.

Among the structure-level constraints, the inter-story drift ratio (IDR) is a critical indicator for evaluating the deformation capacity and overall stiffness of a structure under seismic loading. The constraint function for the inter-story drift ratio is defined in Eq. (2).
In Eq. (2),
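A sketch of an Eq. (2)-style inter-story drift constraint is given below. The default drift limit is an assumption for illustration (1/550 is the elastic inter-story drift limit for reinforced concrete frames in GB 50011); the actual limit used in the paper should be read from Eq. (2).

```python
def idr_constraint(story_disp, story_height=3000.0, limit=1/550):
    """Constraint g = max inter-story drift ratio - [theta]; feasible if g <= 0.

    story_disp : absolute lateral story displacements (mm), bottom story first
    """
    prev = [0.0] + list(story_disp[:-1])
    drifts = [abs(d - p) / story_height for p, d in zip(prev, story_disp)]
    return max(drifts) - limit
```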
For the member-level constraints, this study focuses on the mechanical performance of two key components: columns and beams. Specifically, the axial compression ratio limit for columns (as defined in Eq. (3)) and the flexural strength check of beam cross-sections (as defined in Eq. (4)) are introduced as constraint conditions. These are implemented to ensure the safety and reliability of structural members under seismic and gravity loading conditions.
Eq. (3) defines the axial compression ratio constraint for concrete column members, which limits the axial force in a column to no more than 75% of its axial compressive capacity. In this equation,
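The 75% limit stated above can be written as a simple feasibility function. This is a hedged sketch with our own symbol names (N for the column axial force, fc and A for the concrete strength and section area), not the paper's exact Eq. (3) notation:

```python
def axial_ratio_constraint(N, fc, A, limit=0.75):
    """Axial compression ratio check: N / (fc * A) must not exceed 0.75.
    Returns g; the design is feasible when g <= 0."""
    return N / (fc * A) - limit
```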
In structural optimization, the total mass of a structure is a key indicator of its economic performance. This study aims to minimize the total mass of the reinforced concrete frame structure without violating the specified structural constraints.
Eq. (5) defines the calculation of the total mass of the structure. In the equation,
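An Eq. (5)-style objective can be sketched as the sum of member masses, using the rectangular sections defined earlier (the tuple layout is our assumption):

```python
def total_mass(members, rho=2500.0):
    """Total structural mass (kg): sum of rho * A_i * L_i over all members,
    with rectangular sections A_i = b_i * h_i; `members` holds (b, h, L) in m."""
    return sum(rho * b * h * L for b, h, L in members)
```

During optimization this is the quantity to be minimized subject to the drift, axial-ratio, and strength constraints above.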
3 The PINN Surrogate Model and the Pre-Training Strategy
In this case study, a dataset comprising 2000 samples was generated by randomly sampling within the prescribed ranges of structural design parameters, and the corresponding structural responses were computed using the finite element software SAP2000 to form the input–output pairs for neural-network training. The dataset was then split in a 9:1 ratio, with 1800 samples used for model training and the remaining 200 samples reserved for model evaluation. Unlike conventional purely data-driven networks, the Physics-Informed Neural Network (PINN) incorporates physics-based constraints (e.g., equilibrium equations) into the loss function, enforcing the satisfaction of governing equations and improving both generalization and predictive accuracy. Based on this dataset, a pre-trained PINN surrogate model was developed to establish a mapping from structural design parameters (e.g., cross-sectional height and width) to structural responses (e.g., nodal displacements), enabling rapid and accurate prediction without performing conventional finite element analyses. Consequently, the pre-trained PINN substantially reduces the per-iteration computational cost in structural optimization and provides essential response information for intelligent optimization algorithms.
3.1 Loss Function of the PINN Model
PINNs incorporate both data loss and physics loss into the loss function, enabling the network not only to fit the observed data but also to satisfy specific physical governing equations, such as the static equilibrium equations, during training. The proper formulation of the loss function is central to achieving high prediction accuracy and ensuring physical consistency in PINNs.
The total loss function of the PINN is expressed as a weighted combination of the data loss and the physics loss, as defined in Eq. (6).
In Eq. (6),
The data loss term ensures that the structural responses predicted by the neural network are consistent with the finite element (FEM) results. It is defined as the mean squared error (MSE) between the predicted and reference displacements, as given in Eq. (7).
In Eq. (7),
The physics loss term enforces the network output to satisfy the fundamental governing equations of structural mechanics, thereby enhancing the physical consistency and generalization ability of the network. In this study, the considered physical constraint is the static equilibrium equation, as defined in Eq. (8).
In Eq. (8),
The physics residual is defined as the sum of squared errors of the equilibrium equation described above, as given in Eq. (9). By introducing the residual of the equilibrium equation, the network can be guided to learn physically consistent patterns even in the absence of sufficient data coverage, thereby improving the generalization capability of the surrogate model.
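The equilibrium residual described above can be sketched as follows, assuming the static equilibrium equation takes the linear form K u = F (consistent with Eq. (8)); the function and argument names are ours:

```python
import numpy as np

def physics_residual(K, u_pred, F):
    """Eq. (9)-style physics loss: sum of squared residuals of the static
    equilibrium equation K u = F, evaluated at the predicted displacements.

    K : (n, n) global stiffness matrix; u_pred, F : (n,) vectors
    """
    r = K @ u_pred - F
    return float(np.sum(r ** 2))
```

The residual vanishes only when the predicted displacements exactly satisfy equilibrium, which is what drives the network toward physically consistent outputs even where training data is sparse.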
The weighting of data and physics terms in the loss function has a significant impact on the training performance of PINNs. In this study, a dynamic self-adaptive weighting strategy is adopted to ensure a balanced trade-off between data fitting accuracy and physical consistency. This strategy maintains the data loss and physics loss at the same order of magnitude throughout the training process, each contributing approximately 50% to the total loss.
Specifically, the weight coefficients are initialized as
3.2 Architecture of the PINN Model
To enable efficient prediction of structural responses while satisfying physical constraints, this study constructs a Physics-Informed Neural Network (PINN) surrogate model based on a multilayer perceptron (MLP); the overall structure of the PINN model is shown in Fig. 3. The network architecture is specifically tuned in terms of input and output dimensions, hidden layer configuration, normalization procedures, and training strategies to accommodate the highly nonlinear mapping between structural parameters and structural responses.

Figure 3: PINN network architecture diagram.
The input of the PINN model consists of the structural design parameters, including the heights and widths of 10 groups of frame sections. For the six-story multi-story frame structure studied in this work, a total of 20 input variables are defined and denoted as x. The network output corresponds to the structural responses under a given loading condition, including six degrees of freedom for each node. Since the structure contains 96 nodes, the output consists of 576 nodal displacements, which serve as the predicted responses of the surrogate model.
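The 20-input, 576-output mapping can be sketched as a plain feedforward pass. This is a minimal NumPy illustration of the architecture in Fig. 3, not the trained model; the hidden width of 900 in the usage example is an assumed illustrative value within the 700–1100 range studied in Section 3.3.

```python
import numpy as np

def init_mlp(sizes, seed=0):
    """Randomly initialized weights/biases for an MLP with the given layer sizes
    (e.g., 20 design variables in, 576 nodal displacements out)."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: Tanh on hidden layers (see Section 3.3), linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x
```

Usage: `params = init_mlp([20, 900, 900, 576])` produces a network whose output for a batch of design vectors has shape `(batch, 576)`, matching the 96 nodes × 6 degrees of freedom of the frame.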
To improve the convergence efficiency of neural network training and enhance the generalization ability of structural response prediction, a two-stage pre-training strategy is adopted in this study. This strategy effectively combines the rapid convergence characteristics of data-driven learning with the accuracy improvements offered by physics-informed constraints, thereby improving the training quality and stability of the Physics-Informed Neural Network (PINN).
In the first stage—data pre-training—the network is trained solely based on the data loss, which minimizes the mean squared error (MSE) between the finite element simulation results and the network predictions. The objective of this stage is to guide the network toward a reasonable initial solution region in the parameter space, thus laying a solid foundation for the subsequent joint training. Pre-training in this phase helps avoid issues such as gradient oscillation or optimization stagnation that may arise when physical constraints are introduced too early.
In the second-stage joint training, the physics-residual loss (e.g., the residual term derived from the force-equilibrium equation) is progressively introduced on top of the data loss, and the overall loss formulation still follows Eq. (6) in Section 3.1. It should be emphasized that, in this joint-training stage, the weight of the physics loss is fixed to 1, and only the weight of the data loss is adaptively adjusted. Specifically, every 100 epochs, the data-term weight is updated according to the relative magnitudes of the data loss and the physics loss in the current mini-batch, so that the weighted contributions of the two terms to the total loss are kept as comparable as possible, thereby achieving a dynamic balance in which the data term and the physics term each account for approximately 50% of the total loss.
At the early stage of training, the physics-residual loss is often significantly larger than the data loss. If the weights were kept unchanged, the physics term could dominate backpropagation for a prolonged period, leading to gradient oscillations, convergence difficulties, or even model performance degradation. By fixing the physics-term weight while adaptively tuning the data-term weight, the model can enhance physical consistency progressively while maintaining data-fitting accuracy, preventing any single loss term from excessively dictating the optimization direction. As training proceeds and the two losses gradually approach the same order of magnitude, the data-term weight correspondingly stabilizes, making the joint-training process smoother and the convergence more stable.
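The adaptive weighting schedule described above can be sketched as follows. This is an interpretation of the stated rule (physics weight fixed at 1, data weight refreshed every 100 epochs from the current loss ratio); the exact update used in the paper may differ in detail.

```python
def joint_training_weights(losses, update_every=100, w_phys=1.0):
    """Second-stage weighting sketch: every `update_every` epochs, reset the
    data-term weight from the current loss ratio so both weighted terms
    contribute comparably (~50% each) to the total loss.

    losses : iterable of (loss_data, loss_phys) pairs, one per epoch
    """
    w_data, totals = 1.0, []
    for epoch, (ld, lp) in enumerate(losses):
        if epoch % update_every == 0 and ld > 0:
            w_data = w_phys * lp / ld   # equalize the two weighted contributions
        totals.append(w_data * ld + w_phys * lp)
    return totals
```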
Moreover, compared with directly training the PINN from the beginning using the full loss function (with both data and physics terms activated), the proposed two-stage pre-training strategy not only improves training efficiency but also provides a smoother convergence path; the comparative results will be presented later.
3.3 Hyperparameter Analysis of the PINN Model
The PINN model in this study consists of an input layer, multiple hidden layers, and an output layer. The number of hidden layers (denoted as La) and the number of neurons per layer (denoted as Nu) have a significant impact on the training efficiency, prediction accuracy, and convergence stability of the model. These two parameters are key factors that determine the overall performance of the surrogate model.
If the network architecture is overly complex, it may achieve high accuracy on the training set but is prone to overfitting, leading to poor generalization on unseen samples. Conversely, if the architecture is too simple, the model may lack sufficient capacity to capture the complex nonlinear relationships in structural responses, resulting in reduced prediction accuracy. To determine a suitable network configuration, a series of hyperparameter sensitivity analyses were conducted, comparing model performance under different combinations of hidden layer counts and neurons per layer. The results show that increasing the number of neurons beyond 700 (see Fig. 4a) initially improves convergence speed and reduces the final loss significantly. However, when the number of neurons is increased as far as 1100, the model no longer converges better and may even show degraded training performance. Similarly, increasing the number of hidden layers beyond 2 (see Fig. 4b) initially improves training performance. Nevertheless, increasing the depth to 6 layers leads to a slight decline in convergence speed and final accuracy, which may be attributed to overfitting or vanishing gradients. Considering the trade-offs among convergence speed, prediction accuracy, and model complexity, the configuration with

Figure 4: Impact of network architecture on loss function convergence.
The activation function plays a critical role in neural networks by introducing nonlinearity, enabling the model to learn and approximate complex nonlinear mappings. In this study, to determine the most suitable activation function for structural response prediction, a comparative analysis was conducted among three commonly used activation functions: Sigmoid, ReLU, and Tanh. Their performance under identical network architectures and training conditions was evaluated, as illustrated in Fig. 5.

Figure 5: Comparison of loss convergence behavior using different activation functions.
As shown in Fig. 5a, the Sigmoid activation function exhibits a clear gradient saturation phenomenon during training, with the loss remaining at a high level and fluctuating over a long period without significant reduction, indicating a serious vanishing gradient issue. The ReLU function shows faster convergence in the early stages, but its loss ultimately stagnates at a relatively high level. In contrast, although the Tanh function converges more slowly at the beginning, it undergoes a sharp decrease after approximately 1500 epochs and achieves the lowest final loss, demonstrating superior fitting performance. Fig. 5b presents a zoomed-in view of the later training phase, where it is evident that ReLU suffers from large fluctuations in loss, whereas Tanh exhibits better smoothness and numerical stability. Therefore, to enhance convergence and prediction accuracy, the Tanh activation function is adopted in this study.
3.4 Pre-Training vs. Non-Pre-Training Strategy
To evaluate the effectiveness of the two-stage training strategy, we compare the training performance of the PINN model with and without pre-training.
For the model with pre-training, the second-stage training loss curves exhibit smooth and stable convergence. As shown in Fig. 6a, the data loss remains at a very low level throughout training, indicating that the network preserves the strong predictive capability obtained from the first-stage training and maintains high fidelity to the finite element data. Meanwhile, the physics loss starts from a relatively higher value but decreases steadily as training proceeds, reflecting the progressive enforcement of the equilibrium constraints. The zoomed-in view in Fig. 6b further confirms that, during the later stage (e.g., 10,000–15,000 epochs), the physics loss continues to decline monotonically with only minor fluctuations, while the data loss stays nearly unchanged. This demonstrates that the second-stage training primarily improves physical consistency without sacrificing data fitting accuracy. Overall, these results validate the effectiveness of the two-stage strategy, where pre-training provides a well-initialized model state and the second stage successfully bridges data fidelity and physical constraints, leading to enhanced stability and generalization.

Figure 6: Loss curves during the second stage of training.
In contrast, the PINN model trained without pre-training shows markedly different behavior. As depicted in Fig. 7a, the physics loss exhibits an extremely large initial spike, followed by a rapid drop to a lower level, indicating unstable optimization at the early stage when the model attempts to satisfy the physical equations from a random initialization. However, the zoomed-in plot in Fig. 7b reveals that the physics loss remains highly fluctuating in the later stage (15,000–20,000 epochs), suggesting persistent instability and difficulty in achieving consistent physical enforcement. Meanwhile, the data loss stays at a relatively high level and decreases only slowly over time, implying limited improvement in fitting performance and low optimization efficiency. Such oscillatory behavior indicates conflicting gradients between the data and physics objectives, which prevents the model from reaching a stable balance and makes convergence difficult.

Figure 7: Training loss curves of the PINN model without pre-training.
Further evaluation of the model without pre-training after 20,000 epochs on the test set yields a negative R2, indicating that the model fails to capture the distribution of nodal displacements and is therefore unsuitable as a structural response surrogate. In summary, the pre-training strategy significantly improves convergence smoothness, training stability, and physical consistency, and effectively mitigates the severe oscillations observed when training the PINN from scratch. Therefore, the subsequent experiments adopt the pre-training strategy for the PINN surrogate model.
3.5 Performance Comparison between PINN and DNN
Figs. 8–17 compare the predicted and ground-truth displacements for several representative samples selected from the test subset of the dataset. In each figure, subfigure (a) reports the results of the two-stage pre-trained PINN surrogate (5000 epochs of pre-training followed by 1500 epochs of PINN training), whereas subfigure (b) shows the results of the DNN surrogate trained for 20,000 epochs until full convergence. Notably, subfigures (a) and (b) are evaluated on the same test samples, and the test set is used only for performance evaluation and is never involved in model training.

Figure 8: Sample 1: Displacement prediction vs. FEM.

Figure 9: Sample 2: Displacement prediction vs. FEM.

Figure 10: Sample 3: Displacement prediction vs. FEM.

Figure 11: Sample 4: Displacement prediction vs. FEM.

Figure 12: Sample 5: Displacement prediction vs. FEM.

Figure 13: Sample 6: Displacement prediction vs. FEM.

Figure 14: Sample 7: Displacement prediction vs. FEM.

Figure 15: Sample 8: Displacement prediction vs. FEM.

Figure 16: Sample 9: Displacement prediction vs. FEM.

Figure 17: Sample 10: Displacement prediction vs. FEM.
Both the PINN and DNN models achieved data losses on the order of 10⁻³ on the training dataset. The average coefficient of determination (R2) on the test set was 0.9983 for the PINN model and 0.9948 for the DNN model. As the visual comparison shows, both models exhibit excellent fitting performance on the test samples, with the predicted displacement curves closely matching the ground truth.
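For reference, the R2 score used to compare the surrogates (and whose negative values flag a model worse than predicting the mean, as seen in Section 3.4) is the standard coefficient of determination:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```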
Further comparison of the data loss and mean squared error (MSE) metrics shows that both models reach similarly low error levels on the test set, indicating strong predictive capabilities. Although the PINN model achieves a slightly higher R2 value for displacement prediction, the improvement is not significant. However, as will be shown in the subsequent internal force prediction results, the difference in performance between the two models becomes much more pronounced.
After predicting the nodal displacements using the surrogate model, the member end forces can be calculated based on the displacement results. The detailed procedure is as follows.
1. Predict the displacements of all structural nodes using the surrogate model;
2. Read the element connectivity (topology) data to establish the correspondence between nodes and elements;
3. Construct the stiffness matrix of each member in the local coordinate system;
4. Transform the local stiffness matrices to the global coordinate system using direction cosines;
5. Assemble the global displacement vector of the entire structure;
6. Compute the member end forces in the global coordinate system using the equation Fe = KeUe.
In this process, the element stiffness matrix for each member is based on the spatial beam element stiffness formulation, as given in Eq. (10).
In Eq. (10), Ke represents the element stiffness matrix of a three-dimensional spatial beam element in the local coordinate system. E is the elastic modulus, A is the cross-sectional area, and L is the length of the member. Iy and Iz are the moments of inertia about the local y- and z-axes, respectively, and J is the torsional constant of the cross-section, representing the member’s torsional stiffness. This matrix accounts for axial tension/compression, bending in two directions, and torsion, and serves as the foundation for finite element stiffness assembly and internal force computation.
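As a minimal sketch of steps 3–6, the example below uses a planar (2D) beam element for brevity; the full 12×12 spatial matrix of Eq. (10) adds the out-of-plane bending and torsion blocks in the same pattern. The function names and any numeric values are illustrative, not taken from the paper.

```python
import numpy as np

def beam_stiffness_local(E, A, I, L):
    """Local stiffness matrix of a planar beam element (axial + bending),
    a 6x6 analogue of the 12x12 spatial beam matrix in Eq. (10)."""
    a = E * A / L
    b = E * I / L**3
    return np.array([
        [  a,        0.0,          0.0,       -a,        0.0,          0.0],
        [0.0,     12 * b,    6 * b * L,      0.0,    -12 * b,    6 * b * L],
        [0.0,  6 * b * L, 4 * b * L**2,      0.0, -6 * b * L, 2 * b * L**2],
        [ -a,        0.0,          0.0,        a,        0.0,          0.0],
        [0.0,    -12 * b,   -6 * b * L,      0.0,     12 * b,   -6 * b * L],
        [0.0,  6 * b * L, 2 * b * L**2,      0.0, -6 * b * L, 4 * b * L**2],
    ])

def member_end_forces(K_local, T, u_global):
    """Steps 4-6: rotate the element's global DOFs into the local frame
    and apply F_e = K_e U_e."""
    return K_local @ (T @ u_global)
```

For a member aligned with the global x-axis the transformation matrix T reduces to the identity, so the surrogate-predicted nodal displacements of the element can be substituted directly.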
As shown in Figs. 18–27, although the DNN and PINN models exhibit similar levels of error in nodal displacement predictions, there is a noticeable difference in the accuracy of the computed internal forces. This discrepancy arises because internal forces are derived from the displacements using the equation Fe = KeUe. In this process, small errors in displacement can be amplified by the stiffness matrix, leading to significantly larger errors in internal force predictions.

Figure 18: Comparison of calculated internal forces for Sample 1.

Figure 19: Comparison of calculated internal forces for Sample 2.

Figure 20: Comparison of calculated internal forces for Sample 3.

Figure 21: Comparison of calculated internal forces for Sample 4.

Figure 22: Comparison of calculated internal forces for Sample 5.

Figure 23: Comparison of calculated internal forces for Sample 6.

Figure 24: Comparison of calculated internal forces for Sample 7.

Figure 25: Comparison of calculated internal forces for Sample 8.

Figure 26: Comparison of calculated internal forces for Sample 9.

Figure 27: Comparison of calculated internal forces for Sample 10.
According to the theory of error amplification, linear transformations such as multiplication by Ke can magnify input errors, especially when the matrix is ill-conditioned, i.e., its eigenvalues span a wide range. Compared with the DNN, the PINN incorporates a physics-based loss term during training (the equilibrium-equation constraint), which effectively suppresses the accumulation and amplification of errors. As a result, the PINN model demonstrates higher accuracy and greater stability in internal force calculations.
The internal forces are obtained from the linear relationship Fe = KeUe, where Fe is the member end-force vector, Ke is the element stiffness matrix, and Ue is the nodal displacement vector of the element. If the predicted displacement contains an error δUe, the resulting internal force error is δFe = KeδUe.
According to matrix norm theory, ‖δFe‖ ≤ ‖Ke‖·‖δUe‖.
This indicates that even if the displacement prediction error ‖δUe‖ is small, multiplication by a stiffness matrix with widely spread eigenvalues can amplify it into a much larger internal force error, which explains why comparable displacement accuracy can yield markedly different internal force accuracy.
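The amplification effect is easy to reproduce numerically. The diagonal matrix below is a deliberately exaggerated stand-in for an ill-conditioned stiffness matrix; it is not taken from the case-study structure.

```python
import numpy as np

# Stiffness-like matrix with widely spread eigenvalues: cond(K) = 1e4.
K = np.diag([1.0, 1.0e4])

u_true = np.array([1.0, 0.0])     # displacement along the soft direction
du     = np.array([0.0, 1.0e-3])  # 0.1% error along the stiff direction

rel_u = np.linalg.norm(du) / np.linalg.norm(u_true)
rel_f = np.linalg.norm(K @ du) / np.linalg.norm(K @ u_true)

# The norm bound always holds: ||K du|| <= ||K|| ||du|| ...
assert np.linalg.norm(K @ du) <= np.linalg.norm(K, 2) * np.linalg.norm(du) + 1e-12
# ... and in the worst case the relative force error reaches cond(K) times
# the relative displacement error: here 0.1% in u becomes 1000% in F.
amplification = rel_f / rel_u
```

The same mechanism operates, less dramatically, in real element stiffness matrices whose axial and bending terms differ by orders of magnitude.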
As shown in Figs. 18–27, the internal forces computed from displacements predicted by the pre-trained PINN surrogate model are significantly more accurate than those from the DNN model. The coefficient of determination (R²) for all ten test samples exceeds 0.99, indicating that the PINN model delivers excellent accuracy and stability in internal force prediction. Such predictive performance lays a solid foundation for the accuracy and convergence efficiency of the subsequent structural optimization algorithm.
4 Optimization Framework Based on the Pre-Trained PINN Surrogate Model
To achieve efficient optimization of a multi-story frame structure under multiple performance constraints, this study proposes an integrated structural optimization framework that combines a pre-trained Physics-Informed Neural Network (PINN) surrogate model with a Multi-Swarm Particle Swarm Optimization (MPSO) algorithm. The framework leverages the pre-trained PINN model to replace conventional finite element analysis, enabling rapid prediction of structural responses, while the MPSO algorithm explores the complex design space to identify optimal structural parameter configurations, thereby significantly enhancing the overall optimization efficiency.
4.1 Multi-Swarm Particle Swarm Optimization (MPSO) Algorithm
In the traditional Particle Swarm Optimization (PSO) algorithm, all particles share the same global best information, making the population prone to premature convergence and easily trapped in local optima. Additionally, the classical update formulas are primarily designed for continuous variables, which are not well-suited for the optimization of a multi-story frame structure involving a large number of discrete section dimensions and multiple constraints. To address these issues, this study adopts the Multi-Swarm Particle Swarm Optimization (MPSO) approach, whose core improvements can be summarized in the following four aspects.
1. Multi-swarm collaboration mechanism: All particles are randomly divided into multiple sub-swarms, each performing independent local searches. At fixed generational intervals, the sub-swarms exchange their best-found information to enhance the overall global search capability. This strategy helps maintain population diversity, reduces the likelihood of being trapped in local optima, and significantly improves the ability to escape local extrema.
2. Adaptive inertia weight adjustment: A linearly decreasing inertia weight is introduced (e.g., from 0.8 down to 0.5) to enhance global exploration in the early stages of iteration and promote local convergence in the later stages. This approach avoids reliance on manually tuned parameters and improves the adaptability and stability of the algorithm.
3. Discrete index encoding strategy: For discrete structural design problems, particle positions are represented using “size indices” instead of continuous real-valued variables. After each particle update, a round + clip operation is applied to ensure that the position remains within the feasible index grid. This guarantees the validity of design variables and prevents information loss or accuracy degradation caused by discretization errors.
4. Penalty function and constraint-handling integration mechanism: By incorporating structural performance indicators such as inter-story drift ratio, column axial compression ratio, and beam moment ratio, along with the count of flexural capacity violations, a composite penalty weight is constructed. This dynamic adjustment to the objective function penalizes design solutions that violate constraints, effectively preventing infeasible solutions from participating in the search process. In doing so, the framework unifies constraint handling and objective optimization for improved efficiency.
Eqs. (14)–(16) describe the fundamental rules for updating particle positions and velocities in the MPSO algorithm. Specifically, they include the velocity update guided by individual and sub-swarm best positions (Eq. (14)), the discrete index-based position update strategy (Eq. (15)), and the inertia weight adjustment strategy that gradually decays over iterations (Eq. (16)).
In Eq. (14), each particle's velocity is updated from its previous velocity, weighted by the inertia coefficient, plus two attraction terms: one toward the particle's personal best position, scaled by the individual learning factor and a uniform random number, and one toward the best position found within its sub-swarm, scaled by the social learning factor and a second random number.
Eq. (15) defines the particle's position update rule. The use of a round-and-clip operation after each update maps the continuous position back onto the feasible grid of discrete size indices, so that every candidate design corresponds to an available standard section.
Eq. (16) defines the linearly decreasing inertia weight function. At the start of the iteration the weight takes its maximum value (0.8) to favor global exploration, and it decays linearly to its minimum value (0.5) at the final generation to promote local convergence.
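The update rules of Eqs. (14)–(16) can be sketched for a single sub-swarm as follows; the learning-factor values c1 and c2 are placeholders, since this sketch does not restate the paper's exact settings.

```python
import numpy as np

def inertia_weight(t, T, w_max=0.8, w_min=0.5):
    """Eq. (16): inertia weight decreasing linearly from w_max to w_min."""
    return w_max - (w_max - w_min) * t / T

def mpso_step(x, v, p_best, g_sub, t, T, n_sizes, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update for a single sub-swarm (Eqs. (14)-(15)).
    Positions x are integer indices into the table of standard sections;
    c1 and c2 are assumed values."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    w = inertia_weight(t, T)
    # Eq. (14): inertia + attraction to personal and sub-swarm best positions
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_sub - x)
    # Eq. (15): round + clip keeps every design on the feasible index grid
    x = np.clip(np.rint(x + v), 0, n_sizes - 1).astype(int)
    return x, v
```

At fixed generational intervals, the sub-swarms would additionally exchange their best-found positions, as described in the multi-swarm collaboration mechanism above.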
To account for multiple constraints at both the structural and member levels, an external penalty method is adopted to transform the constrained optimization problem into an unconstrained one. The optimization is then carried out by constructing an objective function with penalty terms, as defined in Eq. (17).
In the equation, the base objective is the total material consumption of the structure, and each constraint violation contributes an additional term weighted by a penalty coefficient, so that infeasible designs receive an inflated objective value and are naturally driven out of the search.
The violation of each constraint is quantified by an indicator function that returns zero when the constraint is satisfied and the normalized degree of exceedance otherwise, so only the violated portion of a constraint is penalized.
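A minimal sketch of such a penalized objective, assuming each response ratio has been normalized by its code limit (values above 1.0 indicate violation) and using a hypothetical penalty weight rho:

```python
def penalized_objective(volume, drift_ratio, axial_ratio, moment_ratio,
                        n_capacity_violations, rho=1.0e3):
    """External-penalty objective in the spirit of Eq. (17). Each response
    ratio is assumed pre-normalized by its code limit (<= 1.0 is feasible);
    rho is a hypothetical penalty weight, not the paper's setting."""
    # only the exceedance beyond the limit is penalized
    exceedances = (max(0.0, drift_ratio - 1.0)
                   + max(0.0, axial_ratio - 1.0)
                   + max(0.0, moment_ratio - 1.0)
                   + n_capacity_violations)
    return volume + rho * exceedances
```

A feasible design thus keeps its raw material-volume objective, while any violation inflates the objective in proportion to how far the limit is exceeded.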
4.2 Workflow of the Optimization Method
The optimization framework based on the pre-trained PINN surrogate model is illustrated in Fig. 28. The iteration process terminates when the number of iterations reaches the preset maximum number of generations.

Figure 28: Optimization framework based on the pre-trained PINN surrogate model.
This section applies the proposed framework to the design optimization of the multi-story frame structure introduced in Section 2, in order to validate the prediction stability and optimization performance of the pre-trained PINN surrogate model. Building on this, the method is further extended to a ten-story frame structure to examine its applicability and robustness at a different structural scale.
4.3.1 Optimization Case Study of a Multi-Story Frame Structure
In this optimization, the total number of MPSO iterations is set to 120. The individual and social learning factors are held fixed throughout the run, while the inertia weight decreases linearly from 0.8 to 0.5 as described in Section 4.1.
As illustrated in Fig. 29, the optimization process exhibits rapid convergence in the initial stage and then gradually stabilizes. The total objective value decreases from 230 t to 144 t, corresponding to an approximate 37% reduction in concrete volume. In terms of computational efficiency, the proposed optimization framework based on the PINN surrogate model requires a total of 4617 s (4390 s for surrogate-model training and only 227 s for the optimization loop). By contrast, using the same MPSO algorithm but evaluating structural responses via conventional finite element analysis is estimated to take 14,621 s. The proposed method therefore achieves a 68.42% reduction in computational time.

Figure 29: Objective function iteration curve.
From Fig. 30, the inter-story drift ratio gradually approaches the limit value of 1.0 and stabilizes near it, indicating a reasonable stiffness distribution across the structure. Fig. 31 shows that the column axial compression ratio fluctuates in the early stage but eventually converges below the limit, confirming that the structural load-bearing capacity meets safety requirements. In Fig. 32, the flexural moment ratio of the beams remains within the range of 0.25–0.38, well below the limit of 1.0, suggesting ample flexural capacity and no significant risk of exceeding the allowable moment. This provides a useful preliminary reference for concrete structural design, and future work can further optimize reinforcement detailing.

Figure 30: Inter-story drift ratio monitoring curve.

Figure 31: Axial compression ratio monitoring curve.

Figure 32: Flexural capacity monitoring curve.
Finally, the optimized results were verified using the finite element software SAP2000. The inter-story drift angles of the structure, shown in Fig. 33, indicate that slight constraint violations may occur when responses lie near their limits, owing to the small prediction errors of the surrogate model. It should be noted that this study focuses on the preliminary design phase, where optimization is restricted to member sectional dimensions. At this stage, reinforcement is accounted for only through the minimum reinforcement ratio, and its full contribution to structural stiffness is not yet incorporated. These minor exceedances can be effectively resolved during the detailed design stage (construction drawing phase), where actual reinforcement detailing and final verification are performed.

Figure 33: Inter-story drift angles of the optimized structure.
The overall structural stiffness is reasonable and meets deformation performance requirements. The axial compression ratios of the columns, shown in Fig. 34, reveal that although Column 43 has a relatively high ratio, it does not exceed the limit, and all columns comply with the code requirements. Finally, the optimized design was re-analyzed by finite element calculation to compare the actual internal forces with the predicted values. As shown in Fig. 35, the coefficient of determination (R²) for the optimal solution reaches 0.9910, indicating that the final predicted results are reliable. In addition, Table 3 reports a component-wise comparison between the FEM-computed internal forces and the surrogate-predicted values for several representative members in the optimized design. Since the structure contains a large number of members, only selected results are presented for brevity; the complete set of comparison data is available from the authors upon request (via email).

Figure 34: Axial compression ratios of the optimized structure.

Figure 35: Comparison of predicted and actual internal forces for the optimized result.

4.3.2 Optimization Case Study of a Ten-Story Frame Structure
In addition to the six-story frame structure presented above, this study further verifies the scalability and transferability of the proposed method through an optimization case of a ten-story reinforced concrete frame structure. The three-dimensional model of the structure is shown in Fig. 36, where Stories 1–3 constitute the first typical segment, Stories 4–6 the second segment, and Stories 7–10 the third segment. The same cross-section grouping strategy as before is adopted for each segment: members are categorized into five groups of standard sections, resulting in a total of 15 section groups to be optimized. The optimization process again employs the multi-swarm particle swarm optimization (MPSO) algorithm coupled with the pre-trained PINN surrogate model for rapid structural response evaluation. Meanwhile, to ensure a consistent basis for comparison, the construction of the surrogate-model training dataset follows the same sampling strategy and uses the same number of samples as in the six-story case. It should be noted that a different loading condition is considered in this case; the load combination adopted is 1.2RG + 1.3WINDX.

Figure 36: Three-dimensional finite element model.
The total objective value decreases from 844 t to 749 t, i.e., a reduction of 95 t, which corresponds to an approximate 11.26% reduction in concrete volume. In terms of computational efficiency, the proposed optimization framework based on the PINN surrogate model requires about 5339 s in total (4987 s for surrogate-model training and only 352 s for the optimization loop). By contrast, using the same MPSO algorithm while evaluating structural responses via conventional finite element analysis is estimated to take 19,860 s. The proposed method therefore significantly outperforms the traditional approach, reducing the computational time by approximately 73.12%.
As shown in Fig. 37, the optimization process converges rapidly at the early stage: the objective value drops sharply with iterations and then gradually levels off and stabilizes, indicating that the proposed workflow can quickly locate a near-optimal search region and achieve stable convergence. The constraint-monitoring results further demonstrate (see Fig. 38) that the inter-story drift ratio progressively approaches the limit value of 1.0 and remains stable near it in the later iterations, suggesting a reasonable distribution of lateral stiffness and confirming that the inter-story drift constraint is the governing (active) constraint in this case.

Figure 37: Objective function iteration curve.

Figure 38: Inter-story drift ratio monitoring curve.
In contrast, the column axial compression ratio and the beam flexural-capacity ratio shown in Figs. 39 and 40 remain well below their respective limits and stay stable throughout the iterations, indicating that strength-related constraints are not controlling factors in this example. Overall, the optimized design is primarily governed by the lateral deformation constraint, while the strength constraints retain considerable margins; future work may further improve the design economy by refining reinforcement detailing under the premise of satisfying the drift constraint.

Figure 39: Axial compression ratio monitoring curve.

Figure 40: Flexural capacity monitoring curve.
Finally, the optimal design was substituted into the finite element model to verify both the constraint satisfaction and the prediction accuracy of the surrogate model. As shown in Fig. 41, the inter-story drift ratios of all stories are generally close to the limit value; slight exceedances are observed at Stories 4, 7, and 8, with values of 1.012, 1.014, and 1.006, respectively (the maximum exceedance is approximately 1.36%), while the remaining stories satisfy the limit. Meanwhile, as illustrated in Fig. 42, the PINN model shows good agreement with the FEM results in internal-force prediction, achieving an R² value of 0.9918, which indicates high reliability of the surrogate-predicted internal forces at the engineering-quantity level. Furthermore, the slight drift exceedance together with the remaining margins in strength-related indices suggests that the optimized design is primarily governed by lateral deformation constraints. Future work can further refine reinforcement and structural detailing to enhance both structural safety and economy.

Figure 41: Inter-story drift angles of the optimized structure.

Figure 42: Comparison of predicted and actual internal forces for the optimized result.
This paper proposes an optimization design framework for a multi-story frame structure based on a pre-trained Physics-Informed Neural Network (PINN) surrogate model combined with a Multi-Swarm Particle Swarm Optimization (MPSO) algorithm. Unlike widely used data-driven surrogate models such as Deep Neural Networks (DNNs), the proposed PINN model incorporates equilibrium equations into the network architecture and jointly considers data loss and physics-based loss. As a result, it achieves higher accuracy in predicting structural nodal displacements than DNNs. In particular, the PINN model shows significant advantages in predicting internal forces: on the same test dataset, the average coefficient of determination (R²) for internal force prediction is only 0.8874 for the DNN model, whereas the PINN model achieves an R² value as high as 0.9937. This demonstrates that the PINN surrogate model can effectively replace finite element models for structural optimization. The case study further indicates that the adoption of pre-training significantly improves the convergence speed and stability of the PINN model. When integrated with the MPSO algorithm, the framework can rapidly produce optimized results, making it highly applicable to engineering practice. This method offers a promising new approach for fast and accurate structural optimization design of multi-story frame structures.
Surrogate models may exhibit marginal discrepancies in structural response predictions compared to direct FEA. When responses are near their threshold limits, these small prediction residuals can lead to minor constraint violations; for instance, the inter-story drift ratio in our case exceeded the limit by only 0.5%. First, the PINN-based surrogate provides significantly higher fidelity than conventional DNN models, keeping such exceedances to a minimum. Second, the current optimization of member sections belongs to the preliminary design stage, where reinforcement is accounted for only via the minimum reinforcement ratio; the stiffening effect of actual reinforcement is not yet incorporated. In the subsequent construction drawing stage, the additional stiffness provided by actual reinforcement will, in most scenarios, easily compensate for these marginal violations. Final design verification will further ensure that all constraints are strictly satisfied before implementation.
Acknowledgement: The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (52538010) and the Guangzhou Municipal Education Bureau’s Scientific Research Project (2024312217).
Funding Statement: The work described in this paper is fully supported by grants from the National Natural Science Foundation of China (52538010) and the Guangzhou Municipal Education Bureau’s Scientific Research Project, China (2024312217). The financial support is gratefully acknowledged.
Author Contributions: An Xu: Conceptualization, Supervision, Funding acquisition, Project administration, Writing—review & editing. Zhixiong Liu: Methodology, Software, Investigation, Data curation, Writing—original draft, Writing—review & editing. Hua Rong: Visualization, Validation. Liang Han: Supervision, Funding acquisition, Writing—review & editing. Wei Shi: Validation, Resources, Writing—review & editing. Jun Huang: Project administration, Resources, Writing—review & editing. Jiyang Fu: Funding acquisition, Resources, Writing—review & editing. All authors reviewed and approved the final version of the manuscript.
Availability of Data and Materials: The code of this paper will be available on request at xuan@gzhu.edu.cn (An Xu) after formal publication.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

