Open Access

REVIEW

Physics-Informed Neural Networks: Current Progress and Challenges in Computational Solid and Structural Mechanics

Itthidet Thawon1,2, Duy Vo3,4, Tinh Quoc Bui3,4, Kanya Rattanamongkhonkun1, Chakkapong Chamroon1, Nakorn Tippayawong1, Yuttana Mona1, Ramnarong Wanison1, Pana Suttakul1,*

1 Department of Mechanical Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand
2 Office of Research Administration, Chiang Mai University, Chiang Mai, Thailand
3 Duy Tan Research Institute for Computational Engineering (DTRICE), Duy Tan University, Ho Chi Minh City, Vietnam
4 Faculty of Civil Engineering, Duy Tan University, Da Nang, Vietnam

* Corresponding Author: Pana Suttakul. Email: email

(This article belongs to the Special Issue: Data-Driven Artificial Intelligence and Machine Learning in Computational Modelling for Engineering and Applied Sciences)

Computer Modeling in Engineering & Sciences 2026, 146(2), 2 https://doi.org/10.32604/cmes.2026.077044

Abstract

Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building upon this trend, the review consolidates advancements across five principal application domains, including forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, as well as hybrid finite element–PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.

Keywords

Artificial Intelligence; physics-informed neural networks; computational mechanics; bibliometric analysis; solid mechanics; structural mechanics

1  Introduction

Machine learning is a rapidly evolving field capable of processing and analyzing data from diverse perspectives [1–5]. It has become a widely used tool for understanding and predicting the behavior of target variables, with various machine learning algorithms demonstrating strong performance in identifying influential factors associated with these variables. As a data-driven approach, the effectiveness of machine learning largely depends on the quality and quantity of the training data. While machine learning models can capture complex relationships between inputs and outputs to predict the behavior of physical systems, they do not inherently incorporate or understand the underlying physical laws governing those systems.

In recent years, the integration of machine learning with traditional computational methods has led to significant advancements in tackling complex scientific and engineering challenges [6–10]. One of the notable innovations in this area is physics-informed neural networks (PINNs), which incorporate fundamental physical laws, such as partial differential equations (PDEs), directly into the training process of neural networks [11–13]. By embedding governing equations into their structure, PINNs enable the modeling of physical systems in a way that leverages both data-driven learning and prior physical knowledge. As a result, PINNs can provide accurate and physically consistent solutions for both forward and inverse problems across a wide range of engineering applications [14–18].

Classical numerical methods, such as the finite element method (FEM) and the finite difference method (FDM), have long been standard tools for solving PDEs in engineering problems. While these methods are reliable for complex systems, they often encounter challenges in inverse analysis, high-dimensional problems, strong nonlinearities, and sparse data, all of which are common in solid mechanics. PINNs address these limitations by embedding known physical laws directly into their loss function, reducing reliance on extensive datasets and enhancing the physical consistency of predictions [19]. By leveraging both empirical data and fundamental equations, PINNs present a promising alternative that can significantly improve the efficiency and accuracy of simulations compared to purely data-driven approaches [20].

PINNs have had a substantial impact across various areas of computational mechanics [21–25]. Especially in the field of solid and structural mechanics, they have been employed to model complex material behavior, perform structural analyses, predict stress–strain responses under various loading conditions, and identify unknown material properties or damage through inverse modeling.

Despite their advantages, implementing PINNs poses challenges, particularly concerning computational efficiency and training complexity [25,26]. Optimizing network parameters while ensuring compliance with physical laws can lead to high computational costs, especially for high-dimensional problems. Furthermore, the performance of PINNs is sensitive to network architecture and initialization strategies, which can affect convergence rates and solution accuracy [27,28]. Researchers have been focusing on overcoming these challenges by employing advanced training methods, such as domain decomposition techniques and adaptive loss balancing, to make PINNs more suitable for large-scale and real-time applications [29–31]. In addition, recent studies have explored the integration of discretized physics, such as finite element formulations, into neural network architectures as an alternative to continuous governing equations, with the aim of reducing computational burden while preserving physical fidelity [32–36].

Although several review articles have examined PINNs from broad, cross-disciplinary perspectives, a focused synthesis dedicated to solid and structural mechanics remains limited. This study aims to provide a focused bibliometric analysis and critical review of PINNs with explicit emphasis on computational solid and structural mechanics, a domain that poses distinct challenges compared to generic PDE problems. This work makes three specific contributions. First, it presents a solid-mechanics-centered bibliometric analysis that identifies research trends, influential publications, and thematic clusters. Second, the review organizes existing studies into a mechanics-oriented classification, distinguishing between forward analysis, inverse identification, optimization, damage and fracture modeling, and manufacturing-related applications. Third, by synthesizing methodological advances alongside application case studies, this paper highlights mechanics-specific limitations and research gaps that hinder practical adoption. Note that, during the preparation of this manuscript, an AI-assisted tool was used as a support tool to summarize key ideas from the literature and improve clarity of language. The tool did not contribute to the generation of scientific content, data analysis, or interpretation. Through this combined bibliometric, application, and technical perspective, the study advances understanding of how PINNs can be effectively deployed in solid and structural mechanics and provides actionable directions for future research.

2  Bibliometric Analysis

Bibliometric analysis is a valuable approach for systematically understanding scholarly research landscapes [37,38]. In this study, bibliometric techniques were applied to evaluate the contributions of authors, countries, and journals to the advancement of PINNs. The influence of research articles was further assessed through citation counts, while keyword analysis was conducted to reveal thematic trends within the field. To visualize these relationships, VOSviewer software was employed [39–42]. VOSviewer is a specialized tool for constructing and exploring bibliometric networks, including co-authorship, co-citation, and keyword co-occurrence analyses. By generating network maps from bibliographic data, it enables the identification of research structures, emerging themes, and collaboration patterns. The software also provides diverse visualization and quantitative metrics, supporting a deeper interpretation of complex bibliometric information [43–45].

Data collection is an essential step prior to conducting the bibliometric analysis; it involved sourcing relevant publications from a database according to predefined criteria. The overall methodological framework is illustrated in Fig. 1. To ensure consistency and avoid duplication, a single database was selected [46]. Scopus was chosen due to its accessibility and comprehensive coverage of scientific documents [47–49]. The timespan was restricted to 2018–2025 (data accessed on 27 September 2025), with results limited to documents written in English and classified under the subject area of Engineering.


Figure 1: The detailed procedure of bibliometric analysis. The asterisk symbol (*) denotes a wildcard operator in the Scopus advanced search, which is used to retrieve all word variants sharing the same root.

The search strategy was carefully designed to capture the most relevant studies. Initially, the keyword “physics-informed” was required to appear in the title field. This was combined with machine learning-related terms, namely “machine learning”, “deep learning”, or “neural network*”, also within the title field. To further refine the scope toward solid and structural mechanics, additional terms such as solid, structur*, material*, and mechanic* were searched within the title, abstract, and keywords. At the same time, unrelated domains were excluded by filtering out publications containing fluid*, flow*, or thermal* in these fields. Accordingly, the final advanced search query was formulated as follows:

(TITLE(physics-informed) AND TITLE (“machine learning” OR “deep learning” OR “neural network*”) AND TITLE-ABS-KEY (solid* OR structur* OR material* OR mechanic*) AND NOT TITLE-ABS-KEY (fluid* OR flow* OR thermal*)) AND PUBYEAR > 2017 AND PUBYEAR < 2026 AND (LIMIT-TO (SUBJAREA, “ENGI”)) AND (LIMIT-TO (LANGUAGE, “English”))

The above search strategy was intentionally designed to prioritize precision and reproducibility in bibliometric trend and cluster analyses. As a consequence, relevant studies that employ PINN methodologies but use alternative terminology (e.g., “scientific machine learning,” “physics-guided,” or application-specific naming) or focus on strongly coupled thermomechanical problems may not be fully captured in the bibliometric dataset. This represents a trade-off between precision and coverage. To mitigate this limitation, the narrative review presented in later sections of this paper was based on broader searches across titles, abstracts, and keywords, using expanded keyword combinations and manual screening, ensuring comprehensive coverage of PINN applications in solid and structural mechanics.

Using this bibliometric search strategy, a total of 660 documents related to the field were identified during the period 2018–2025 (up to 27 September 2025). The bibliographic dataset, comprising information such as authors, titles, keywords, and document types, was retrieved from the Scopus database for subsequent analysis. Fig. 2 illustrates the annual distribution of publications, which demonstrates a steady upward trajectory over the study period. The research output expanded from a single publication in 2018 to 244 publications by September 2025, highlighting the growing scholarly interest in the topic.


Figure 2: Annual and cumulative publication trends (2018–2025), highlighting rapid growth and the predominance of journal articles (up to 27 September 2025).

In terms of document types, journal articles dominate the dataset, accounting for 74.55% of total publications, followed by conference papers (21.67%), review papers (2.12%), book chapters (0.76%), and other categories (0.91%). This distribution underscores the central role of peer-reviewed journal articles in advancing the field, while conference papers also contribute significantly to disseminating emerging findings.

The ranking of the top 10 countries/territories by publication output is presented in Fig. 3. China and the United States dominate the field, with 286 and 182 publications, respectively. Despite producing fewer documents, the United States demonstrates a comparatively higher citation count, reflecting a strong impact within the scholarly community. Germany, the United Kingdom, and India follow, each contributing more than 30 publications. Hong Kong, South Korea, Australia, and France each produced over 20 publications, while Italy contributed 19 publications accompanied by relatively higher citation counts.


Figure 3: Top 10 countries/territories contributing to PINNs in solid and structural mechanics.

To highlight the global distribution of research, Fig. 4 presents scientific production in a world map format. Darker shades indicate countries with higher research output, while lighter shades correspond to lower levels of activity. The analysis underscores substantial international engagement with PINNs in solid and structural mechanics, with China and the United States emerging as the leading contributors. European countries such as Germany, the United Kingdom, and France also make notable contributions, while significant outputs are also evident in India, Hong Kong, Republic of Korea, and Australia. Importantly, the dataset also includes countries with only a single publication, demonstrating the widespread and global nature of research efforts in this field.


Figure 4: Global map of PINN-related scientific production in solid and structural mechanics.

Publications in this field are primarily concentrated in peer-reviewed journals, as illustrated in Fig. 5, which presents the top 10 publication sources ranked by document output. The analysis highlights the dual dimensions of journal productivity and scholarly impact. The leading source is Computer Methods in Applied Mechanics and Engineering, with 35 publications and the highest citation count (2441 citations), indicating its central role in disseminating influential research. This is followed by Engineering Applications of Artificial Intelligence with 20 publications and 445 citations. Other notable sources include Engineering Structures and Mechanical Systems and Signal Processing, each contributing more than 10 publications. Additional journals such as the International Journal of Fatigue, Engineering Fracture Mechanics, and Engineering with Computers also exhibit substantial activity, reflecting their significance as outlets for advancing knowledge in this domain. Collectively, these findings underscore that high-impact research on PINNs in solid and structural mechanics is being published across both general computational mechanics journals and specialized engineering outlets.


Figure 5: Top 10 publication sources for PINNs in solid and structural mechanics.

PINNs have attracted considerable global attention, fostering extensive collaboration among researchers. Fig. 6, generated using VOSviewer, illustrates the largest co-authorship network in this field, where different colors denote distinct collaborative clusters. Larger nodes represent more prolific authors, while the proximity between nodes indicates the strength of co-authorship links. The analysis highlights several prominent clusters. Notably, Gu, Yuantong and Bai, Jinshuai form central nodes in one of the largest clusters (green), connecting with researchers such as Jeong, Hyogu, Batuwatta-Gamage, C.P., and Zhou, Ying, reflecting strong collaborative activity. Another influential cluster (red) is centered on Rabczuk, Timon, demonstrating a tightly connected research community. Additional clusters include groups led by Tang, Ming (blue), Li, Zilin (purple), Zhuang, Xiaoying (yellow), and Gui, Yilin (light blue), each linking networks of researchers working on related topics. Overall, the network structure demonstrates that while the field is composed of multiple independent research groups, several key authors act as central connectors. These authors facilitate cross-group collaborations, shaping research patterns and advancing the development of PINNs in solid and structural mechanics.


Figure 6: Visualization of the author network in PINN research for solid and structural mechanics, showing clustered collaboration patterns and key researchers acting as central hubs across research groups.

The top 10 authors with the highest number of publications are listed in Fig. 7, confirming their prominence, as indicated by the larger nodes in Fig. 6. The results reveal that while several authors have produced a relatively modest number of publications, their impact, as measured by citations, is substantial. For instance, Gu, Yuantong leads the list with 10 publications and 396 citations, followed closely by Bai, Jinshuai with 9 publications and 382 citations. Notably, Rabczuk, Timon, despite publishing only 6 documents, has received the highest citation count (849 citations), underscoring his significant scholarly influence in the field.


Figure 7: Top 10 authors in PINN-related solid and structural mechanics with the highest number of publications.

The 10 most-cited papers in the field are summarized in Table 1. The most influential work is Cuomo et al. [11] in the Journal of Scientific Computing with 1313 citations, followed by Haghighat et al. [50] and Goswami et al. [51]. Computer Methods in Applied Mechanics and Engineering accounts for 4 of the top 10 papers, underscoring its central role in disseminating PINNs research in mechanics. Of these publications, nine are research articles and one is a review, indicating that impact is largely driven by original contributions. The temporal distribution spans 2020–2023; while earlier works (2020–2022) have naturally accumulated more citations, several 2023 publications have already surpassed 150 citations, reflecting rapid uptake in the community. Thematically, the most-cited studies cluster around methodological innovations (e.g., inverse problems, boundary condition enforcement, training efficiency) and applications in mechanics (fracture, elastodynamics, reliability, micromechanics, and fatigue).


The current research areas involving PINNs were identified through a keyword co-occurrence analysis of the publication network, as illustrated in Fig. 8. The visualization highlights three dominant thematic clusters. The first cluster, shown in green, is associated with numerical methods, encompassing terms such as partial differential equations, boundary conditions, elasticity, and topology optimization. This cluster reflects the methodological foundation of PINNs, emphasizing their role as computational frameworks for solving complex PDEs and for optimizing the size, shape, and topology of structures. The second cluster, represented in blue, focuses on structural analysis, with keywords including nonlinear analysis, structural health monitoring, structural dynamics, inverse problems, and parameter identification. These terms indicate the increasing application of PINNs in solid and structural mechanics, where traditional numerical approaches often face challenges in handling complex boundary conditions, real-time monitoring, or inverse modeling. The third cluster, depicted in red, emphasizes forecasting and reliability-oriented applications, characterized by terms such as reliability analysis, fault detection, fatigue life prediction, fracture mechanics, additive manufacturing, and battery technologies. This demonstrates the extension of PINNs into predictive modeling and performance assessment, particularly in engineering systems where safety and durability are critical.


Figure 8: Visualization of keyword co-occurrence in PINN research for solid and structural mechanics, revealing three major thematic clusters associated with numerical methods, structural analysis, and forecasting.

Complementing this analysis, the keyword density map in Fig. 9 illustrates the frequency and strength of associations among terms within the domain. Darker regions indicate concentrated research interest, with prominent hotspots around neural networks, machine learning, numerical methods, partial differential equations, forecasting, and structural analysis. This distribution suggests that PINNs research is most active at the intersection of numerical modeling, structural mechanics, and predictive applications, where the dual strengths of physics constraints and data-driven learning are most beneficial. Meanwhile, emerging topics such as additive manufacturing, structural health monitoring, and parameter identification highlight the field’s diversification, pointing to future opportunities for applying PINNs in advanced manufacturing, infrastructure resilience, and reliability engineering.


Figure 9: Keyword density map showing core research themes in PINN research for solid and structural mechanics.

These maps reveal both the maturity of PINNs in mechanics-focused applications and their growing adaptability across broader engineering and scientific domains. The prominence of methodological terms indicates a strong emphasis on refining computational efficiency and accuracy, while the spread of application-oriented keywords underscores the community’s drive to translate these methods into practical solutions with tangible industrial and societal impact.

Another perspective is provided by the treemap in Fig. 10, which displays the most frequent keywords found in publications related to PINNs, excluding terms such as neural network and deep learning to focus solely on applications. The most frequent keyword, inverse problems, underscores the central role of PINNs in addressing ill-posed formulations where traditional methods often face limitations. The next most frequent, forecasting, reflects their growing importance in predictive modeling across diverse engineering fields. Numerical methods further emphasize the methodological foundation of PINNs, while structural health monitoring illustrates their increasing use in diagnostics and condition assessment. Other prominent terms, including boundary conditions, partial differential equations, and nonlinear equations, highlight the integration of physics-based constraints into learning frameworks, and finite element method reflects the synergy between PINNs and established computational mechanics approaches. Keywords such as loss functions, computational efficiency, and transfer learning indicate strong interest in improving training strategies, performance, and adaptability. Meanwhile, entries such as structural dynamics, structural analysis, elasticity, solid mechanics, and degrees of freedom point to their widespread applications in classical mechanics. Additional high-frequency terms, fatigue life prediction and parameter estimation, emphasize PINNs’ applications in damage prediction and inverse modeling. Uncertainty analysis reflects ongoing efforts to quantify prediction reliability, while surrogate modeling demonstrates interest in efficient approximations and model reduction. Collectively, the treemap portrays a field deeply rooted in computational solid and structural mechanics, while simultaneously diversifying into broader engineering domains.


Figure 10: Treemap of the top 20 most frequently associated keywords with PINNs.

3  Fundamentals of Physics-Informed Neural Networks

This section provides a brief overview of the mathematical fundamentals of PINNs. Since the focus of this review is on applications of PINNs in solid and structural mechanics, linear elasticity is presented for the sake of concise discussion. However, all aspects of the discussion can be straightforwardly extended to other physical phenomena.

In the context of linear elasticity with Cauchy-Boltzmann continuum theory, the deformation of a continuum body is governed by

$$\sigma_{ij,j} + f_i = 0 \quad \text{in } \Omega \tag{1}$$

$$\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij} \quad \text{in } \Omega \cup \delta\Omega \tag{2}$$

$$\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right) \quad \text{in } \Omega \cup \delta\Omega \tag{3}$$

$$\sigma_{ij}\, n_j = t_i \quad \text{on } \delta\Omega_n \tag{4}$$

$$u_i = \bar{u}_i \quad \text{on } \delta\Omega_d \tag{5}$$

where $\Omega$ denotes the geometrical domain representing the continuum body, and $\delta\Omega$ is the boundary of $\Omega$. In Eq. (1), the equilibrium equations are expressed in terms of the Cauchy stress components $\sigma_{ij}$ and the body force $f_i$. Cartesian coordinates are denoted as $x_i$, and differentiation with respect to these coordinates is written as $(\cdot)_{,i}$. Einstein's summation convention is used for repeated indices. The indices start from 1 and, depending on the geometrical dimension, end at either 2 or 3. The constitutive relations in Eq. (2) show how the stress components are linked to the strain components $\varepsilon_{ij}$, which are calculated as the symmetric gradients of the displacement components $u_i$. For linear isotropic elastic materials, the constitutive relations are entirely described by the Lamé constants $\lambda$ and $\mu$. Regarding boundary conditions, the traction forces $t_i$ are applied on $\delta\Omega_n$, and the displacement components $\bar{u}_i$ are prescribed on $\delta\Omega_d$. Here, $\delta\Omega_n \cup \delta\Omega_d = \delta\Omega$ and $\delta\Omega_n \cap \delta\Omega_d = \emptyset$. Furthermore, $n_j$ represents a component of the outward unit normal vector on $\delta\Omega_n$.
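As a concrete check of the index notation above, the following sketch (a hypothetical, illustrative example in plain Python, not drawn from the reviewed works) verifies that the uniform dilatational field $u_i = c\,x_i$ yields constant stresses through Eq. (2) and therefore satisfies the equilibrium equations, Eq. (1), with zero body force:

```python
# Illustrative check of Eqs. (1)-(3) for the field u_i = c * x_i (assumed example).
# Strains: eps_ij = c * delta_ij; stresses: sig_ij = (3*lam + 2*mu) * c * delta_ij.

lam, mu, c = 1.2, 0.8, 0.01  # Lame constants and dilatation magnitude (assumed values)

def strain(i, j):
    """eps_ij = (u_i,j + u_j,i)/2 with u_i = c*x_i, i.e., u_i,j = c*delta_ij."""
    return c if i == j else 0.0

def stress(i, j):
    """sig_ij = lam*eps_kk*delta_ij + 2*mu*eps_ij, i.e., Eq. (2)."""
    eps_kk = sum(strain(k, k) for k in range(3))
    return lam * eps_kk * (1.0 if i == j else 0.0) + 2.0 * mu * strain(i, j)

# The stress field is constant in space, so its divergence vanishes identically
# and the equilibrium equations sig_ij,j + f_i = 0 hold with f_i = 0.
for i in range(3):
    for j in range(3):
        expected = (3.0 * lam + 2.0 * mu) * c if i == j else 0.0
        assert abs(stress(i, j) - expected) < 1e-12
print("uniform dilatation satisfies Eqs. (1)-(3) with f_i = 0")
```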

To solve the above boundary-value problem with PINNs, neural networks are used as approximators of the unknown field quantities, e.g., displacement and stress components. The most common implementation is to approximate the displacement components by neural networks; other quantities, such as the stress and strain components, can then be obtained through differentiation of the displacement components, i.e., Eqs. (2) and (3). This is equivalent to the classical displacement formulation. Alternatively, the displacement components together with the stress or strain components are approximated by neural networks, an approach known as the mixed formulation. Obviously, the mixed formulation requires more trainable parameters than its displacement counterpart, but it involves a lower order of differentiation. In this discussion, the mixed formulation is selected, and therefore, the displacement components and stress components are approximated with independent neural networks, i.e.,

$$u_i(\mathbf{x}) \approx \hat{u}_i(\mathbf{x}) \tag{6}$$

$$\sigma_{ij}(\mathbf{x}) \approx \hat{\sigma}_{ij}(\mathbf{x}) \tag{7}$$

Here, $\hat{(\cdot)}$ denotes a neural-network approximation, and $\mathbf{x} = [x_1 \; x_2 \; x_3]^{\mathrm{T}}$. For two-dimensional problems, the component $x_3$ is omitted. These notations imply that the spatial coordinates are treated as the input of the neural networks, and the outputs are considered as the field quantities.
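To make the input–output structure of Eqs. (6) and (7) concrete, the following minimal sketch evaluates a small fully connected network that maps a coordinate vector to an approximate displacement component. All names, sizes, and the initialization are illustrative assumptions; practical PINNs are built with automatic-differentiation frameworks such as PyTorch or TensorFlow rather than hand-written layers:

```python
import math
import random

random.seed(0)

def init_layer(n_in, n_out):
    """Random weights and zero biases for one dense layer (illustrative only)."""
    w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def dense(w, b, x, act=None):
    """Affine map followed by an optional elementwise activation."""
    out = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]
    return [act(v) for v in out] if act else out

# Two tanh hidden layers of width 8 and one linear output: u_hat(x1, x2).
layers = [init_layer(2, 8), init_layer(8, 8), init_layer(8, 1)]

def u_hat(x):
    h = dense(*layers[0], x, math.tanh)
    h = dense(*layers[1], h, math.tanh)
    return dense(*layers[2], h)[0]  # approximated displacement component

# Spatial coordinates in, field value out -- the pattern of Eqs. (6) and (7).
value = u_hat([0.3, 0.7])
print("u_hat(0.3, 0.7) =", value)
```

During training, the weights in `layers` would be adjusted to minimize a physics-based loss such as Eq. (8) below, rather than fitted to labeled data.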

Once specific architectures of the neural networks are selected, a loss function $\mathcal{L}_1$ is constructed as

$$\mathcal{L}_1 = \lambda_{eq}\,\big|\sigma_{ij,j} + f_i\big|_{\Omega} + \lambda_{cr}\,\big|\sigma_{ij} - \lambda\varepsilon_{kk}\delta_{ij} - 2\mu\varepsilon_{ij}\big|_{\Omega} + \lambda_{tr}\,\big|\sigma_{ij} n_j - t_i\big|_{\delta\Omega_n} + \lambda_{di}\,\big|u_i - \bar{u}_i\big|_{\delta\Omega_d} \tag{8}$$

where the norm $|g|_{\Gamma}$ of a generic quantity $g$ over the domain $\Gamma$ is defined as

$$|g|_{\Gamma} = \frac{1}{N}\sum_{k=1}^{N} \left[ g(\mathbf{x}_k) \right]^2 \tag{9}$$

with $\mathbf{x}_k$ denoting a spatial point located in $\Gamma$, and $N$ the number of spatial points. Furthermore, the contributions of the different terms to the loss function are weighted by the penalty coefficients $\lambda_{eq}$, $\lambda_{cr}$, $\lambda_{tr}$, and $\lambda_{di}$. The neural networks of the field quantities are trained by minimizing the loss function $\mathcal{L}_1$. A conceptual flowchart is sketched in Fig. 11.


Figure 11: Basic architecture of a PINN.
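The assembly of the loss in Eq. (8) with the discrete norm of Eq. (9) can be illustrated on a one-dimensional analogue: a bar with equilibrium $\sigma' + f = 0$ and constitutive relation $\sigma = E u'$. The sketch below is a hypothetical, stdlib-only example; the trial fields are closed-form functions differentiated analytically, standing in for neural networks that would be differentiated automatically. Since these trial fields happen to be the exact solution, the weighted loss vanishes:

```python
# One-dimensional analogue of Eq. (8) (hypothetical sketch, stdlib only):
# equilibrium sigma' + f = 0, constitutive sigma = E*u', BCs u(0) = 0, sigma(1) = 0.

E, f = 1.0, 1.0

# Candidate mixed-formulation fields (these happen to be the exact solution).
u      = lambda x: x - 0.5 * x * x
du     = lambda x: 1.0 - x          # analytic u'
sigma  = lambda x: 1.0 - x
dsigma = lambda x: -1.0             # analytic sigma'

def norm_sq(g, pts):
    """Discrete norm of Eq. (9): mean of squared residuals over collocation points."""
    return sum(g(x) ** 2 for x in pts) / len(pts)

pts = [k / 20.0 for k in range(1, 20)]   # interior collocation points
lam_eq = lam_cr = lam_tr = lam_di = 1.0  # penalty coefficients (assumed equal here)

loss = (lam_eq * norm_sq(lambda x: dsigma(x) + f, pts)           # equilibrium term
        + lam_cr * norm_sq(lambda x: sigma(x) - E * du(x), pts)  # constitutive term
        + lam_tr * sigma(1.0) ** 2                               # traction BC term
        + lam_di * u(0.0) ** 2)                                  # displacement BC term
print("loss for the exact fields:", loss)  # -> 0.0
assert loss < 1e-12
```

In an actual PINN, this loss would be minimized over the network parameters, replacing the hand-picked trial fields above.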

In practice, the boundary conditions can be satisfied by construction, i.e.,

$$u_i(\mathbf{x}) \approx \hat{u}_i(\mathbf{x})\, D_i(\mathbf{x}) + H_i(\mathbf{x}) \tag{10}$$

Here, $D_i(\mathbf{x}) = 0$ and $H_i(\mathbf{x}) = \bar{u}_i$ for all points on $\delta\Omega_d$. For points in $\Omega$ and on $\delta\Omega_n$, these functions can take arbitrary values. With this implementation, the boundary conditions on $\delta\Omega_d$ are satisfied automatically. Regarding the boundary conditions on $\delta\Omega_n$, a similar approach can be adopted. Consequently, the loss function $\mathcal{L}_1$ can be reduced to

$$\mathcal{L}_2 = \lambda_{eq}\,\big|\sigma_{ij,j} + f_i\big|_{\Omega} + \lambda_{cr}\,\big|\sigma_{ij} - \lambda\varepsilon_{kk}\delta_{ij} - 2\mu\varepsilon_{ij}\big|_{\Omega} \tag{11}$$

The reduction of the number of terms in the loss function is beneficial for the faster and easier convergence of the training process [59]. Nevertheless, the components originating from the equilibrium equations and the constitutive relations still possess different units, i.e., force per volume and force per area, respectively. Therefore, careful selection of the penalty coefficients $\lambda_{eq}$ and $\lambda_{cr}$ is crucial for successful training. Several methods have been proposed for the automatic tuning of these coefficients, e.g., [60,61].
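A common one-dimensional construction for Eq. (10) is sketched below (a hypothetical example; the functions `D`, `H`, and the stand-in network output `N` are illustrative choices). Multiplying the network output by a function that vanishes on $\delta\Omega_d$ and adding an interpolant of the prescribed values makes the Dirichlet conditions exact for any network parameters:

```python
import math

# Hard enforcement of Dirichlet BCs in the spirit of Eq. (10) on the interval [0, 1]
# (hypothetical sketch). With D(x) = x*(1 - x) and H(x) linearly interpolating the
# prescribed values, u_hat = N(x)*D(x) + H(x) satisfies u(0) = u0 and u(1) = u1
# exactly, for ANY network output N(x).

u0, u1 = 0.0, 0.5                       # prescribed boundary displacements (assumed)
N = lambda x: math.tanh(3.0 * x) - 0.2  # stand-in for an untrained network output

D = lambda x: x * (1.0 - x)             # vanishes on the Dirichlet boundary
H = lambda x: u0 + (u1 - u0) * x        # interpolates the prescribed values

u_hat = lambda x: N(x) * D(x) + H(x)

# The displacement-BC term of Eq. (8) is now identically zero, leaving Eq. (11).
assert abs(u_hat(0.0) - u0) < 1e-15
assert abs(u_hat(1.0) - u1) < 1e-15
print("Dirichlet BCs satisfied by construction")
```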

However, direct minimization of the residuals of the governing equations is not the only way to obtain the solution with PINNs. For linear elasticity, there exists an energy functional $\Pi(u_i)$ whose minimization yields the boundary-value problem in Eqs. (1)–(5). The mathematical expression of the energy functional is given as

$$\Pi = \frac{1}{2}\int_{\Omega} \sigma_{ij}\,\varepsilon_{ij}\, \mathrm{d}\Omega - \int_{\Omega} f_i\, u_i\, \mathrm{d}\Omega - \int_{\delta\Omega_n} t_i\, u_i\, \mathrm{d}(\delta\Omega). \tag{12}$$

If the displacement components are considered as the kinematic unknowns and neural networks are used to approximate these quantities, the training process is carried out by minimizing the loss function $\mathcal{L}_3 = \Pi$. Here, it should be mentioned that the boundary conditions on $\delta\Omega_d$ are enforced as shown in Eq. (10).

For forward problems where the interest is the solution for the displacement components, the minimization of the loss function $\mathcal{L}_3$ provides better accuracy than that of $\mathcal{L}_2$, e.g., [20]. However, for inverse problems where both the field quantities and the material properties are of interest, the minimization of the energy functional is no longer feasible [62]. In this scenario, the minimization of the loss functions $\mathcal{L}_1$ and $\mathcal{L}_2$ is preferable.
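The energy-based alternative can be illustrated on a one-dimensional bar (a hypothetical sketch: a bar on $[0,1]$ with unit stiffness, unit distributed load, clamped at $x = 0$ and traction-free at $x = 1$, so the boundary integral in Eq. (12) drops out). A Monte Carlo estimate of $\Pi$ ranks the exact field below a competing admissible field, which is precisely the property that energy-based training exploits:

```python
import random

# One-dimensional analogue of Eq. (12) (hypothetical sketch): a bar on [0, 1] with
# E*A = 1, distributed load f = 1, u(0) = 0, traction-free at x = 1, so
# Pi(u) = Integral( 0.5 * u'^2 - f * u ) dx. The exact solution u = x - x^2/2
# gives Pi = -1/6; any other admissible trial field must have a higher energy.

random.seed(42)
pts = [random.random() for _ in range(200_000)]  # Monte Carlo integration points

def energy(du, u):
    """Monte Carlo estimate of Pi for a trial field with analytic derivative du."""
    return sum(0.5 * du(x) ** 2 - u(x) for x in pts) / len(pts)

pi_exact  = energy(lambda x: 1.0 - x, lambda x: x - 0.5 * x * x)  # about -1/6
pi_linear = energy(lambda x: 1.0,     lambda x: x)                # about 0

print("Pi(exact) =", pi_exact, " Pi(linear trial) =", pi_linear)
assert pi_exact < pi_linear  # the energy functional ranks the exact field lowest
```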

4  Current Research of Physics-Informed Neural Networks in Computational Solid and Structural Mechanics

Physics-informed neural networks have been extended to a broad range of solid and structural mechanics problems. As illustrated in Fig. 12, their applications can be broadly classified into five major domains: structural analysis, parameter identification, structural optimization, structural integrity and durability, and manufacturing processes. Representative studies within each category demonstrate how different PINN variants are formulated and tailored to address specific physical challenges. The following subsections provide a detailed discussion of these domains, highlighting how flexible and effective PINNs can be when dealing with different problems from the solid mechanics domain.


Figure 12: Major application domains of PINNs in solid and structural mechanics.

4.1 Structural Analysis

Structural analysis is a fundamental task in solid and structural mechanics, aiming to determine displacement, strain, and stress fields under prescribed loads, boundary conditions, and material properties. Classical numerical solvers, particularly the FEM, have long been the dominant tools for such analyses. Recently, PINNs have emerged as an alternative computational paradigm by embedding governing equations directly into neural network training, enabling mesh-free approximation of physical responses and offering increased flexibility for parametric and data-scarce scenarios. Most PINN-based formulations in this context take spatial coordinates as primary inputs, together with boundary conditions and material parameters (e.g., Young’s modulus and Poisson’s ratio), and predict displacement or stress fields as outputs [63–65]. These models have demonstrated strong performance across a wide range of material behaviors, including linear elasticity [50,66,67], hyperelasticity [66,68], plasticity [65,66,69,70], and elastoplasticity [50,64,70–72], in one-, two-, and three-dimensional domains [63,73–75], highlighting their versatility for forward structural analysis.

A significant body of work has focused on beam-type structural elements, where governing equations involve high-order spatial derivatives and accurate recovery of internal forces is critical. For example, the PINN formulation illustrated in Fig. 13a, applied to a classical Euler–Bernoulli cantilever beam, enables simultaneous prediction of deflection, rotation, bending moment, and shear force by directly enforcing the governing physics [76]. This capability is particularly attractive for beam, plate, and shell structures, where high-order operators and derivative-based postprocessing often amplify numerical errors in conventional data-driven models. These studies collectively demonstrate that PINNs can serve as continuous-field solvers that deliver smooth, physically consistent solutions for structural members governed by high-order differential equations.
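
To illustrate this derivative-based postprocessing, the short sketch below differentiates the closed-form deflection of a uniformly loaded cantilever (standing in for a trained PINN output; the values of EI, q, and L are arbitrary) to recover rotation, bending moment, and shear force:

```python
import numpy as np

# Illustrative cantilever of length L under uniform load q:
# EI w''''(x) = q, with w(0) = w'(0) = 0 and M(L) = V(L) = 0.
EI, q, L = 1.0, 1.0, 1.0

# Closed-form deflection, standing in for a trained PINN output:
# w(x) = q / (24 EI) * (x^4 - 4 L x^3 + 6 L^2 x^2).
c = q / (24.0 * EI)
w = np.poly1d([c, -4.0 * L * c, 6.0 * L**2 * c, 0.0, 0.0])

# Derivative-based postprocessing: internal quantities follow from
# successive differentiation of the displacement field.
rotation = w.deriv(1)       # theta(x) = w'(x)
moment = EI * w.deriv(2)    # M(x) = EI w''(x)
shear = EI * w.deriv(3)     # V(x) = EI w'''(x)

tip_deflection = w(L)       # classical result: q L^4 / (8 EI)
root_moment = moment(0.0)   # classical result: q L^2 / 2
root_shear = shear(0.0)     # magnitude q L at the fixed end
```

Because the learned field is smooth and differentiable everywhere, moment and shear inherit that smoothness, which is exactly the property that makes PINNs attractive for high-order beam, plate, and shell operators.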

images

Figure 13: Representative applications of PINNs in structural analysis: (a) Analysis of a 1D cantilever beam [76], (b) Analysis of 2D frame structures [77], and (c) Simulation of laminated composite plates [79].

Beyond individual structural components, PINNs have also been applied to multi-member systems such as frame structures. In these formulations, separate neural networks represent each structural member, while compatibility and equilibrium conditions are imposed at joints, as shown in Fig. 13b [77]. This framework enables the simultaneous analysis of interconnected systems without explicit mesh connectivity, making it attractive for parametric studies and topology modifications. However, such applications also reveal challenges specific to structural assemblies, including the need to properly represent force transmission, stiffness discontinuities, and load redistribution across interfaces. These issues highlight the importance of robust boundary and interface treatments, which are discussed in detail in the next section.

For 2D structural components, PINNs have been successfully applied to thin-walled structures and plate-like systems. Early studies highlighted their ability to handle geometric complexity and small thickness-to-length ratios more flexibly than traditional mesh-based schemes [63]. PINN frameworks based on the Föppl–von Kármán equations have been developed for finite-deformation analysis of thin elastic plates, achieving high accuracy at reduced computational cost [20]. Subsequent extensions have addressed more complex plate behaviors, including variable thickness and stiffness distributions [78]. These applications demonstrate that PINNs can naturally accommodate nonlinear kinematics and complex geometries while providing smooth approximations of displacement and stress fields.

PINNs have also shown strong potential for laminated and composite structures, where anisotropy, heterogeneity, and interlayer coupling pose substantial challenges for conventional solvers. As illustrated in Fig. 13c, physics-informed formulations grounded in laminate theory enable accurate prediction of layerwise displacement and stress fields without explicit meshing of material interfaces [79], highlighting the ability of PINNs to alleviate the computational burden associated with mesh generation and interface tracking in layered structures. To address heterogeneity and discontinuities in more general settings, domain-decomposition-based PINN frameworks have been developed to accurately capture interface behavior in multi-material systems [80]. At the structural member level, PINN-based models have been reported for functionally graded porous beams [81] and for large-deflection responses in functionally graded graphene-platelet–reinforced auxetic sandwich beams on nonlinear foundations [82], demonstrating the applicability of PINNs to problems involving strong material heterogeneity and geometric nonlinearity. Additional applications include wave propagation in laminated structures [83] and accelerated characterization of composite micromechanics using semi-supervised learning strategies [84]. Methodological developments that support reliable treatment of heterogeneity and interfaces (e.g., subdomain- or interface-aware formulations) are consolidated in the next section.

Multiscale and size-dependent structural behavior is another important application area for PINNs, particularly in problems where classical continuum models struggle to capture scale effects and long-range interactions. Integrating numerical simulations with machine learning has been recognized as a promising approach to addressing such multiscale challenges [85]. In this context, PINN-based frameworks grounded in nonlocal strain-gradient theories have been developed to model wave propagation in nanostructured sandwich plates [86], homogenize lattice materials [87], and capture the bending of nanobeams on nonlinear elastic foundations [88]. These applications highlight PINNs’ ability to represent size effects, long-range interactions, and multiscale coupling within a unified computational framework, offering new opportunities for modeling architected materials and nanostructured systems.

More recently, PINNs have been explored as surrogate models for real-time structural analysis and health monitoring. By learning continuous mappings from limited input data to full-field stress and displacement distributions, PINN-based surrogates enable rapid inference that would otherwise require expensive numerical simulations. For example, Go et al. [89] developed a PINN surrogate capable of predicting full-field displacement and stress distributions in near real time for a 2D plate with a hole under unknown biaxial tension. Such approaches highlight the potential of PINNs for real-time decision-making, digital twins, and condition-based maintenance of structural systems.

The studies reviewed in this subsection demonstrate that PINNs are increasingly adopted as flexible forward solvers for a wide range of structural analysis problems, from simple beams to complex multiscale and heterogeneous systems. While these applications highlight their potential, they also reveal recurring challenges related to stability, boundary-condition enforcement, interface treatment, and scalability. The methodological innovations developed to address these issues, such as adaptive training, hard boundary constraints, domain decomposition, and hybrid FEM–PINN formulations, are consolidated and discussed in Section 5.

4.2 Parameter Identification

Material parameter identification is a central inverse problem in solid and structural mechanics, aimed at determining mechanical properties, such as Young’s modulus and Poisson’s ratio, from indirect measurements, including displacement, strain, or vibration responses [90,91]. This process plays a critical role in various fields, including structural health monitoring [92–94], non-destructive testing [95–97], and failure analysis [98]. It is also particularly important for multi-material systems [99], composite structures [100], and functionally graded materials (FGMs) [23], in which mechanical properties vary spatially and cannot be characterized by a single set of parameters. Additionally, parameter identification plays a vital role in biomedical applications, where accurate characterization of tissue stiffness supports the diagnosis of pathological conditions, including cancer and fibrosis [101–104]. In many of these contexts, direct measurement of material properties is either impractical or impossible, motivating the use of physics-informed learning frameworks that can infer unknown parameters from limited observational data.

In medical elastography, PINN-based frameworks have made significant advances in reconstructing heterogeneous material fields. Chen and Gu [101] introduced the ElastNet model, which accurately predicts spatially varying Young’s modulus and displacement fields in soft tissues with complex stiffness profiles, as illustrated in Fig. 14a. Similarly, Kamali et al. [102] used PINNs for elasticity imaging in hydrogel-like materials, estimating variations in both Young’s modulus and Poisson’s ratio by combining strain measurements with physical constraints. These approaches improve image resolution and diagnostic reliability in biomedical applications, where non-invasive stiffness estimation is essential. In biomechanics, Wu et al. [105] introduced a Fourier-feature PINN that reconstructs full-field heterogeneous hyperelastic parameters in tissues undergoing large deformation. Capable of learning Neo-Hookean, Mooney–Rivlin, and Gent constitutive models from a strain snapshot and robust to 10% noise, this method offers a high-fidelity solution for nonlinear inverse elasticity problems with very limited experimental data.

images

Figure 14: Representative applications of PINNs in parameter identification: (a) ElastNet model for elastography [101], (b) Identification of constant material properties in beams [106], and (c) Reconstruction of spatially varying elastic modulus in beams [22].

For engineered materials, Yan et al. [100] extended the application of inverse problems in composite materials using a hybrid PINN and extreme learning machine (ELM) framework. The integration of fast learning through ELM and domain decomposition enabled the framework to efficiently handle large and complex structural assemblies. The method accurately identified unknown material parameters, such as fiber orientation and stiffness, with errors below 2% compared to FEM solutions, demonstrating its effectiveness in inverse problems of composite materials. Additionally, Bharadwaja et al. [99] applied PINNs to heterogeneous materials, showing that PINNs can accurately capture stress discontinuities at material interfaces. Their study highlighted the superior performance of PINNs over traditional methods in predicting displacement fields, stress distributions, and effective material properties in multi-material systems.

Recent developments have further expanded the reach of PINN-based parameter identification across static, dynamic, and nonlinear regimes. As a baseline case, Wu et al. [106] proposed a PINN-based inverse framework for static beam problems with constant material properties, demonstrating accurate identification of Young’s modulus and Poisson’s ratio from sparse displacement data, as depicted in Fig. 14b. Their results show strong robustness to measurement noise and good agreement with exact solutions, establishing the effectiveness of PINNs for material parameter identification in quasi-static solid mechanics. For dynamic systems, Söyleyici and Ünver [107] introduced a PINN-based approach enhanced with neural tangent kernel (NTK) theory to identify stiffness, damping, and modal parameters in free-vibration beams, achieving inverse errors below 1.41% relative to FEM simulations and experimental measurements. In addition, de O Teloli et al. [22] proposed a PINN-based framework for Euler–Bernoulli beams that estimates displacement, a spatially varying elastic modulus, and damping directly from vibration data. Their formulation enables accurate recovery of both the real and imaginary components of the complex modulus in the frequency domain, as illustrated in Fig. 14c, representing one of the few successful demonstrations of damping identification using PINNs in structural systems.
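
The basic mechanism behind such inverse formulations can be sketched with a one-dimensional bar: the unknown modulus is treated as a trainable variable and updated by gradient descent on the misfit between the physics-based displacement model and synthetic noisy measurements. All values are illustrative, and a finite-difference gradient stands in for the automatic differentiation a PINN would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements" for a 1D bar under end load: u(x) = P x / (E A).
# The true modulus is hidden from the identification loop (illustrative values).
A, P, E_true = 1.0, 1.0, 2.0
x = np.linspace(0.1, 1.0, 30)
u_meas = P * x / (E_true * A) + rng.normal(0.0, 1e-3, x.size)  # noisy data

def data_loss(E):
    """Misfit between the physics-based displacement model and the data.
    In a PINN inverse problem, E would be a trainable variable updated
    jointly with the network weights; here it is the only unknown."""
    return np.mean((P * x / (E * A) - u_meas) ** 2)

# Plain gradient descent with a finite-difference gradient (a PINN would
# obtain this gradient by automatic differentiation instead).
E, lr, h = 1.0, 0.5, 1e-6
for _ in range(2000):
    grad = (data_loss(E + h) - data_loss(E - h)) / (2.0 * h)
    E -= lr * grad
```

Despite the added noise, the recovered modulus lands close to the hidden value, mirroring the robustness to measurement noise reported for PINN-based identification.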

Beyond neural-network-based approaches, physics-informed probabilistic models have also been explored for inverse mechanics. For example, Tondo et al. [108] proposed a physics-informed Gaussian process (PIGP) framework for static inverse problems in Euler–Bernoulli beams, enabling the identification of unknown elastic parameters and distributed loads while simultaneously providing uncertainty quantification. This Bayesian formulation improves robustness against noise and sparse measurements, offering an attractive alternative for inverse mechanics applications.

These studies demonstrate that physics-informed learning frameworks are increasingly effective tools for identifying material and structural parameters across a wide range of applications. Their ability to infer stiffness, damping, and distributed forces from limited or noisy measurements highlights their promise for structural health monitoring, biomedical diagnostics, and multiscale material characterization. These applications also reveal recurring challenges with stability, noise sensitivity, and scalability, which have motivated the development of specialized training strategies, loss formulations, and hybrid solvers. These methodological advances are consolidated and discussed in Section 5.

4.3 Structural Optimization

Structural optimization is an essential part of modern engineering design, especially for creating lightweight and efficient systems. Traditional methods, mainly FEM-based analyses combined with gradient-based optimization, often face high computational costs, mesh dependency, and limited scalability when handling complex geometries or nonlinear material behavior. Recent progress in PINNs has introduced mesh-free and differentiable optimization frameworks that can enhance both computational efficiency and solution accuracy.

Early demonstrations of PINN-based structural optimization focused on replacing or augmenting conventional solvers. For instance, Mai et al. [109] proposed the physics-informed neural energy-force network (PINEFN), which embeds governing equations directly into the optimization process and eliminates the need for traditional structural analysis. This framework enables efficient optimization of truss structures, yielding competitive designs with significantly reduced computational demands. Similarly, Wu et al. [110] proposed a PINN-enabled surrogate-based optimization framework for geometrically nonlinear structural systems, where a trained neural network replaces repeated finite element analyses during optimization, as illustrated in Fig. 15a. By coupling the surrogate PINN with a metaheuristic optimizer, their approach enables efficient size, shape, and topology optimization of dome and truss structures while maintaining accuracy comparable to FEM-based optimization. The reported results show reductions in computational cost, further highlighting the potential of PINN-based and PINN-assisted frameworks for large-scale, solver-free structural optimization.

images images

Figure 15: Representative applications of PINNs in structural optimization: (a) Truss and dome optimization [110], (b) Topology optimization for 2D and 3D structures [112], and (c) Topology optimization for multi-material systems [113].

Topology optimization has emerged as a major application area for PINNs, where the goal is to determine the optimal material distribution within a prescribed design domain. Jeong et al. [111] introduced a physics-informed neural network-based topology optimization (PINNTO) framework that replaces conventional FEM-based solvers. By integrating energy-based PINNs with the solid isotropic material with penalization (SIMP) scheme, their method achieved accuracy comparable to classical approaches while avoiding explicit mesh-based discretization. Building on this idea, Jeong et al. [112] proposed the complete physics-informed neural network topology optimization (CPINNTO) framework, shown in Fig. 15b, which employs two coupled PINNs to estimate stress fields and update material distributions. This approach was validated on tip-loaded cantilever beams, demonstrating robustness and improved computational efficiency.
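
The density-to-stiffness interpolation at the heart of SIMP-based schemes such as PINNTO can be sketched in a few lines (all constants illustrative):

```python
# SIMP density-to-stiffness interpolation used inside PINN-based topology
# optimization: E(rho) = E_min + rho**p * (E0 - E_min), rho in [0, 1].
E0, E_min, p = 1.0, 1.0e-9, 3.0   # solid stiffness, void floor, penalization power

def simp_stiffness(rho):
    """Penalized stiffness: the power p > 1 makes intermediate densities
    structurally uneconomical, pushing the design toward solid/void layouts."""
    return E_min + rho**p * (E0 - E_min)

# Half density delivers only one eighth of the solid stiffness for p = 3,
# which is what drives the optimizer toward crisp 0/1 material layouts.
ratio = simp_stiffness(0.5) / simp_stiffness(1.0)
```

In a PINN-based pipeline, this interpolated stiffness enters the energy or residual loss, so the density field and the displacement network can be optimized within one differentiable framework.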

In addition to classical compliance minimization problems, recent studies have extended PINN-based approaches to more challenging topology optimization tasks, including multiscale, multimaterial, and geometrically nonlinear designs [113]. An example of multi-material topology optimization is illustrated in Fig. 15c. Furthermore, Lai et al. [114] proposed a dual-energy physics-informed multi-material topology optimization approach that integrates a dual-network deep-energy PINN with a phase-field formulation. This unified framework enables the construction of smooth material interfaces and high-quality multi-material layouts. Jeong et al. [115] enhanced PINN-based topology optimization by incorporating Fourier-feature embeddings and periodic activation functions, enabling robust optimization under strongly nonlinear, large-deformation hyperelastic behavior and improving convergence in geometrically nonlinear design scenarios. Yin et al. [116] proposed a discrete PINN (dPINN) framework that allows topology optimization in large-scale three-dimensional domains by using local mesh-based interpolation, enabling the treatment of problems with millions of degrees of freedom. These developments demonstrate that PINNs can be adapted to increasingly complex design settings that are difficult to handle using conventional solvers.

PINNs have also been applied to optimizing discontinuous or voxel-based structures, which are common in digital and additive manufacturing (AM) contexts. Zhang et al. [117] proposed a weak-form PINN formulation tailored to digital materials, enabling accurate treatment of sharp stiffness jumps without requiring explicit analytical sensitivities. Moreover, Zhao et al. [118] developed a PINN-based topology optimization framework using a deep energy method (DEM) and continuous adjoint sensitivity analysis evaluated by automatic differentiation, removing discretization errors and providing accurate solutions for both self-adjoint and non-self-adjoint optimization problems. These applications demonstrate that PINNs can naturally accommodate non-smooth material distributions and complex design spaces.

These studies show that PINN-based structural optimization is evolving into a versatile alternative to conventional FEM-driven design pipelines. Their ability to support solver-free or solver-accelerated workflows, reduce mesh dependence, and handle nonlinear, multi-material, and large-deformation problems highlights their promise for next-generation computational design. At the same time, these applications reveal recurring challenges related to training stability, scalability, and boundary enforcement, which have motivated the development of specialized formulations, adaptive strategies, and hybrid frameworks. These methodological advances are synthesized and discussed in Section 5.

4.4 Structural Integrity and Durability

Assessing structural integrity and durability is essential for ensuring the long-term performance and safety of engineering components, particularly under conditions involving damage initiation, crack propagation, and repeated cyclic loading. Recent developments in physics-informed learning provide a promising pathway, improving physical consistency, extrapolation capability, accuracy, interpretability, and data efficiency in structural integrity applications [119,120]. Within this context, fracture mechanics has emerged as a natural starting point, as understanding crack initiation and propagation forms the basis for predicting more complex durability phenomena such as fatigue.

Fracture mechanics and crack modeling, characterized by crack propagation, discontinuities, and material failures in complex structures, present significant challenges that can be effectively addressed using PINNs. Gu et al. [121] developed an enriched PINN framework specifically designed for fracture mechanics applications. By incorporating crack-tip singularities and discontinuities directly into the model, the enriched PINNs accurately predicted stress intensity factors (SIFs), as illustrated in Fig. 16a. The framework is a mesh-free and efficient alternative for modeling complex fracture behaviors without the need for traditional meshing techniques. Ning et al. [122] introduced a peridynamic-informed neural network (PD-ENN) which combines peridynamics with neural networks to predict crack initiation and propagation in brittle materials. The PD-ENN was shown to accurately capture both crack paths and displacement fields, with high prediction accuracy in crack propagation patterns. Furthermore, the energy-informed neural network (EINN) was developed to model solids with cracks by leveraging nonlocal theories and energy principles [123,124]. The EINN framework integrates peridynamic theory, which enables it to handle discontinuous displacements and thermomechanical effects, providing robust modeling of crack propagation even under complex loading conditions. Guo and Song [125] proposed another PINN formulation that embeds the Airy stress function and linear elastic fracture mechanics (LEFM) formulations to recover SIFs and transverse stress directly from measurements. Their method demonstrates strong generalization across mixed-mode I/II/III loading, achieves high accuracy even under noise, and shows potential for real-time structural health monitoring applications.

images images

Figure 16: Representative applications of PINNs in structural integrity and durability: (a) 2D in-plane crack analysis [121], (b) Structural damage identification [94], and (c) A PINN framework for fatigue life prediction [126].

Beyond crack propagation, physics-informed learning has also been applied to vibration-based damage detection and structural health monitoring. Wang et al. [94] proposed a physics-guided residual neural network (PhyResNet), shown in Fig. 16b, that integrates governing equations of structural dynamics into the learning process. Their framework significantly improves damage localization and quantification accuracy under noisy and data-scarce conditions. Similarly, Zhou and Xu [127] contributed a baseline-free PINN method for damage detection in thin plates using measured flexural guided wavefields. By enforcing the Kirchhoff–Love plate equation, the trained network reconstructs a pseudo-pristine wavefield that represents the undamaged plate, enabling damage-induced anomalies to be isolated by comparing measured and reconstructed responses. The approach identifies damage locations through an energy-based index and demonstrates strong accuracy and noise robustness without requiring labeled data or prior baseline measurements.

While physics-informed learning has shown strong capability in modeling fracture initiation, crack propagation, and damage localization, ensuring structural integrity over an entire service lifetime also requires accurate prediction of fatigue behavior under repeated cyclic loading. Knowledge of fatigue life is fundamental to safety, reliability, and cost-effective design, especially in high-risk sectors such as aerospace, automotive, and maritime engineering, where components are subjected to long-term cyclic stresses. Reliable fatigue assessment supports material and design optimization, informed maintenance scheduling, and life-extension strategies, ultimately reducing costs and preventing unexpected failures. Building on advances in fracture modeling, PINNs have increasingly been adopted for fatigue analysis, offering a physics-consistent and data-efficient framework for predicting fatigue damage evolution and remaining useful life [128–132].

Several studies have applied PINNs to fatigue life prediction. A notable example is the use of PINNs to predict creep–fatigue life in 316 stainless steels at elevated temperatures [133]. By incorporating physical laws into the loss function, this method improves accuracy on small datasets and outperforms traditional machine learning and empirical models, demonstrating the advantage of physics-based feature integration. The prediction of fatigue life under mixed-mode loading conditions, a problem commonly encountered in engineering components yet difficult to model due to complex stress interactions, was explored by Salvati et al. [126], as shown in Fig. 16c. The study detailed the formulation of a PINN that incorporates mixed-mode fracture mechanics into its architecture, allowing for more accurate predictions of fatigue crack growth paths than conventional models. Another study introduced a multi-fidelity PINN for fatigue life prediction using only a limited number of experimental samples [134]. By embedding physical models into activation functions, the network effectively handles data of varying fidelity, improving prediction accuracy and outperforming random forest (RF), support vector machine (SVM), and conventional neural networks when data are scarce.

Recent advances have broadened the applicability of PINN-based fatigue modeling across materials, manufacturing processes, and loading regimes. For AM alloys, Abiria et al. [135] developed a hybrid PINN (HPINN) that integrates data-driven artificial neural network (ANN) components with physics-based constraints derived from Basquin’s law, a modified Paris law, and a non-negativity condition encoded as activation functions. Designed for predicting high-cycle fatigue behavior under limited data, the HPINN was validated on AM Al-Mg4.5Mn and Ti-6Al-4V alloys and outperformed conventional ANN and PINN models, achieving predictions within a two-factor scatter band. Dang et al. [136] extended PINNs to pore-driven fatigue in laser-directed energy deposition (L-DED) Ti-6Al-4V, showing that incorporating defect morphology and microstructure significantly improves fatigue response modeling compared to regression-based methods. Feng et al. [137] introduced a probabilistic fatigue framework that couples PINNs with extreme-value statistics, generating fatigue-life distributions that capture uncertainty inherent in AM components. Wang et al. [138] further advanced multi-fidelity fatigue prediction, enabling accurate modeling of defect–fatigue interactions using combined low- and high-fidelity datasets.

These studies demonstrate that PINNs are becoming powerful tools for modeling fracture, damage evolution, and fatigue behavior in structural systems. Their ability to incorporate governing physics, handle sparse and noisy data, represent discontinuities, and quantify uncertainty makes them particularly attractive for safety-critical applications. At the same time, these applications reveal recurring challenges with training stability, discontinuity handling, and scalability, which have motivated the development of specialized formulations, adaptive strategies, and transfer learning. These methodological advances are consolidated and discussed in Section 5.

4.5 Manufacturing Process

In the manufacturing sector, PINNs have gained increasing attention in AM [126,139,140], primarily because they link process-induced effects to the resulting mechanical behavior and structural integrity of solid components. Although AM inherently involves coupled phenomena, the PINN-based studies reviewed here are considered from a solid mechanics perspective, focusing on how fabrication-induced conditions influence porosity, residual stress, fatigue life, and the overall mechanical performance of printed parts. By embedding physical constraints into data-driven learning, PINNs provide an effective framework for evaluating these mechanical outcomes under limited or noisy data.

Beyond the fatigue life prediction challenges in AM materials discussed in the previous subsection [135–138], PINNs have been extended to assess defect formation and its impact on mechanical durability. In particular, the approach presented in [140] demonstrated that using simple physics-informed features from manufacturing conditions can predict porosity with accuracy comparable to more complex deep learning models, ultimately enhancing process control and quality in AM. Recent developments further reinforce the significance of physics-informed learning in porosity evaluation. Skiadopoulos et al. [141] introduced a transfer-learning PINN framework that estimates volumetric porosity and pore size in AM AlSi10Mg from ultrasonic data. The model is first pretrained on high-fidelity synthetic ultrasonic responses generated through finite element simulations and subsequently fine-tuned using a limited number of experimental measurements. By embedding scattering-based physical relations into the PINN formulation, the framework enables accurate defect quantification even under sparse labeled datasets.

PINN-based models have also been explored as surrogates to support mechanical performance assessment influenced by manufacturing processes. Faegh et al. [142] presented a path-aware PINN framework for multi-laser metal AM that predicts temperature distributions across various part decompositions and scan-path strategies, as shown in Fig. 17a. By integrating thermal physics into a mesh-free PINN surrogate, this approach enables optimized laser path planning to achieve uniform thermal distributions, thereby reducing residual stress and distortion.

images

Figure 17: Representative applications of PINNs in manufacturing processes: (a) Thermal field prediction [142] and (b) Porosity prediction [143].

A broader overview of physics-informed machine learning in AM is provided by Farrag et al. [144], who reviewed physics-informed approaches, including PINNs, for defect evaluation, parameter optimization, and quality monitoring. Their review highlights how physics-informed frameworks can bridge simulation-based knowledge and sparse experimental data to improve the generalizability, interpretability, and efficiency of AM process models.

Physics-informed learning has also been extended to post-processing and joining techniques closely related to AM. Zhao et al. [145] developed a physics-informed ANN for laser shock peening, accurately predicting residual stresses and microhardness by incorporating shock-wave attenuation mechanics into the model inputs, resulting in significantly higher accuracy than empirical or purely data-driven models. Similarly, Meng et al. [143] introduced a physics-informed deep learning framework for porosity prediction in laser beam welding of aluminum alloys, identifying the dominant physical factors governing defect formation and improving prediction accuracy compared with models relying solely on process parameters, as illustrated in Fig. 17b.

These studies demonstrate that PINNs and physics-informed machine learning are becoming essential tools in manufacturing-related solid mechanics. Their ability to predict fatigue life, characterize porosity and defects, model thermomechanical histories, and support process optimization highlights their value for next-generation digital manufacturing. At the same time, these applications reveal challenges with scalability, multiphysics coupling, and real-time deployment, which have motivated the development of specialized formulations, transfer-learning strategies, and hybrid solvers. These methodological advances are consolidated and discussed in Section 5.

5  Technical Advances in PINNs for Solid and Structural Mechanics

While Section 4 focused on how PINNs have been applied across different classes of solid and structural mechanics problems, this section consolidates methodological and algorithmic advances that generalize across these applications. Rather than being isolated developments, these advances can largely be understood as targeted responses to recurring numerical and physical challenges specific to solid mechanics, such as high-order governing equations, localized fields, material heterogeneity, interface treatment, discontinuity handling, complex boundary conditions and geometries, measurement noise, and limited experimental data. By framing recent PINN developments around the specific problems they aim to solve, this section highlights how physics-informed learning is evolving from a generic neural modeling paradigm into a specialized computational tool for solid and structural mechanics.

From this perspective, adaptive training strategies aim to resolve localized features such as stress concentrations and crack-tip singularities; advanced loss formulations and activation designs target stiff and multi-scale governing equations; domain decomposition methods enable accurate treatment of heterogeneity and interfaces; hard-constraint formulations improve boundary-condition enforcement; and multi-fidelity, transfer-learning, and FEM-integrated approaches address data scarcity, scalability, and industrial applicability. This problem-driven organization clarifies the functional role of each methodological advance.

5.1 Adaptive Training and Residual-Based Refinement

Many solid mechanics problems exhibit strong spatial localization, such as stress concentrations, crack-tip singularities, boundary layers, and sharp material interfaces. These features are difficult to resolve with uniform collocation strategies, often leading to slow convergence and poor accuracy in standard PINNs. To address this limitation, recent research has focused on adaptive training strategies that dynamically redistribute learning effort toward regions where the governing physics is most difficult to satisfy.

Instead of relying on fixed collocation point distributions and uniform loss weighting, adaptive methods identify regions with large residuals, typically corresponding to steep gradients, discontinuities, or highly nonlinear responses, and selectively refine the sampling density or increase the corresponding loss penalties [58]. By concentrating model capacity where it is most needed, these strategies significantly improve the resolution of localized phenomena and enhance stability in elasticity, fracture, and multiscale problems.
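The core mechanism can be illustrated with a minimal sketch: new collocation points are drawn with probability proportional to a power of the local residual magnitude, so high-residual regions are sampled more densely. This is a generic residual-based scheme, not the specific algorithm of [58]; the one-dimensional domain, the stand-in residual function, and all parameters below are illustrative assumptions.

```python
import math
import random

def adaptive_resample(residual_fn, n_candidates=1000, n_points=100, k=2.0, seed=0):
    """Draw collocation points with probability proportional to |residual|^k.

    residual_fn: callable x -> PDE residual magnitude at x (a stand-in here;
    in a real PINN it would evaluate the residual of the current network).
    Regions where the residual is large receive more collocation points.
    """
    rng = random.Random(seed)
    candidates = [rng.uniform(0.0, 1.0) for _ in range(n_candidates)]
    weights = [abs(residual_fn(x)) ** k for x in candidates]
    return rng.choices(candidates, weights=weights, k=n_points)

# Stand-in residual with a sharp peak near x = 0.5 (mimicking, e.g., a
# stress concentration or crack-tip region).
residual = lambda x: math.exp(-((x - 0.5) / 0.05) ** 2)

pts = adaptive_resample(residual)
near_peak = sum(1 for x in pts if abs(x - 0.5) < 0.1)
# Sampled points cluster in the high-residual region around x = 0.5.
```

In a full PINN workflow, such resampling would typically be repeated every few hundred optimizer iterations, so the collocation set tracks the evolving residual landscape.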

5.2 Loss Function and Activation Function Design

Governing equations in solid mechanics are often stiff, multi-scale, and dimensionally heterogeneous, leading to large imbalances among residual terms in standard PINN loss functions. These imbalances can cause training instability, slow convergence, and biased solutions. To address this issue, recent studies have focused on designing physics-aware, automatically balanced loss formulations.

A notable example is the least squares weighted residual (LSWR) loss proposed by Bai et al. [74], which introduces dimensionless scaling and weighted-residual integration to mitigate inconsistencies among physical terms. This approach has been shown to significantly improve the accuracy of displacement and stress predictions in two- and three-dimensional elasticity problems. Beyond loss formulation, the design of activation functions has also been shown to influence optimization stability. Zhang and Ding [146] demonstrated that adaptive combinations of smooth activation functions can accelerate convergence and improve solution accuracy across a range of linear and nonlinear solid mechanics problems. Together, these developments enhance the robustness of PINNs in stiffness-dominated and multi-scale settings.
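The underlying idea of automatic loss balancing can be sketched with a generic magnitude-based scaling scheme (an illustration only, not the LSWR formulation of [74] or the adaptive activations of [146]); the residual values below are hypothetical stand-ins for PDE and boundary-condition terms of very different physical scales.

```python
def balance_weights(loss_terms, eps=1e-12):
    """Return weights that scale each loss term to a comparable magnitude.

    Each term is weighted by the mean term magnitude divided by its own
    magnitude, so no single residual (e.g., a stiff PDE term with large
    dimensional units) dominates the gradient update.
    """
    mean_mag = sum(abs(t) for t in loss_terms) / len(loss_terms)
    return [mean_mag / (abs(t) + eps) for t in loss_terms]

# Residual magnitudes of very different scales (dimensional imbalance):
# e.g., a PDE residual in Pa, a traction BC, and a displacement BC.
terms = [1e4, 1.0, 1e-3]
w = balance_weights(terms)
balanced = [wi * ti for wi, ti in zip(w, terms)]
# After weighting, every term contributes essentially the same magnitude.
```

Practical schemes usually update such weights with a moving average during training rather than recomputing them from scratch, which avoids oscillations in the optimization.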

5.3 Domain Decomposition and Subdomain Learning

Many solid mechanics problems involve strong material heterogeneity, multi-phase domains, sharp interfaces, and multi-scale behavior. In such settings, global neural representations often struggle to capture localized features without excessive network capacity. To address this limitation, domain decomposition and subdomain learning strategies have been introduced.

By partitioning the computational domain into subregions, each represented by a separate neural network, PINNs can learn localized behaviors such as wave interactions, interfacial effects, and sharp gradients while maintaining global consistency through interface constraints. This strategy has been shown to be effective in composite materials [83,100] and micromechanical systems [58]. More broadly, subdomain learning mitigates spectral bias and capacity limitations, making PINNs more suitable for large-scale and heterogeneous solid mechanics simulations.
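The interface constraints that couple subdomain networks can be sketched in one dimension as a penalty on the jump in the field and its flux across the interface. The two callables below stand in for subdomain networks, and the central finite-difference flux is an illustrative choice (derivatives would normally come from automatic differentiation).

```python
def interface_loss(u_left, u_right, x_iface, h=1e-5):
    """Penalty enforcing continuity of the field and its flux at a
    subdomain interface, as used to couple per-subdomain networks.

    u_left, u_right: callables standing in for the two subdomain networks.
    """
    du_l = (u_left(x_iface + h) - u_left(x_iface - h)) / (2 * h)
    du_r = (u_right(x_iface + h) - u_right(x_iface - h)) / (2 * h)
    value_jump = u_left(x_iface) - u_right(x_iface)
    flux_jump = du_l - du_r
    return value_jump ** 2 + flux_jump ** 2

# Two candidate fields that agree at x = 0.5 in both value and slope:
u1 = lambda x: x * x
u2 = lambda x: x - 0.25          # matches u1 and u1' at x = 0.5
loss_matched = interface_loss(u1, u2, 0.5)                 # ~0: compatible
loss_mismatch = interface_loss(u1, lambda x: 2 * x, 0.5)   # > 0: jump penalized
```

Adding such terms to the total loss lets each subdomain network specialize locally while the shared interface conditions maintain global consistency.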

5.4 Boundary-Condition Enforcement and Hard Constraints

In solid mechanics, small boundary errors can propagate into large inaccuracies in stress and strain, particularly in stiff or high-order systems. Traditional PINNs impose boundary conditions through soft penalty terms, which can cause boundary drift, slow convergence, or inaccurate stress recovery.

To mitigate this issue, hard-constraint formulations have been introduced, in which boundary conditions are enforced a priori through the construction of the trial solution rather than penalized during optimization. Distance-function-based approaches embed Dirichlet boundary conditions analytically into the neural network approximation by multiplying the network output with a geometry-aware distance function that vanishes exactly on the boundary, thereby eliminating the need for explicit penalty terms and significantly improving accuracy and training stability [53]. This strategy not only guarantees exact satisfaction of essential boundary conditions but also simplifies the loss function to focus exclusively on governing-equation residuals in the interior domain. Similarly, exact Dirichlet PINNs (EPINNs) enforce boundary conditions directly within the network architecture, reducing computational overhead and accelerating convergence [75]. The parametric extended PINN (P-XPINN) framework proposed by Cao and Wang [147] further combines hard constraints with subdomain decomposition, enabling robust handling of irregular geometries and mixed boundary conditions.
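The distance-function construction can be sketched in one dimension as follows; the boundary interpolant g, the distance function d, and the stand-in "network" are illustrative assumptions rather than the specific formulations of [53] or [75]. The key property is that the Dirichlet conditions hold exactly for any network output, so no boundary penalty is needed.

```python
import math

def hard_constrained_trial(network, a=1.0, b=3.0):
    """Build a trial solution on [0, 1] satisfying u(0) = a and u(1) = b
    exactly for ANY network output, via a distance function d(x) = x(1 - x)
    that vanishes on the boundary.

    network: any callable standing in for the neural network.
    """
    g = lambda x: a * (1 - x) + b * x        # interpolates the boundary data
    d = lambda x: x * (1 - x)                # vanishes at x = 0 and x = 1
    return lambda x: g(x) + d(x) * network(x)

# Even with an arbitrary (untrained) stand-in network, the essential
# boundary conditions hold exactly; only the interior is left to training.
u = hard_constrained_trial(lambda x: math.sin(7 * x) + 100.0)
# u(0.0) == 1.0 and u(1.0) == 3.0 regardless of the network values.
```

In higher dimensions, d(x) is replaced by an approximate distance function to the Dirichlet boundary, which is what makes the approach geometry-aware.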

5.5 Multi-Fidelity Learning and Transfer Learning

Experimental data in solid mechanics are often sparse, noisy, or costly to acquire, particularly for long-term degradation, fatigue, and failure processes. This scarcity limits the applicability of purely data-driven or single-fidelity PINN models. To overcome this limitation, multi-fidelity and transfer-learning strategies have been developed.

Multi-fidelity PINNs combine inexpensive low-fidelity physics models with sparse high-fidelity data, significantly improving prediction accuracy while reducing computational cost [134,138]. Complementarily, transfer learning accelerates convergence by pretraining models on synthetic or simulation-generated datasets and subsequently fine-tuning them using limited experimental data. This paradigm has been demonstrated in porosity estimation for AM components [141] and long-duration structural dynamics analyses [148].

The effectiveness of transfer learning within PINN frameworks has been further demonstrated in solid mechanics applications [14,35]. In these approaches, PINN models are first trained on simplified scenarios that are easier to converge and then fine-tuned under more complex loading or geometric conditions that would otherwise be difficult to learn directly. By reusing previously learned physics-informed representations, transfer learning enhances robustness, improves data efficiency, and reduces training time in practical engineering problems with limited measurements. Collectively, these strategies enable PINNs to generalize across geometries, loading conditions, and materials while preserving physical consistency.
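The warm-start mechanism behind these transfer-learning strategies can be illustrated with a deliberately minimal single-parameter sketch, in which two quadratic losses stand in for the pretraining task and the harder fine-tuning task; the learning rate, tolerance, and loss shapes are hypothetical.

```python
def train(grad, w0, lr=0.05, tol=1e-6, max_iter=100000):
    """Plain gradient descent; returns the solution and iteration count."""
    w = w0
    for it in range(max_iter):
        g = grad(w)
        if abs(g) < tol:
            return w, it
        w -= lr * g
    return w, max_iter

# A "simple" scenario (easy to converge) and a harder, related one.
grad_simple = lambda w: w - 2.0       # minimizer w = 2.0
grad_complex = lambda w: w - 2.2      # nearby minimizer w = 2.2

w_pre, _ = train(grad_simple, w0=0.0)        # pretraining stage
_, it_cold = train(grad_complex, w0=0.0)     # training from scratch
_, it_warm = train(grad_complex, w0=w_pre)   # fine-tuning from the pretrained state
# Warm-started fine-tuning needs fewer iterations than the cold start.
```

Real PINN transfer learning warm-starts thousands of network parameters rather than one scalar, and often freezes early layers, but the convergence benefit arises from the same proximity of the pretrained state to the new optimum.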

5.6 Hybridization with Finite Element Methods

Despite their flexibility, standalone PINNs often face challenges related to robustness, scalability, and industrial trust. For structural systems composed of multiple components, separate neural networks are typically required to represent individual structural members, while compatibility and equilibrium conditions must be enforced at joints [77]. When discontinuities arise at interfaces, such as jumps in internal forces, material properties, or stiffness, these discontinuities must be known or estimated a priori and explicitly enforced through additional loss terms. This requirement complicates training and often leads to slow convergence or reduced stability. Moreover, classical PINN formulations are most naturally defined for simple geometries, such as straight beams or plates, and may struggle to represent complex domains with internal boundaries, notches, or irregular topologies.

In contrast, conventional numerical methods such as FEM can naturally handle complex geometries, multiple components, and interface discontinuities through discretization and weak-form formulations, without requiring explicit prior knowledge of jump magnitudes. To bridge the gap between physics-informed learning and established engineering practice, recent research has increasingly focused on hybrid PINN–FEM frameworks, which aim to combine the geometric flexibility and numerical robustness of FEM with the data efficiency and interpretability of physics-informed learning [149–151].

Finite element–integrated neural network formulations embed weak-form discretization directly into the PINN architecture, enabling stable and accurate forward simulations [36] as well as efficient inverse identification of material properties [34] in both elastic and elastoplastic solids. More recent extensions incorporate transfer learning into the finite element–integrated neural network formulation, where pretrained forward models using coarse meshes are employed to initialize analyses on finer meshes, yielding substantially faster convergence and improved robustness [35]. These developments demonstrate that FEM-inspired formulations can significantly enhance the stability and scalability of PINNs while preserving their differentiable and physics-consistent structure.

A further limitation of classical PINNs is their difficulty in representing complex finite geometries, topological connectivity, and internal boundaries. To address these challenges, a recent hybrid approach termed Finite-PINN incorporates finite-domain geometric information through geometric encoding based on the eigenfunctions of the Laplace–Beltrami operator [152]. By augmenting Euclidean coordinates with topology-aware basis functions, this framework enables PINNs to more faithfully represent the geometry and connectivity of solid structures, improving robustness and convergence for problems involving notches, internal boundaries, and complex domains.

Inspired by variational principles in FEM, the parametric deep energy method (P-DEM) formulates PINN training as the minimization of total potential energy in a parametric reference domain mapped to the physical domain via NURBS-based geometry representations [153]. This approach enables accurate treatment of complex geometries, efficient numerical integration, and natural enforcement of Neumann boundary conditions, while producing smooth displacement fields suitable for higher-order continuum theories. Together, these geometry-aware and energy-based hybrid formulations alleviate key geometric and stability limitations of classical PINNs while preserving their mesh-free and solver-free characteristics.
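The variational principle underlying such energy-based formulations can be illustrated with a one-dimensional reduction (not the NURBS-based P-DEM implementation of [153]): for u'' = -f with f = 2 and homogeneous Dirichlet conditions on [0, 1], minimizing the total potential energy over a one-parameter trial family recovers the exact solution u(x) = x(1 - x). The trial family and quadrature are illustrative choices.

```python
def potential_energy(c, n=2000):
    """Total potential energy Pi(u) = integral of (1/2 u'^2 - f u) on [0, 1]
    for the one-parameter trial u(x) = c * x * (1 - x) with f = 2.

    Minimizing Pi over c recovers the exact solution (c = 1) of u'' = -f,
    u(0) = u(1) = 0 -- the principle behind energy-based PINN losses, which
    need only first derivatives instead of the full second-order residual.
    """
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h                  # midpoint quadrature
        du = c * (1.0 - 2.0 * x)           # u'(x)
        u = c * x * (1.0 - x)
        total += (0.5 * du * du - 2.0 * u) * h
    return total

# Scan the single parameter: the energy minimum sits at c = 1.
cs = [i / 100 for i in range(0, 301)]
c_best = min(cs, key=potential_energy)
```

In an actual deep energy method, c is replaced by the network parameters and the scan by a gradient-based optimizer, but the loss remains this potential-energy functional.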

Overall, hybrid PINN–FEM approaches provide a promising pathway toward industrial adoption by combining the numerical robustness, geometric fidelity, and weak-form stability of classical solvers with the flexibility and differentiability of physics-informed learning. These frameworks enable PINNs to move beyond simplified benchmark problems and toward realistic, large-scale engineering applications involving complex geometries, discontinuities, and heterogeneous materials.

6  Prospects and Challenges

Physics-informed neural networks have rapidly evolved from simple PDE-residual formulations into a diverse family of methods, including variational PINNs, enhanced constitutive networks, domain-decomposition strategies, multi-fidelity architectures, and hybrid FEM–PINN frameworks. Together, these developments substantially expand the scope of PINNs in solid and structural mechanics and are expected to further extend the applicability of physics-informed methodologies to nonlinear, multiscale, and data-scarce engineering problems.

Viewed collectively, these methodological advances can be interpreted as targeted solutions to the core numerical and physical challenges of solid and structural mechanics. Adaptive training resolves localization and singularities; balanced loss formulations improve stability in stiff and multi-scale systems; domain decomposition enables accurate treatment of heterogeneity and interfaces; hard-constraint formulations ensure physically consistent boundary enforcement; and multi-fidelity and FEM-integrated strategies address data scarcity, scalability, and engineering trust. This problem-driven evolution highlights how PINNs are transitioning from generic neural solvers into specialized, physics-aware computational tools for solid and structural mechanics.

Looking ahead, several promising opportunities emerge. A major research trajectory involves developing scalable PINN frameworks for large-scale 3D structural systems. Domain-decomposition strategies have shown strong potential for reducing computational costs and improving local accuracy, particularly in simulations characterized by steep gradients, material interfaces, or multi-region physics. When integrated with adaptive mesh refinement or data-driven subdomain partitioning, these strategies could enable efficient and robust modeling of full-scale engineering structures. Variational, weak-form, and FEM-integrated formulations are also expected to play an increasingly pivotal role in future computational mechanics. These approaches mitigate the need for high-order derivatives, improve numerical conditioning, and naturally incorporate constitutive models and finite-element operators. As a result, they offer a viable pathway toward industrial-scale solvers capable of addressing various complex problems without requiring dense collocation sets.

Another important direction is the development of task-specific PINN architectures tailored to key classes of problems in solid mechanics, such as multiscale homogenization, fracture and fatigue, structural health monitoring, and topology optimization. Existing studies show that when the architecture, feature embeddings, and loss design are carefully aligned with the governing physics, PINNs can effectively capture complex phenomena, such as size effects, strong material heterogeneity, large deformations, and defect-driven fatigue. Establishing systematic, domain-specific design principles, covering trial-space selection, regularization strategies, and boundary-condition enforcement, is likely to enhance reliability and reduce the current dependence on manual hyperparameter tuning.

A growing area of opportunity lies in the integration of physics-informed surrogates into digital twins, real-time monitoring, and decision-support systems. Surrogate PINNs trained on combined simulation data and sparse measurements [22,127] have already demonstrated strong capabilities in predicting stress, strain, and damage fields. When coupled with multi-fidelity and transfer-learning techniques, these surrogates can be rapidly adapted across different geometries, loading conditions, and materials, making them well-suited for structural integrity assessment, condition-based maintenance, and process control in manufacturing. Expanding research on uncertainty quantification and reliability highlights the increasingly important role of physics-informed learning in safety-critical applications. By embedding physics within the predictive model while providing calibrated uncertainty measures, these approaches offer a pathway toward trustworthy and certifiable surrogates suitable for design codes, reliability assessment, and risk-informed engineering decision making.

Despite these promising prospects, several fundamental challenges remain. The primary limitation is computational cost and scalability: training PINNs for large-scale 3D domains [154,155], long-time dynamics, or fine multiscale features remains expensive. Although domain decomposition, multi-fidelity training, and FE-integrated strategies mitigate some of these costs, further advances in parallelization, adaptive sampling, model reduction, and efficient automatic differentiation on modern hardware are still needed. Another critical challenge is training robustness and hyperparameter sensitivity. PINN performance strongly depends on architecture design, activation functions, loss formulation, and weighting between data and physics terms. While improved loss functions and adaptive strategies have enhanced robustness, no unified guidelines exist for selecting these components across different solid mechanics problems. Systematic weighting schemes, diagnostic tools, and automated training protocols remain important open areas. Handling discontinuities and localized phenomena constitutes another major challenge. Cracks, interfaces, contact, and sharp material transitions introduce non-smooth fields that classical PINNs struggle to represent. Moreover, experimental validation and standardized benchmarks remain limited. Many studies rely primarily on synthetic or small-scale examples; rigorous comparisons against high-fidelity FEM simulations and controlled experiments are critical for establishing trustworthiness and enabling adoption in engineering practice.

Recently, several efforts have been made to combine PINNs with the mature FEM [34,36,150,152]. This approach offers attractive features that address key limitations of PINNs, e.g., analysis of irregular domains and reduced computational cost. Furthermore, it provides great potential for incorporating PINNs into commercial finite element packages. If this can be achieved, applications of PINNs are expected to grow rapidly, since the bottleneck of computational cost can be resolved.

Furthermore, gradients of the loss function with respect to the trainable weights and biases are required whenever the neural networks are trained by gradient-based optimization algorithms. In this scenario, automatic differentiation is widely employed owing to its precision. Unfortunately, automatic differentiation must be performed at every iteration, which constitutes a computational burden. The training process typically involves thousands of iterations; if symbolic expressions for the gradients were obtained once at the initial stage of training, they could be reused thousands of times. The execution time for evaluating such symbolic expressions is likely much shorter than that of performing automatic differentiation at every iteration. Further endeavors are essential to explore the potential advantages of this approach.
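This derive-once, reuse-many-times idea can be sketched with a toy quadratic loss whose gradient expression is obtained once (by hand here, standing in for a symbolic engine) and then re-evaluated cheaply inside the training loop; the data, learning rate, and iteration count are illustrative.

```python
def make_gradient(xs, ys):
    """Return a reusable gradient function for L(w) = sum((w*x - y)^2).

    The closed-form derivative dL/dw = 2 * sum(x * (w*x - y)) is derived
    once, outside the training loop (standing in for a symbolic engine),
    and then re-evaluated at every iteration instead of re-running
    automatic differentiation each time.
    """
    def grad(w):
        return 2.0 * sum(x * (w * x - y) for x, y in zip(xs, ys))
    return grad

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]          # exactly y = 2x, so the minimizer is w = 2
grad = make_gradient(xs, ys)  # symbolic work done once

w = 0.0
for _ in range(200):          # the training loop reuses the fixed expression
    w -= 0.01 * grad(w)
# w converges toward 2.0
```

For deep networks the symbolic expressions grow rapidly with depth, so whether this trade-off pays off in practice is precisely the open question raised above.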

7  Conclusion

This review offers a comprehensive, domain-focused synthesis of recent progress in PINNs for computational solid and structural mechanics. By integrating bibliometric analysis with methodological and application-oriented perspectives, this work moves beyond general surveys of PINNs and clarifies how physics-informed learning is adapted to address mechanics-specific challenges.

The bibliometric analysis shows a marked increase in publications since 2018, with dominant research clusters focused on numerical modeling, structural analysis, and forecasting, driven by strong global contributions from major research communities in China, the United States, and Europe. These trends indicate that PINNs are transitioning from exploratory investigations toward more mature, application-driven studies, alongside the development of increasingly advanced methodologies.

Based on this landscape, the reviewed literature was organized into five major application domains, including forward structural analysis, parameter identification, structural and topology optimization, structural integrity and durability, and manufacturing-related problems. Across these areas, PINNs have demonstrated strong potential for continuous-field prediction, inverse inference from sparse data, solver-free or solver-accelerated optimization, and the modeling of fracture, fatigue, and process–structure–property relationships.

A central contribution of this review is its problem-driven synthesis of recent methodological advances. Adaptive training strategies address localized features and singularities; balanced loss formulations improve stability in stiff and multiscale systems; domain decomposition enables the treatment of heterogeneity and interfaces; hard-constraint formulations enhance boundary-condition enforcement; and multi-fidelity, transfer-learning, and FEM-integrated approaches mitigate data scarcity, scalability limitations, and concerns regarding engineering reliability. Together, these developments highlight the transition of PINNs from generic neural solvers to specialized, physics-aware tools for solid and structural mechanics.

Despite this progress, key challenges remain, including high computational cost, sensitivity to hyperparameters, limited robustness in the presence of discontinuities, and the lack of standardized benchmarks and experimental validation. Looking forward, promising directions include large-scale 3D modeling, task-specific architecture design, uncertainty-aware formulations, digital twin integration, and industrial deployment. With continued methodological refinement and rigorous validation, PINNs have the potential to become a core component of next-generation computational mechanics, complementing traditional solvers while enabling new capabilities in inverse modeling, real-time inference, and data-constrained simulation.

Acknowledgement: This research work is partially funded by Chiang Mai University (CMU), Thailand. The first author gratefully acknowledges the support from the CMU Proactive Researcher Program. In addition, ChatGPT 5.2 was used as an AI-assisted tool to summarize key ideas from the literature for drafting purposes and to improve clarity in selected sections. The authors have carefully reviewed, revised, and finalized all content and take full responsibility for the final published work.

Funding Statement: This project is partially funded by National Research Council of Thailand (contract No. N42A671047). Additionally, this research is a part of the project “A Strategic Roadmap Toward the Next Level of Intelligent, Sustainable, and Human-Centered SME: SME 5.0” from the European Union’s Horizon 2021 research and innovation program under the Marie Skłodowska-Curie Grant agreement No. 101086487.

Author Contributions: Itthidet Thawon: Data curation, Formal analysis, Methodology, Investigation, Software, Writing—original draft, Writing—review and editing. Duy Vo: Formal analysis, Investigation, Validation, Writing—review and editing. Tinh Quoc Bui: Resources, Supervision. Kanya Rattanamongkhonkun: Writing—review and editing. Chakkapong Chamroon: Writing—review and editing. Nakorn Tippayawong: Funding acquisition, Supervision. Ramnarong Wanison: Software, Visualization. Yuttana Mona: Data curation, Visualization. Pana Suttakul: Conceptualization, Methodology, Project administration, Validation, Writing—review and editing. All authors reviewed and approved the final version of the manuscript.

Availability of Data and Materials: All necessary data are included in this article. Additional data are available from the corresponding author upon reasonable request.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Ng WL, Goh GL, Goh GD, Ten JSJ, Yeong WY. Progress and opportunities for machine learning in materials and processes of additive manufacturing. Adv Mater. 2024;36(34):2310006. doi:10.1002/adma.202310006. [Google Scholar] [PubMed] [CrossRef]

2. Thawon I, Suttakul P, Wanison R, Mona Y, Tippayawong KY, Tippayawong N. Integrating explainable artificial intelligence in machine learning models to enhance the interpretation of elastic behaviors in three-dimensional-printed triangular lattice plates. Eng Appl Artif Intell. 2025;144(32):110148. doi:10.1016/j.engappai.2025.110148. [Google Scholar] [CrossRef]

3. Suttakul P, Vo D, Fongsamootr T, Wanison R, Mona Y, Katongtung T, et al. The role of machine learning for insight into the material behavior of lattices: a surrogate model based on data from finite element simulation. Results Eng. 2024;23(2):102547. doi:10.1016/j.rineng.2024.102547. [Google Scholar] [CrossRef]

4. Wang Q, Yao G, Kong G, Wei L, Yu X, Zeng J, et al. A data-driven model for predicting fatigue performance of high-strength steel wires based on optimized XGBOOST. Eng Fail Anal. 2024;164(11):108710. doi:10.1016/j.engfailanal.2024.108710. [Google Scholar] [CrossRef]

5. Ni B, Gao H. A deep learning approach to the inverse problem of modulus identification in elasticity. MRS Bull. 2021;46(1):19–25. doi:10.1557/s43577-020-00006-y. [Google Scholar] [CrossRef]

6. Frank M, Drikakis D, Charissis V. Machine-learning methods for computational science and engineering. Computation. 2020;8(1):15. doi:10.3390/computation8010015. [Google Scholar] [CrossRef]

7. Choudhary K, DeCost B, Chen C, Jain A, Tavazza F, Cohn R, et al. Recent advances and applications of deep learning methods in materials science. npj Comput Mater. 2022;8(1):59. doi:10.1038/s41524-022-00734-6. [Google Scholar] [CrossRef]

8. Raissi M, Karniadakis GE. Hidden physics models: machine learning of nonlinear partial differential equations. J Comput Phys. 2018;357(4):125–41. doi:10.1016/j.jcp.2017.11.039. [Google Scholar] [CrossRef]

9. Wang C, Li S. On variational Bayesian inference theory of hyperelasticity for inversely recovering nonlinear continuum deformation mappings. Comput Meth Appl Mech Eng. 2026;448(2):118448. doi:10.1016/j.cma.2025.118448. [Google Scholar] [CrossRef]

10. Wang C, Li S. A variational Bayesian inference theory of elasticity and its mixed probabilistic finite element method for inverse deformation solutions in any dimension. IEEE Trans Pattern Anal Mach Intell. 2025;47(6):4505–16. doi:10.1109/tpami.2025.3542423. [Google Scholar] [PubMed] [CrossRef]

11. Cuomo S, Di Cola VS, Giampaolo F, Rozza G, Raissi M, Piccialli F. Scientific machine learning through physics-informed neural networks: where we are and what’s next. J Sci Comput. 2022;92(3):88. doi:10.1007/s10915-022-01939-z. [Google Scholar] [CrossRef]

12. Moayedi RZ, Abbaszadeh M, Dehghan M. CuPINN: optimizing PINNs through curvature minimization and residual landscape flattening. Comput Meth Appl Mech Eng. 2025;445(5):118180. doi:10.1016/j.cma.2025.118180. [Google Scholar] [CrossRef]

13. Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys. 2019;378:686–707. doi:10.1016/j.jcp.2018.10.045. [Google Scholar] [CrossRef]

14. Xu C, Cao BT, Yuan Y, Meschke G. Transfer learning based physics-informed neural networks for solving inverse problems in engineering structures under different loading scenarios. Comput Meth Appl Mech Eng. 2023;405(7553):115852. doi:10.1016/j.cma.2022.115852. [Google Scholar] [CrossRef]

15. Batuwatta-Gamage CP, Rathnayaka C, Karunasena HCP, Jeong H, Karim A, Gu YT. A novel physics-informed neural networks approach (PINN-MT) to solve mass transfer in plant cells during drying. Biosyst Eng. 2023;230(1):219–41. doi:10.1016/j.biosystemseng.2023.04.012. [Google Scholar] [CrossRef]

16. Chen XX, Zhang P, Yin ZY. Physics-Informed neural network solver for numerical analysis in geoengineering. Georisk Assess Manag Risk Eng Syst Geohazards. 2024;18(1):33–51. doi:10.1080/17499518.2024.2315301. [Google Scholar] [CrossRef]

17. Marian M, Tremmel S. Physics-informed machine learning—An emerging trend in tribology. Lubricants. 2023;11(11):463. doi:10.3390/lubricants11110463. [Google Scholar] [CrossRef]

18. Harp DR, O’Malley D, Yan B, Pawar R. On the feasibility of using physics-informed machine learning for underground reservoir pressure management. Expert Syst Appl. 2021;178(1):115006. doi:10.1016/j.eswa.2021.115006. [Google Scholar] [CrossRef]

19. Stiasny J, Chatzivasileiadis S. Physics-informed neural networks for time-domain simulations: accuracy, computational cost, and flexibility. Electr Power Syst Res. 2023;224(2):109748. doi:10.1016/j.epsr.2023.109748. [Google Scholar] [CrossRef]

20. Li W, Bazant MZ, Zhu J. A physics-guided neural network framework for elastic plates: comparison of governing equations-based and energy-based approaches. Comput Meth Appl Mech Eng. 2021;383(34):113933. doi:10.1016/j.cma.2021.113933. [Google Scholar] [CrossRef]

21. Clark Di Leoni P, Agarwal K, Zaki TA, Meneveau C, Katz J. Reconstructing turbulent velocity and pressure fields from under-resolved noisy particle tracks using physics-informed neural networks. Exp Fluids. 2023;64(5):95. doi:10.1007/s00348-023-03629-4. [Google Scholar] [CrossRef]

22. de O Teloli R, Tittarelli R, Bigot M, Coelho L, Ramasso E, Le Moal P, et al. A physics-informed neural networks framework for model parameter identification of beam-like structures. Mech Syst Signal Process. 2025;224(4):112189. doi:10.1016/j.ymssp.2024.112189. [Google Scholar] [CrossRef]

23. Fallah A, Aghdam MM. Physics-informed neural network for bending and free vibration analysis of three-dimensional functionally graded porous beam resting on elastic foundation. Eng Comput. 2024;40(1):437–54. doi:10.1007/s00366-023-01799-7. [Google Scholar] [CrossRef]

24. Yin X, Huang Z, Liu Y. Bridge damage identification under the moving vehicle loads based on the method of physics-guided deep neural networks. Mech Syst Signal Process. 2023;190(1):110123. doi:10.1016/j.ymssp.2023.110123. [Google Scholar] [CrossRef]

25. Sharma P, Chung WT, Akoush B, Ihme M. A review of physics-informed machine learning in fluid mechanics. Energies. 2023;16(5):2343. doi:10.3390/en16052343. [Google Scholar] [CrossRef]

26. Faroughi SA, Raissi M, Das S, Kalantari NK, Kourosh Mahjour S. Physics-guided, physics-informed, and physics-encoded neural networks and operators in scientific computing: fluid and solid mechanics. J Comput Inf Sci Eng. 2024;24(4):040802. doi:10.1115/1.4064449. [Google Scholar] [CrossRef]

27. Huang B, Wang J. Applications of physics-informed neural networks in power systems—a review. IEEE Trans Power Syst. 2023;38(1):572–88. doi:10.1109/tpwrs.2022.3162473. [Google Scholar] [CrossRef]

28. Liu X, Zhang X, Peng W, Zhou W, Yao W. A novel meta-learning initialization method for physics-informed neural networks. Neural Comput Appl. 2022;34(17):14511–34. doi:10.1007/s00521-022-07294-2. [Google Scholar] [CrossRef]

29. Son H, Cho SW, Hwang HJ. Enhanced physics-informed neural networks with Augmented Lagrangian relaxation method (AL-PINNs). Neurocomputing. 2023;548(1):126424. doi:10.1016/j.neucom.2023.126424. [Google Scholar] [CrossRef]

30. Xu S, Yan C, Zhang G, Sun Z, Huang R, Ju S, et al. Spatiotemporal parallel physics-informed neural networks: a framework to solve inverse problems in fluid mechanics. Phys Fluids. 2023;35(6):065141. doi:10.1063/5.0155087. [Google Scholar] [CrossRef]

31. Zheng B, Li T, Qi H, Gao L, Liu X, Yuan L. Physics-informed machine learning model for computational fracture of quasi-brittle materials without labelled data. Int J Mech Sci. 2022;223(6481):107282. doi:10.1016/j.ijmecsci.2022.107282. [Google Scholar] [CrossRef]

32. Kodakkal A, Meethal R, Obst B, Wüchner R. A finite element method—informed neural network for uncertainty quantification. In: Proceedings of the 14th WCCM-ECCOMAS Congress; 2020 Jul 19–24; Paris, France. doi:10.23967/wccm-eccomas.2020.017. [Google Scholar] [CrossRef]

33. Le-Duc T, Nguyen-Xuan H, Lee J. A finite-element-informed neural network for parametric simulation in structural mechanics. Finite Elem Anal Des. 2023;217(3):103904. doi:10.1016/j.finel.2022.103904. [Google Scholar] [CrossRef]

34. Xu K, Zhang N, Yin ZY, Li K. Finite element-integrated neural network for inverse analysis of elastic and elastoplastic boundary value problems. Comput Meth Appl Mech Eng. 2025;436(3):117695. doi:10.1016/j.cma.2024.117695. [Google Scholar] [CrossRef]

35. Zhang N, Xu K, Yin ZY, Li KQ. Transfer learning-enhanced finite element-integrated neural networks. Int J Mech Sci. 2025;290(3):110075. doi:10.1016/j.ijmecsci.2025.110075. [Google Scholar] [CrossRef]

36. Zhang N, Xu K, Yin Z, Li KQ, Jin YF. Finite element-integrated neural network framework for elastic and elastoplastic solids. Comput Meth Appl Mech Eng. 2025;433(1):117474. doi:10.1016/j.cma.2024.117474. [Google Scholar] [CrossRef]

37. Donthu N, Kumar S, Mukherjee D, Pandey N, Lim WM. How to conduct a bibliometric analysis: an overview and guidelines. J Bus Res. 2021;133(5):285–96. doi:10.1016/j.jbusres.2021.04.070. [Google Scholar] [CrossRef]

38. Passas I. Bibliometric analysis: the main steps. Encyclopedia. 2024;4(2):1014–25. doi:10.3390/encyclopedia4020065. [Google Scholar] [CrossRef]

39. Kirby A. Exploratory bibliometrics: using VOSviewer as a preliminary research tool. Publications. 2023;11(1):10. doi:10.3390/publications11010010. [Google Scholar] [CrossRef]

40. Effendi DN, Anggraini W, Jatmiko A, Rahmayanti H, Ichsan IZ, Rahman MM. Bibliometric analysis of scientific literacy using VOS viewer: analysis of science education. J Phys Conf Ser. 2021;1796:012096. doi:10.1088/1742-6596/1796/1/012096. [Google Scholar] [CrossRef]

41. Pawlik L, Wilk-Jakubowski JL, Frej D, Wilk-Jakubowski G. Applications of computational mechanics methods combined with machine learning and neural networks: a systematic review (2015–2025). Appl Sci. 2025;15(19):10816. doi:10.3390/app151910816. [Google Scholar] [CrossRef]

42. Su M, Peng H, Li S. A visualized bibliometric analysis of mapping research trends of machine learning in engineering (MLE). Expert Syst Appl. 2021;186(7):115728. doi:10.1016/j.eswa.2021.115728. [Google Scholar] [CrossRef]

43. Narayan S, Menacer B, Kaisan MU, Samuel J, Al-Lehaibi M, Mahroogi FO, et al. Global research trends in biomimetic lattice structures for energy absorption and deformation: a bibliometric analysis (2020–2025). Biomimetics. 2025;10(7):477. doi:10.3390/biomimetics10070477. [Google Scholar] [PubMed] [CrossRef]

44. Rahme P. A bibliometric analysis on drilling of composite materials using a systematic approach. Heliyon. 2024;10(17):e37282. doi:10.1016/j.heliyon.2024.e37282. [Google Scholar] [PubMed] [CrossRef]

45. Rifai AI, Angtony D, Saputra AJ, Prasetijo J. A bibliometric analysis of international structural engineering standards using VOS viewer. Eng Proc. 2025;84(1):75. doi:10.3390/engproc2025084075. [Google Scholar] [CrossRef]

46. Öztürk O, Kocaman R, Kanbach DK. How to design bibliometric research: an overview and a framework proposal. Rev Manag Sci. 2024;18(11):3333–61. doi:10.1007/s11846-024-00738-0. [Google Scholar] [CrossRef]

47. Pranckutė R. Web of Science (WoS) and Scopus: the titans of bibliographic information in today’s academic world. Publications. 2021;9(1):12. doi:10.3390/publications9010012. [Google Scholar] [CrossRef]

48. Gusenbauer M. Search where you will find most: comparing the disciplinary coverage of 56 bibliographic databases. Scientometrics. 2022;127(5):2683–745. doi:10.1007/s11192-022-04289-7. [Google Scholar] [PubMed] [CrossRef]

49. Singh VK, Singh P, Karmakar M, Leta J, Mayr P. The journal coverage of Web of Science, Scopus and Dimensions: a comparative analysis. Scientometrics. 2021;126(6):5113–42. doi:10.1007/s11192-021-03948-5. [Google Scholar] [CrossRef]

50. Haghighat E, Raissi M, Moure A, Gomez H, Juanes R. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput Meth Appl Mech Eng. 2021;379:113741. doi:10.1016/j.cma.2021.113741. [Google Scholar] [CrossRef]

51. Goswami S, Anitescu C, Chakraborty S, Rabczuk T. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. Theor Appl Fract Mech. 2020;106:102447. doi:10.1016/j.tafmec.2019.102447. [Google Scholar] [CrossRef]

52. Rao C, Sun H, Liu Y. Physics-informed deep learning for computational elastodynamics without labeled data. J Eng Mech. 2021;147(8):04021043. doi:10.1061/(asce)em.1943-7889.0001947. [Google Scholar] [CrossRef]

53. Sukumar N, Srivastava A. Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks. Comput Meth Appl Mech Eng. 2022;389(5):114333. doi:10.1016/j.cma.2021.114333. [Google Scholar] [CrossRef]

54. Nabian MA, Gladstone RJ, Meidani H. Efficient training of physics-informed neural networks via importance sampling. Computer Aided Civil Eng. 2021;36(8):962–77. doi:10.1111/mice.12685. [Google Scholar] [CrossRef]

55. Shen S, Lu H, Sadoughi M, Hu C, Nemani V, Thelen A, et al. A physics-informed deep learning approach for bearing fault detection. Eng Appl Artif Intell. 2021;103(2):104295. doi:10.1016/j.engappai.2021.104295. [Google Scholar] [CrossRef]

56. Wang H, Li B, Gong J, Xuan FZ. Machine learning-based fatigue life prediction of metal materials: perspectives of physics-informed and data-driven hybrid methods. Eng Fract Mech. 2023;284:109242. doi:10.1016/j.engfracmech.2023.109242. [Google Scholar] [CrossRef]

57. Meng Z, Qian Q, Xu M, Yu B, Yıldız AR, Mirjalili S. PINN-FORM: a new physics-informed neural network for reliability analysis with partial differential equation. Comput Meth Appl Mech Eng. 2023;414:116172. doi:10.1016/j.cma.2023.116172. [Google Scholar] [CrossRef]

58. Henkes A, Wessels H, Mahnken R. Physics informed neural networks for continuum micromechanics. Comput Meth Appl Mech Eng. 2022;393(4):114790. doi:10.1016/j.cma.2022.114790. [Google Scholar] [CrossRef]

59. Luong KA, Le-Duc T, Lee J. Automatically imposing boundary conditions for boundary value problems by unified physics-informed neural network. Eng Comput. 2024;40(3):1717–39. doi:10.1007/s00366-023-01871-2. [Google Scholar] [CrossRef]

60. Wang S, Yu X, Perdikaris P. When and why PINNs fail to train: a neural tangent kernel perspective. J Comput Phys. 2022;449:110768. doi:10.1016/j.jcp.2021.110768. [Google Scholar] [CrossRef]

61. Wang S, Teng Y, Perdikaris P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J Sci Comput. 2021;43(5):A3055–81. doi:10.1137/20m1318043. [Google Scholar] [CrossRef]

62. Wang X, Yin ZY, Wu W, Zhu HH. Differentiable finite element method with Galerkin discretization for fast and accurate inverse analysis of multidimensional heterogeneous engineering structures. Comput Meth Appl Mech Eng. 2025;437(5):117755. doi:10.1016/j.cma.2025.117755. [Google Scholar] [CrossRef]

63. Gu Y, Zhang C, Golub MV. Physics-informed neural networks for analysis of 2D thin-walled structures. Eng Anal Bound Elem. 2022;145(5):161–72. doi:10.1016/j.enganabound.2022.09.024. [Google Scholar] [CrossRef]

64. Haghighat E, Bekar AC, Madenci E, Juanes R. A nonlocal physics-informed deep learning framework using the peridynamic differential operator. Comput Meth Appl Mech Eng. 2021;385:114012. doi:10.1016/j.cma.2021.114012. [Google Scholar] [CrossRef]

65. Niu S, Zhang E, Bazilevs Y, Srivastava V. Modeling finite-strain plasticity using physics-informed neural network and assessment of the network performance. J Mech Phys Solids. 2023;172(23):105177. doi:10.1016/j.jmps.2022.105177. [Google Scholar] [CrossRef]

66. Abueidda DW, Lu Q, Koric S. Meshless physics-informed deep learning method for three-dimensional solid mechanics. Int J Numer Methods Eng. 2021;122(23):7182–201. doi:10.1002/nme.6828. [Google Scholar] [CrossRef]

67. Wang L, Liu G, Wang G, Zhang K. M-PINN: a mesh-based physics-informed neural network for linear elastic problems in solid mechanics. Int J Numer Methods Eng. 2024;125(9):e7444. doi:10.1002/nme.7444. [Google Scholar] [CrossRef]

68. Abueidda DW, Koric S, Guleryuz E, Sobh NA. Enhanced physics-informed neural networks for hyperelasticity. Int J Numer Methods Eng. 2023;124(7):1585–601. doi:10.1002/nme.7176. [Google Scholar] [CrossRef]

69. Ghaderi A, Morovati V, Dargazany R. A physics-informed assembly of feed-forward neural network engines to predict inelasticity in cross-linked polymers. Polymers. 2020;12(11):2628. doi:10.3390/polym12112628. [Google Scholar] [PubMed] [CrossRef]

70. Roy AM, Guha S. A data-driven physics-constrained deep learning computational framework for solving von Mises plasticity. Eng Appl Artif Intell. 2023;122(8):106049. doi:10.1016/j.engappai.2023.106049. [Google Scholar] [CrossRef]

71. Haghighat E, Abouali S, Vaziri R. Constitutive model characterization and discovery using physics-informed deep learning. Eng Appl Artif Intell. 2023;120(12):105828. doi:10.1016/j.engappai.2023.105828. [Google Scholar] [CrossRef]

72. Roy AM, Guha S, Sundararaghavan V, Arróyave R. Physics-infused deep neural network for solution of non-associative Drucker-Prager elastoplastic constitutive model. J Mech Phys Solids. 2024;185(12):105570. doi:10.1016/j.jmps.2024.105570. [Google Scholar] [CrossRef]

73. Bai J, Jeong H, Batuwatta-Gamage CP, Xiao S, Wang Q, Rathnayaka CM, et al. An introduction to programming physics-informed neural network-based computational solid mechanics. Int J Comput Methods. 2023;20(10):2350013. doi:10.1142/s0219876223500135. [Google Scholar] [CrossRef]

74. Bai J, Rabczuk T, Gupta A, Alzubaidi L, Gu Y. A physics-informed neural network technique based on a modified loss function for computational 2D and 3D solid mechanics. Comput Mech. 2023;71(3):543–62. doi:10.1007/s00466-022-02252-0. [Google Scholar] [CrossRef]

75. Wang J, Mo YL, Izzuddin B, Kim CW. Exact Dirichlet boundary Physics-informed Neural Network EPINN for solid mechanics. Comput Meth Appl Mech Eng. 2023;414:116184. doi:10.1016/j.cma.2023.116184. [Google Scholar] [CrossRef]

76. Singh V, Harursampath D, Dhawan S, Sahni M, Saxena S, Mallick R. Physics-informed neural network for solving a one-dimensional solid mechanics problem. Modelling. 2024;5(4):1532–49. doi:10.3390/modelling5040080. [Google Scholar] [CrossRef]

77. Deckert F, Lippold L, Most T, Könke C. Exploring the potential of physics-informed neural networks for the structural analysis of 2D frame structures. Appl Mech. 2025;6(4):84. doi:10.3390/applmech6040084. [Google Scholar] [CrossRef]

78. Peng LX, Sun JK, Tao YP, Huang ZM. Bending analysis of thin plates with variable stiffness resting on elastic foundation via a two-network strategy physics-informed neural network method. Structures. 2024;68(2):107051. doi:10.1016/j.istruc.2024.107051. [Google Scholar] [CrossRef]

79. Wang W, Thai HT. A physics-informed neural network framework for laminated composite plates under bending. Thin Walled Struct. 2025;210:113014. doi:10.1016/j.tws.2025.113014. [Google Scholar] [CrossRef]

80. Diao Y, Yang J, Zhang Y, Zhang D, Du Y. Solving multi-material problems in solid mechanics using physics-informed neural networks based on domain decomposition technology. Comput Meth Appl Mech Eng. 2023;413(8):116120. doi:10.1016/j.cma.2023.116120. [Google Scholar] [CrossRef]

81. Eshaghi MS, Bamdad M, Anitescu C, Wang Y, Zhuang X, Rabczuk T. Applications of scientific machine learning for the analysis of functionally graded porous beams. Neurocomputing. 2025;619(6):129119. doi:10.1016/j.neucom.2024.129119. [Google Scholar] [CrossRef]

82. Nopour R, Fallah A, Aghdam MM. Large deflection analysis of functionally graded reinforced sandwich beams with auxetic core using physics-informed neural network. Mech Based Des Struct Mach. 2025;53(7):5264–88. doi:10.1080/15397734.2025.2462674. [Google Scholar] [CrossRef]

83. Guo L, Zhao S, Yang J, Kitipornchai S. Input-optimized physics-informed neural networks for wave propagation problems in laminated structures. Eng Appl Artif Intell. 2025;141:109755. doi:10.1016/j.engappai.2024.109755. [Google Scholar] [CrossRef]

84. Zhou Q, Enos RS, Zhou K, Sun H, Zhang D, Tang J. Analysis of microstructure uncertainty propagation in fibrous composites empowered by physics-informed, semi-supervised machine learning. Comput Mater Sci. 2025;246:113423. doi:10.1016/j.commatsci.2024.113423. [Google Scholar] [CrossRef]

85. Khadijeh M, Kasbergen C, Erkens S, Varveri A. Exploring the roles of numerical simulations and machine learning in multiscale paving materials analysis: applications, challenges, best practices. Comput Meth Appl Mech Eng. 2025;433(10):117462. doi:10.1016/j.cma.2024.117462. [Google Scholar] [CrossRef]

86. Han S, Ye Q, Mahmoud HA, Elbarbary A. Nonlinear phase velocities in tri-directional functionally graded nanoplates coupled with NEMS patch using multi-physics simulation. Aerosp Sci Technol. 2025;156:109714. doi:10.1016/j.ast.2024.109714. [Google Scholar] [CrossRef]

87. Li S, Nie D, Zhang Y, Li L. Physics-informed neural network-based homogenization for architected lattice structures. Int J Mech Sci. 2025;306(9):110783. doi:10.1016/j.ijmecsci.2025.110783. [Google Scholar] [CrossRef]

88. Kianian O, Sarrami S, Movahedian B, Azhari M. PINN-based forward and inverse bending analysis of nanobeams on a three-parameter nonlinear elastic foundation including hardening and softening effect using nonlocal elasticity theory. Eng Comput. 2025;41(1):71–97. doi:10.1007/s00366-024-01985-1. [Google Scholar] [CrossRef]

89. Go MS, Noh HK, Hyuk Lim J. Real-time full-field inference of displacement and stress from sparse local measurements using physics-informed neural networks. Mech Syst Signal Process. 2025;224:112009. doi:10.1016/j.ymssp.2024.112009. [Google Scholar] [CrossRef]

90. Deng Y, Chen C, Wang Q, Li X, Fan Z, Li Y. Modeling a typical non-uniform deformation of materials using physics-informed deep learning: applications to forward and inverse problems. Appl Sci. 2023;13(7):4539. doi:10.3390/app13074539. [Google Scholar] [CrossRef]

91. Ghaffari Motlagh Y, Fathi F, Brigham JC, Jimack PK. Deep learning for inverse material characterization. Comput Meth Appl Mech Eng. 2025;436(6):117650. doi:10.1016/j.cma.2024.117650. [Google Scholar] [CrossRef]

92. Cha YJ, Ali R, Lewis J, Büyüköztürk O. Deep learning-based structural health monitoring. Autom Constr. 2024;161:105328. doi:10.1016/j.autcon.2024.105328. [Google Scholar] [CrossRef]

93. Kosova F, Altay Ö, Ünver HÖ. Structural health monitoring in aviation: a comprehensive review and future directions for machine learning. Nondestruct Test Eval. 2025;40(1):1–60. doi:10.1080/10589759.2024.2350575. [Google Scholar] [CrossRef]

94. Wang R, Li J, Li L, An S, Ezard B, Li Q, et al. Structural damage identification by using physics-guided residual neural networks. Eng Struct. 2024;318:118703. doi:10.1016/j.engstruct.2024.118703. [Google Scholar] [CrossRef]

95. Chen X, Wang Y, Zeng Q, Ren X, Li Y. A two-step scaled physics-informed neural network for non-destructive testing of hull rib damage. Ocean Eng. 2025;319(7):120260. doi:10.1016/j.oceaneng.2024.120260. [Google Scholar] [CrossRef]

96. Gao X, Zhang Y, Xiang Y, Li P, Liu X. An interface reconstruction method based on the physics-informed neural network: application to ultrasonic array imaging. IEEE Trans Instrum Meas. 2025;74:1–8. doi:10.1109/tim.2024.3522626. [Google Scholar] [CrossRef]

97. Lee S, Popovics JS. Ultrasonic defect detection in a concrete slab assisted by physics-informed neural networks. NDT E Int. 2025;151:103311. doi:10.1016/j.ndteint.2024.103311. [Google Scholar] [CrossRef]

98. Zhang C, Shafieezadeh A. Simulation-free reliability analysis with active learning and Physics-Informed Neural Network. Reliab Eng Syst Saf. 2022;226(2):108716. doi:10.1016/j.ress.2022.108716. [Google Scholar] [CrossRef]

99. Bharadwaja BVSS, Nabian MA, Sharma B, Choudhry S, Alankar A. Physics-informed machine learning and uncertainty quantification for mechanics of heterogeneous materials. Integr Mater Manuf Innov. 2022;11(4):607–27. doi:10.1007/s40192-022-00283-2. [Google Scholar] [CrossRef]

100. Yan CA, Vescovini R, Dozio L. A framework based on physics-informed neural networks and extreme learning for the analysis of composite structures. Comput Struct. 2022;265:106761. doi:10.1016/j.compstruc.2022.106761. [Google Scholar] [CrossRef]

101. Chen CT, Gu GX. Physics-informed deep-learning for elasticity: forward, inverse, and mixed problems. Adv Sci. 2023;10(18):2300439. doi:10.1002/advs.202300439. [Google Scholar] [PubMed] [CrossRef]

102. Kamali A, Sarabian M, Laksari K. Elasticity imaging using physics-informed neural networks: spatial discovery of elastic modulus and Poisson’s ratio. Acta Biomater. 2023;155(1):400–9. doi:10.1016/j.actbio.2022.11.024. [Google Scholar] [PubMed] [CrossRef]

103. Movahhedi M, Liu XY, Geng B, Elemans C, Xue Q, Wang JX, et al. Predicting 3D soft tissue dynamics from 2D imaging using physics informed neural networks. Commun Biol. 2023;6(1):541. doi:10.1038/s42003-023-04914-y. [Google Scholar] [PubMed] [CrossRef]

104. Zhang E, Yin M, Karniadakis GE. Physics-informed neural networks for nonhomogeneous material identification in elasticity imaging. arXiv:2009.04525. 2020. [Google Scholar]

105. Wu W, Daneker M, Turner KT, Jolley MA, Lu L. Identifying heterogeneous micromechanical properties of biological tissues via physics-informed neural networks. Small Meth. 2025;9(1):2400620. doi:10.1002/smtd.202400620. [Google Scholar] [PubMed] [CrossRef]

106. Wu W, Daneker M, Jolley MA, Turner KT, Lu L. Effective data sampling strategies and boundary condition constraints of physics-informed neural networks for identifying material properties in solid mechanics. Appl Math Mech Engl Ed. 2023;44(7):1039–68. doi:10.1007/s10483-023-2995-8. [Google Scholar] [PubMed] [CrossRef]

107. Söyleyici C, Ünver HÖ. A Physics-Informed Deep Neural Network based beam vibration framework for simulation and parameter identification. Eng Appl Artif Intell. 2025;141(3):109804. doi:10.1016/j.engappai.2024.109804. [Google Scholar] [CrossRef]

108. Tondo GR, Rau S, Kavrakov I, Morgenthal G. Stochastic stiffness identification and response estimation of Timoshenko beams via physics-informed Gaussian processes. Probab Eng Mech. 2023;74(2):103534. doi:10.1016/j.probengmech.2023.103534. [Google Scholar] [CrossRef]

109. Mai HT, Mai DD, Kang J, Lee J, Lee J. Physics-informed neural energy-force network: a unified solver-free numerical simulation for structural optimization. Eng Comput. 2024;40(1):147–70. doi:10.1007/s00366-022-01760-0. [Google Scholar] [CrossRef]

110. Wu H, Wu YC, Zhi P, Wu X, Zhu T. Structural optimization of single-layer domes using surrogate-based physics-informed neural networks. Heliyon. 2023;9(10):e20867. doi:10.1016/j.heliyon.2023.e20867. [Google Scholar] [PubMed] [CrossRef]

111. Jeong H, Bai J, Batuwatta-Gamage CP, Rathnayaka C, Zhou Y, Gu Y. A Physics-Informed Neural Network-based Topology Optimization (PINNTO) framework for structural optimization. Eng Struct. 2023;278:115484. doi:10.1016/j.engstruct.2022.115484. [Google Scholar] [CrossRef]

112. Jeong H, Batuwatta-Gamage C, Bai J, Xie YM, Rathnayaka C, Zhou Y, et al. A complete Physics-Informed Neural Network-based framework for structural topology optimization. Comput Meth Appl Mech Eng. 2023;417:116401. doi:10.1016/j.cma.2023.116401. [Google Scholar] [CrossRef]

113. Jeong H, Batuwatta-Gamage C, Bai J, Rathnayaka C, Zhou Y, Gu Y. An advanced physics-informed neural network-based framework for nonlinear and complex topology optimization. Eng Struct. 2025;322:119194. doi:10.1016/j.engstruct.2024.119194. [Google Scholar] [CrossRef]

114. Lai S, Feng J, Lv Z, Kim J, Li Y. A dual-energy physics-informed multi-material topology optimization method within the phase-field framework. Comput Meth Appl Mech Eng. 2025;447(1):118338. doi:10.1016/j.cma.2025.118338. [Google Scholar] [CrossRef]

115. Jeong H, Bai J, Batuwatta-Gamage C, Wegert ZJ, Mallon CN, Challis VJ, et al. Fourier feature embedded physics-informed neural network-based topology optimization (FF-PINNTO) framework for geometrically nonlinear structures. Comput Meth Appl Mech Eng. 2025;446(2):118244. doi:10.1016/j.cma.2025.118244. [Google Scholar] [CrossRef]

116. Yin J, Li S, Zhang Y, Wang H. An efficient discrete physics-informed neural networks for geometrically nonlinear topology optimization. Comput Meth Appl Mech Eng. 2025;442:118043. doi:10.1016/j.cma.2025.118043. [Google Scholar] [CrossRef]

117. Zhang Z, Lee JH, Sun L, Gu GX. Weak-formulated physics-informed modeling and optimization for heterogeneous digital materials. PNAS Nexus. 2024;3(5):pgae186. doi:10.1093/pnasnexus/pgae186. [Google Scholar] [PubMed] [CrossRef]

118. Zhao X, Mezzadri F, Wang T, Qian X. Physics-informed neural network based topology optimization through continuous adjoint. Struct Multidiscip Optim. 2024;67(8):143. doi:10.1007/s00158-024-03856-1. [Google Scholar] [CrossRef]

119. Zhu SP, Wang L, Luo C, Correia JAFO, De Jesus AMP, Berto F, et al. Physics-informed machine learning and its structural integrity applications: state of the art. Phil Trans R Soc A. 2023;381(2260):20220406. doi:10.1098/rsta.2022.0406. [Google Scholar] [PubMed] [CrossRef]

120. Bai Z, Song S. Structural reliability analysis based on neural networks with physics-informed training samples. Eng Appl Artif Intell. 2023;126(2):107157. doi:10.1016/j.engappai.2023.107157. [Google Scholar] [CrossRef]

121. Gu Y, Zhang C, Zhang P, Golub MV, Yu B. Enriched physics-informed neural networks for 2D in-plane crack analysis: theory and MATLAB code. Int J Solids Struct. 2023;276(1):112321. doi:10.1016/j.ijsolstr.2023.112321. [Google Scholar] [CrossRef]

122. Ning L, Cai Z, Dong H, Liu Y, Wang W. Physics-informed neural network frameworks for crack simulation based on minimized peridynamic potential energy. Comput Meth Appl Mech Eng. 2023;417(11):116430. doi:10.1016/j.cma.2023.116430. [Google Scholar] [CrossRef]

123. Yu XL, Zhou XP. A nonlocal energy-informed neural network for isotropic elastic solids with cracks under thermomechanical loads. Int J Numer Methods Eng. 2023;124(18):3935–63. doi:10.1002/nme.7296. [Google Scholar] [CrossRef]

124. Yu XL, Zhou XP. A nonlocal energy-informed neural network based on peridynamics for elastic solids with discontinuities. Comput Mech. 2024;73(2):233–55. doi:10.1007/s00466-023-02365-0. [Google Scholar] [CrossRef]

125. Guo X, Song Z. Physics-informed neural networks for linear elastic fracture mechanics: application of assessing stress intensity factor and transverse stress. Eng Appl Artif Intell. 2025;159:111699. doi:10.1016/j.engappai.2025.111699. [Google Scholar] [CrossRef]

126. Salvati E, Tognan A, Laurenti L, Pelegatti M, De Bona F. A defect-based physics-informed machine learning framework for fatigue finite life prediction in additive manufacturing. Mater Des. 2022;222(6):111089. doi:10.1016/j.matdes.2022.111089. [Google Scholar] [CrossRef]

127. Zhou W, Xu YF. Damage identification for plate structures using physics-informed neural networks. Mech Syst Signal Process. 2024;209(1):111111. doi:10.1016/j.ymssp.2024.111111. [Google Scholar] [CrossRef]

128. Liu H, Ding X, Liu J, Zhang Y, Zhang B, Li E, et al. A Physics-Informed Neural Network model for combined high and low cycle fatigue life prediction. Mech Mater. 2025;209:105429. doi:10.1016/j.mechmat.2025.105429. [Google Scholar] [CrossRef]

129. Zhai Q, Liu Z, Zhu P. A transfer learning enhanced physics-informed neural network for predicting fatigue life of megacasting alloy with limited data sizes. Int J Fatigue. 2025;200(5):109129. doi:10.1016/j.ijfatigue.2025.109129. [Google Scholar] [CrossRef]

130. Lu N, Liu X, Wang K, Wang H, Zeng W, Luo Y. Fatigue life prediction of weld joints in orthotropic steel bridge decks based on high-fidelity adaptive physics-informed neural networks. Eng Fail Anal. 2025;182(7):110024. doi:10.1016/j.engfailanal.2025.110024. [Google Scholar] [CrossRef]

131. Dong Y, Yang X, Chang D, Li Q. Predicting fatigue life of multi-defect materials using the fracture mechanics-based physics-informed neural network framework. Int J Fatigue. 2025;190:108626. doi:10.1016/j.ijfatigue.2024.108626. [Google Scholar] [CrossRef]

132. Akbari E, Chakherlou TN, Tabrizchi H, Mosavi A. Physics-informed neural networks for multiaxial fatigue life prediction of aluminum alloy. Comput Model Eng Sci. 2025;145(1):305–25. doi:10.32604/cmes.2025.068581. [Google Scholar] [CrossRef]

133. Zhang XC, Gong JG, Xuan FZ. A physics-informed neural network for creep-fatigue life prediction of components at elevated temperatures. Eng Fract Mech. 2021;258(5):108130. doi:10.1016/j.engfracmech.2021.108130. [Google Scholar] [CrossRef]

134. Chen D, Li Y, Liu K, Li Y. A physics-informed neural network approach to fatigue life prediction using small quantity of samples. Int J Fatigue. 2023;166(14):107270. doi:10.1016/j.ijfatigue.2022.107270. [Google Scholar] [CrossRef]

135. Abiria I, Wang C, Zhang Q, Liu C, Jin X. High-cycle and very-high-cycle fatigue life prediction in additive manufacturing using hybrid physics-informed neural networks. Eng Fract Mech. 2025;319:111026. doi:10.1016/j.engfracmech.2025.111026. [Google Scholar] [CrossRef]

136. Dang L, He X, Tang D, Xin H, Wu B. A fatigue life prediction framework of laser-directed energy deposition Ti-6Al-4V based on physics-informed neural network. Int J Struct Integr. 2025;16(2):327–54. doi:10.1108/ijsi-10-2024-0170. [Google Scholar] [CrossRef]

137. Feng F, Zhu T, Yang B, Zhang Z, Zhou S, Xiao S. Probabilistic fatigue life prediction in additive manufacturing materials with a physics-informed neural network framework. Expert Syst Appl. 2025;275:127098. doi:10.1016/j.eswa.2025.127098. [Google Scholar] [CrossRef]

138. Wang L, Zhu SP, Wu B, Xu Z, Luo C, Wang Q. Multi-fidelity physics-informed machine learning framework for fatigue life prediction of additive manufactured materials. Comput Meth Appl Mech Eng. 2025;439:117924. doi:10.1016/j.cma.2025.117924. [Google Scholar] [CrossRef]

139. Faegh M, Ghungrad S, Oliveira JP, Rao P, Haghighi A. A review on physics-informed machine learning for process-structure-property modeling in additive manufacturing. J Manuf Process. 2025;133:524–55. doi:10.1016/j.jmapro.2024.11.066. [Google Scholar] [CrossRef]

140. Smoqi Z, Gaikwad A, Bevans B, Kobir MH, Craig J, Abul-Haj A, et al. Monitoring and prediction of porosity in laser powder bed fusion using physics-informed meltpool signatures and machine learning. J Mater Process Technol. 2022;304:117550. doi:10.1016/j.jmatprotec.2022.117550. [Google Scholar] [CrossRef]

141. Skiadopoulos M, Kifer D, Shokouhi P. A transfer learning approach to the prediction of porosity in additively manufactured metallic components. NDT E Int. 2026;157(1–4):103531. doi:10.1016/j.ndteint.2025.103531. [Google Scholar] [CrossRef]

142. Faegh M, Ghungrad S, Haghighi A. A physics-informed neural network framework for decomposition and path planning in multi-laser additive manufacturing. Manuf Lett. 2025;44:1129–38. doi:10.1016/j.mfglet.2025.06.133. [Google Scholar] [CrossRef]

143. Meng X, Bachmann M, Yang F, Rethmeier M. Toward prediction and insight of porosity formation in laser welding: a physics-informed deep learning framework. Acta Mater. 2025;286(2):120740. doi:10.1016/j.actamat.2025.120740. [Google Scholar] [CrossRef]

144. Farrag A, Yang Y, Cao N, Won D, Jin Y. Physics-informed machine learning for metal additive manufacturing. Prog Addit Manuf. 2025;10(1):171–85. doi:10.1007/s40964-024-00612-1. [Google Scholar] [CrossRef]

145. Zhao W, Pang Z, Wang C, He W, Liang X, Song J, et al. Hybrid ANN-physical model for predicting residual stress and microhardness of metallic materials after laser shock peening. Opt Laser Technol. 2025;181:111750. doi:10.1016/j.optlastec.2024.111750. [Google Scholar] [CrossRef]

146. Zhang J, Ding C. Simple yet effective adaptive activation functions for physics-informed neural networks. Comput Phys Commun. 2025;307:109428. doi:10.1016/j.cpc.2024.109428. [Google Scholar] [CrossRef]

147. Cao G, Wang X. Parametric extended physics-informed neural networks for solid mechanics with complex mixed boundary conditions. J Mech Phys Solids. 2025;194:105944. doi:10.1016/j.jmps.2024.105944. [Google Scholar] [CrossRef]

148. Kapoor T, Wang H, Núñez A, Dollevoet R. Transfer learning for improved generalizability in causal physics-informed neural networks for beam simulations. Eng Appl Artif Intell. 2024;133:108085. doi:10.1016/j.engappai.2024.108085. [Google Scholar] [CrossRef]

149. Badia S, Li W, Martín AF. Compatible finite element interpolated neural networks. Comput Meth Appl Mech Eng. 2025;439(1):117889. doi:10.1016/j.cma.2025.117889. [Google Scholar] [CrossRef]

150. Xiong W, Long X, Bordas SPA, Jiang C. The deep finite element method: a deep learning framework integrating the physics-informed neural networks with the finite element method. Comput Meth Appl Mech Eng. 2025;436(6):117681. doi:10.1016/j.cma.2024.117681. [Google Scholar] [CrossRef]

151. Nath D, Ankit, Neog DR, Gautam SS. Application of machine learning and deep learning in finite element analysis: a comprehensive review. Arch Comput Meth Eng. 2024;31(5):2945–84. doi:10.1007/s11831-024-10063-0. [Google Scholar] [CrossRef]

152. Li H, Miao Y, Khodaei ZS, Aliabadi MH. Finite-PINN: a physics-informed neural network with finite geometric encoding for solid mechanics. J Mech Phys Solids. 2025;203(23):106222. doi:10.1016/j.jmps.2025.106222. [Google Scholar] [CrossRef]

153. Nguyen-Thanh VM, Anitescu C, Alajlan N, Rabczuk T, Zhuang X. Parametric deep energy approach for elasticity accounting for strain gradient effects. Comput Meth Appl Mech Eng. 2021;386(2):114096. doi:10.1016/j.cma.2021.114096. [Google Scholar] [CrossRef]

154. Chandrasekhar A, Suresh K. TOuNN: topology optimization using neural networks. Struct Multidiscip Optim. 2021;63(3):1135–49. doi:10.1007/s00158-020-02748-4. [Google Scholar] [CrossRef]

155. He J, Chadha C, Kushwaha S, Koric S, Abueidda D, Jasiuk I. Deep energy method in topology optimization applications. Acta Mech. 2023;234(4):1365–79. doi:10.1007/s00707-022-03449-3. [Google Scholar] [CrossRef]


Cite This Article

APA Style
Thawon, I., Vo, D., Bui, T.Q., Rattanamongkhonkun, K., Chamroon, C. et al. (2026). Physics-Informed Neural Networks: Current Progress and Challenges in Computational Solid and Structural Mechanics. Computer Modeling in Engineering & Sciences, 146(2), 2. https://doi.org/10.32604/cmes.2026.077044
Vancouver Style
Thawon I, Vo D, Bui TQ, Rattanamongkhonkun K, Chamroon C, Tippayawong N, et al. Physics-Informed Neural Networks: Current Progress and Challenges in Computational Solid and Structural Mechanics. Comput Model Eng Sci. 2026;146(2):2. https://doi.org/10.32604/cmes.2026.077044
IEEE Style
I. Thawon et al., “Physics-Informed Neural Networks: Current Progress and Challenges in Computational Solid and Structural Mechanics,” Comput. Model. Eng. Sci., vol. 146, no. 2, pp. 2, 2026. https://doi.org/10.32604/cmes.2026.077044


Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.