Search Results (1)
Open Access

    ARTICLE

    Hyperparameter Tuning for Deep Neural Networks Based Optimization Algorithm

    D. Vidyabharathi*, V. Mohanraj

    Intelligent Automation & Soft Computing, Vol.36, No.3, pp. 2559-2573, 2023, DOI:10.32604/iasc.2023.032255

    Abstract: For training present-day Neural Network (NN) models, the standard technique is to use decaying Learning Rates (LR). While the majority of these techniques start with a large LR, they decay it multiple times over the course of training. Decaying has been shown to improve both generalization and optimization. Other hyperparameters, such as the network’s size, the number of hidden layers, dropout to avoid overfitting, and batch size, are chosen solely by heuristics. This work proposes an Adaptive Teaching Learning Based (ATLB) Heuristic to identify the optimal hyperparameters for diverse networks. Here we consider three architectures: Recurrent Neural Networks (RNN), …
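
    As a concrete illustration of the step-decay schedule the abstract describes, the sketch below is a minimal Python example, not taken from the paper; the function name and the constants are assumptions. It starts from a large LR and divides it at fixed intervals:

        def step_decay_lr(initial_lr, decay_factor, decay_every, epoch):
            # Illustrative helper (not from the paper): start from a large
            # initial LR and decay it multiple times over training, once
            # every decay_every epochs.
            return initial_lr * (decay_factor ** (epoch // decay_every))

        # Example: LR starts at 0.1 and is divided by 10 every 30 epochs,
        # yielding 0.1 for epochs 0-29, 0.01 for epochs 30-59, and so on.
        for epoch in (0, 29, 30, 60, 90):
            print(epoch, step_decay_lr(0.1, 0.1, 30, epoch))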
