TY - EJOU
AU - Mahmoud, Hanan A. Hosni
AU - Hafez, Alaaeldin M.
AU - Alabdulkreem, Eatedal
TI - Language-Independent Text Tokenization Using Unsupervised Deep Learning
T2 - Intelligent Automation & Soft Computing
PY - 2023
VL - 35
IS - 1
SN - 2326-005X
AB - Language-independent text tokenization can aid in the classification of low-resource languages. There is a global research effort to generate text classification for any language. Human text classification is a slow procedure; consequently, machine text classification for generating text summaries in different languages has been considered in recent years. There is no research on machine text classification for many languages such as Czech, Rome, and Urdu. This research proposes a cross-language text tokenization model using a Transformer technique. The proposed Transformer employs an encoder with ten layers, each containing a self-attention encoding sublayer and a feedforward sublayer. This model improves the efficiency of text classification by providing a draft text classification for a number of documents. We also propose a novel sub-word tokenization model based on frequent vocabulary usage in the documents. The Sub-Word Byte-Pair Tokenization technique (SBPT) utilizes the sharing of the vocabulary of one sentence with other sentences. The sub-word tokenization model enhances the performance of other sub-word tokenization models, such as the byte-pair encoding model, by +10% on the precision metric.
KW - Text classification
KW - Language-independent tokenization
KW - Sub-word tokenization
DO - 10.32604/iasc.2023.026235
ER -