Research Article

Effortless and beneficial processing of natural languages using transformers
