Systematic Review of Advancing Machine Learning Through Cross-Domain Analysis of Unlabeled Data

Authors

  • Yue Zhu Georgia Institute of Technology, GA USA
  • Johnathan Crowell Independent Researcher, USA

DOI:

https://doi.org/10.55662/JST.2023.4104

Keywords:

Self-Supervised Learning, Pretext Tasks, Representation Learning, Contrastive Learning, Generative Models, Masked Language Modeling, Transfer Learning, Domain Adaptation, Multi-Modal Learning, Data Efficiency

Abstract

Self-supervised learning (SSL) has become a transformative approach in the field of machine learning, offering a powerful means to harness the vast amounts of unlabeled data available across various domains. By creating auxiliary tasks that generate supervisory signals directly from the data, SSL mitigates the dependency on large, labeled datasets, thereby expanding the applicability of machine learning models. This paper provides a comprehensive exploration of SSL techniques applied to diverse data types, including images, text, audio, and time-series data. We delve into the underlying principles that drive SSL, examine common methodologies, and highlight specific algorithms tailored to each data type. Additionally, we address the unique challenges encountered in applying SSL across different domains and propose future research directions that could further enhance the capabilities and effectiveness of SSL. Through this analysis, we underscore SSL's potential to significantly advance the development of robust, generalizable models capable of tackling complex real-world problems.
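To make the abstract's notion of "auxiliary tasks that generate supervisory signals directly from the data" concrete, the sketch below implements the NT-Xent objective used in contrastive methods such as SimCLR (Chen et al., 2020, cited in the references). This is an illustrative NumPy sketch by the editor, not code from the paper; the function name, the toy embedding shapes, and the temperature value are all illustrative choices.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N inputs. Row i of z1 and row i of z2 form a positive pair;
    every other row in the 2N-sample batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity

    n = z1.shape[0]
    # Index of each row's positive partner: i pairs with i + n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row's softmax against its positive partner.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Because the supervisory signal comes entirely from pairing two views of the same input, no labels are needed: embeddings whose paired views agree yield a lower loss than embeddings paired at random, which is the gradient signal that shapes the learned representation.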

Downloads

Download data is not yet available.

References

van den Oord, A., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.

Jaiswal, A., Babu, A. R., Zadeh, M. Z., Banerjee, D., & Makedon, F. (2020). A survey on contrastive self-supervised learning. Technologies, 9(1), 2. https://doi.org/10.3390/technologies9010002

Jin, W., Derr, T., Liu, H., Wang, Y., Wang, S., Liu, Z., & Tang, J. (2020). Self-supervised learning on graphs: Deep insights and new direction. arXiv preprint arXiv:2006.10141.

Krishnan, R., Rajpurkar, P., & Topol, E. J. (2022). Self-supervised learning in medicine and healthcare. Nature Biomedical Engineering, 6(12), 1346-1352. https://doi.org/10.1038/s41551-022-00914-1

Wang, Y., Albrecht, C. M., Braham, N. A. A., Mou, L., & Zhu, X. X. (2022). Self-supervised learning in remote sensing: A review. IEEE Geoscience and Remote Sensing Magazine, 10(4), 213-247. https://doi.org/10.1109/MGRS.2022.3198244

Tung, H.-Y., Tung, H.-W., Yumer, E., & Fragkiadaki, K. (2017). Self-supervised learning of motion capture. Advances in Neural Information Processing Systems, 30, 5234-5245.

Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI Technical Report.

Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019). wav2vec: Unsupervised pre-training for speech recognition. In Proceedings of Interspeech 2019. https://doi.org/10.21437/Interspeech.2019-1873

Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (pp. 1597-1607). PMLR.

Chen, X., & He, K. (2021). Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15750-15758). https://doi.org/10.1109/CVPR46437.2021.01549

He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9729-9738). https://doi.org/10.1109/CVPR42600.2020.00975

Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., ... & Munos, R. (2020). Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33, 21271-21284.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019 (pp. 4171-4186). https://doi.org/10.18653/v1/N19-1423

Baevski, A., Zhou, Y., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33, 12449-12460.

Liu, A. T., Li, S.-W., & Lee, H.-Y. (2021). TERA: Self-supervised learning of transformer encoder representation for speech. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 2351-2366. https://doi.org/10.1109/TASLP.2021.3095662

Baevski, A., Auli, M., & Mohamed, A. (2019). Effectiveness of self-supervised pre-training for speech recognition. arXiv preprint arXiv:1911.03912. https://doi.org/10.1109/ICASSP40776.2020.9054224

Mohamed, A., Hsu, W.-N., Xiong, W., Pino, J., Wang, Y., Song, Z., ... & Auli, M. (2022). Self-supervised speech representation learning: A review. IEEE Journal of Selected Topics in Signal Processing, 16(6), 1179-1210. https://doi.org/10.1109/JSTSP.2022.3207050

Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). Time-series representation learning via temporal and contextual contrasting. arXiv preprint arXiv:2106.14112. https://doi.org/10.24963/ijcai.2021/324

Franceschi, J.-Y., Dieuleveut, A., & Jaggi, M. (2019). Unsupervised scalable representation learning for multivariate time series. Advances in Neural Information Processing Systems, 32, 84-94.

Bhattacharjee, A., Karami, M., & Liu, H. (2022). Text transformations in contrastive self-supervised learning: A review. arXiv preprint arXiv:2203.12000. https://doi.org/10.24963/ijcai.2022/757

Noroozi, M., Vinjimoor, A., Favaro, P., & Pirsiavash, H. (2018). Boosting self-supervised learning via knowledge transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 9359-9367). https://doi.org/10.1109/CVPR.2018.00975

Kumar, P., Rawat, P., & Chauhan, S. (2022). Contrastive self-supervised learning: Review, progress, challenges and future research directions. International Journal of Multimedia Information Retrieval, 11(4), 461-488. https://doi.org/10.1007/s13735-022-00245-6

Baevski, A., Hsu, W.-N., Xu, Q., Babu, A., Gu, J., & Auli, M. (2022). Data2vec: A general framework for self-supervised learning in speech, vision, and language. In International Conference on Machine Learning (pp. 1298-1312). PMLR.

Reyes-Ortiz, J., Anguita, D., Ghio, A., Oneto, L., & Parra, X. (2012). Human activity recognition using smartphones. UCI Machine Learning Repository. https://doi.org/10.24432/C54S4K

Published

20-01-2023

How to Cite

Zhu, Y., and J. Crowell. “Systematic Review of Advancing Machine Learning Through Cross-Domain Analysis of Unlabeled Data”. Journal of Science & Technology, vol. 4, no. 1, Jan. 2023, pp. 136-55, doi:10.55662/JST.2023.4104.

License Terms

Ownership and Licensing:

Authors of research papers submitted to the Journal of Science & Technology retain copyright of their work while granting the journal a right of first publication. At the same time, authors agree to license their papers under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License.

License Permissions:

Under the CC BY-NC-SA 4.0 License, others are permitted to share and adapt the work, as long as proper attribution is given to the authors and acknowledgement is made of the initial publication in the Journal of Science & Technology. This license allows for the broad dissemination and utilization of research papers.

Additional Distribution Arrangements:

Authors are free to enter into separate contractual arrangements for the non-exclusive distribution of the journal's published version of the work. This may include posting the work to institutional repositories, publishing it in journals or books, or other forms of dissemination. In such cases, authors are requested to acknowledge the initial publication of the work in the Journal of Science & Technology.

Online Posting:

Authors are encouraged to share their work online, including in institutional repositories, disciplinary repositories, or on their personal websites. This permission applies both prior to and during the submission process to the Journal of Science & Technology. Online sharing enhances the visibility and accessibility of the research papers.

Responsibility and Liability:

Authors are responsible for ensuring that their research papers do not infringe upon the copyright, privacy, or other rights of any third party. The Journal of Science & Technology and The Science Brigade Publishers disclaim any liability or responsibility for any copyright infringement or violation of third-party rights in the research papers.
