Variational Autoencoders (VAEs)

By: Vajratiya Vajrobol, International Center for AI and Cyber Security Research and Innovations (CCRI), Asia University, Taiwan, vvajratiya@gmail.com

Variational Autoencoders (VAEs) are a class of autoencoder and one of the foundational generative models in artificial intelligence. In contrast to conventional autoencoders, which learn a deterministic mapping to a compressed code, VAEs learn a probability distribution over the data in a lower-dimensional latent space. This stochastic formulation is what distinguishes VAEs and enables them to excel at producing a wide range of realistic data samples.

How Variational Autoencoders (VAEs) Work and What Makes Them Distinctive

Variational Autoencoders (VAEs) consist of two primary components: an encoder and a decoder. Rather than mapping an input to a single point, the encoder maps the input data to a probability distribution in the latent space, introducing randomness by predicting the parameters of that distribution, typically the mean and variance of a Gaussian [1]. The decoder then samples points from this distribution to produce new data. By encoding and decoding probabilistically, VAEs capture the intrinsic diversity and complexity of the data distribution [2], which makes them highly effective tools for generative tasks [3].
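To make this concrete, below is a minimal sketch of a VAE in PyTorch. It is an illustrative configuration rather than a reference implementation: the 784-dimensional input (loosely matching flattened 28×28 images), the hidden width, and the 2-dimensional latent space are all assumptions. The encoder predicts the mean and log-variance of a Gaussian over the latent space, the reparameterization trick draws a differentiable sample from it, and the decoder reconstructs the input from that sample.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE sketch: Gaussian latent space, Bernoulli-style decoder."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=2):
        super().__init__()
        # Encoder: maps the input to the parameters of q(z|x)
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        # Decoder: maps a latent sample back to data space
        self.dec = nn.Linear(latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.dec(z))
        return torch.sigmoid(self.out(h))  # per-dimension probabilities

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Negative ELBO: reconstruction error + KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

The KL term in the loss is what regularizes the latent space toward a standard normal prior, producing the smooth, well-structured latent region that can later be sampled for generation.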

Advantages of Variational Autoencoders (VAEs)

A key benefit of VAEs is their capacity to produce novel, meaningful samples by exploring the learned latent space [4]. This property makes them valuable in situations that require the creation of varied yet realistic data. VAEs also support tasks such as data imputation, reconstructing missing or corrupted data [5], and anomaly detection, identifying departures from learned patterns [6].
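As a brief illustration of exploring the latent space, the snippet below (reusing the hypothetical `VAE` class from the sketch above, left untrained here for simplicity) decodes points drawn from the standard normal prior to produce novel samples.

```python
# Generate novel samples by decoding points drawn from the prior N(0, I).
# Assumes the VAE class from the earlier sketch is in scope.
model = VAE()
model.eval()
with torch.no_grad():
    z = torch.randn(16, 2)     # 16 points in the 2-D latent space
    samples = model.decode(z)  # 16 generated 784-dimensional samples
print(samples.shape)           # torch.Size([16, 784])
```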

Illustrative Use Case: Image Generation and Synthesis

VAEs are commonly used for image generation and synthesis. By learning a probabilistic representation of a dataset, a VAE can generate new and varied images that share characteristics with its training data. This capability is widely used in creative work, such as producing artistic visuals [7], synthesizing novel facial images, or translating photos from one style to another. The learned latent space supports smooth interpolation between styles, offering a powerful tool for artistic expression and content generation.
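The smooth-interpolation idea can be sketched directly: encode two inputs, walk a straight line between their latent means, and decode each intermediate point. The snippet below reuses the hypothetical model from the earlier sketches; `x1` and `x2` are placeholder tensors standing in for two real images.

```python
# Smooth interpolation between two inputs via the learned latent space.
x1 = torch.rand(1, 784)  # placeholder; in practice, a real flattened image
x2 = torch.rand(1, 784)  # placeholder; a second image to blend toward

with torch.no_grad():
    mu1, _ = model.encode(x1)
    mu2, _ = model.encode(x2)
    for alpha in torch.linspace(0, 1, steps=8):
        z = (1 - alpha) * mu1 + alpha * mu2  # linear path in latent space
        blended = model.decode(z)            # decoding blends traits of x1 and x2
```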

Expanding the Scope of VAEs: Applications Beyond Image Generation

Although image generation is the best-known use case, Variational Autoencoders (VAEs) have practical applications in many other fields. They are used in anomaly detection to identify atypical patterns [6], in drug development to generate candidate chemical structures [8], and in speech synthesis to create natural-sounding voice samples [9]. VAEs are also valuable for data integration, denoising, and modeling the inherent variability of complex datasets [10]. This versatility underlines their significant role in advancing generative modeling and probabilistic learning in artificial intelligence.
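For anomaly detection, one common recipe is to score each input by its reconstruction error: inputs resembling the training distribution reconstruct well, while outliers do not. The sketch below reuses the hypothetical model from above; the mean-plus-two-standard-deviations cutoff is an assumed heuristic, not a prescribed threshold.

```python
# Flag anomalies by reconstruction error: inputs far from the learned
# distribution reconstruct poorly, so their error exceeds a chosen threshold.
def anomaly_score(model, x):
    with torch.no_grad():
        recon, mu, logvar = model(x)
        # Per-example reconstruction error, summed over input dimensions
        return F.binary_cross_entropy(recon, x, reduction="none").sum(dim=1)

x = torch.rand(32, 784)                        # placeholder batch of inputs
scores = anomaly_score(model, x)
threshold = scores.mean() + 2 * scores.std()   # simple heuristic cutoff
is_anomaly = scores > threshold                # boolean mask over the batch
```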

References:

  1. Doersch, C. (2016). Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908.
  2. Simidjievski, N., Bodnar, C., Tariq, I., Scherer, P., Andres Terre, H., Shams, Z., … & Liò, P. (2019). Variational autoencoders for cancer data integration: design principles and computational practice. Frontiers in Genetics, 10, 1205.
  3. Khan, S. H., Hayat, M., & Barnes, N. (2018, March). Adversarial training of variational auto-encoders for high fidelity image generation. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1312-1320). IEEE.
  4. Cristovao, P., Nakada, H., Tanimura, Y., & Asoh, H. (2020). Generating in-between images through learned latent space representation using variational autoencoders. IEEE Access, 8, 149456-149467.
  5. Fortuin, V., Baranchuk, D., Rätsch, G., & Mandt, S. (2020, June). GP-VAE: Deep probabilistic time series imputation. In International Conference on Artificial Intelligence and Statistics (pp. 1651-1661). PMLR.
  6. Zhou, Y., Liang, X., Zhang, W., Zhang, L., & Song, X. (2021). VAE-based deep SVDD for anomaly detection. Neurocomputing, 453, 131-140.
  7. Huang, H., He, R., Sun, Z., & Tan, T. (2018). IntroVAE: Introspective variational autoencoders for photographic image synthesis. Advances in Neural Information Processing Systems, 31.
  8. Ochiai, T., Inukai, T., Akiyama, M., Furui, K., Ohue, M., Matsumori, N., … & Sakakibara, Y. (2023). Variational autoencoder-based chemical latent space for large molecular structures with 3D complexity. Communications Chemistry, 6(1), 249.
  9. Akuzawa, K., Iwasawa, Y., & Matsuo, Y. (2018). Expressive speech synthesis via modeling expressions with variational autoencoder. arXiv preprint arXiv:1804.02135.
  10. Lee, M., Sohn, S. S., Moon, S., Yoon, S., Kapadia, M., & Pavlovic, V. (2022). MUSE-VAE: Multi-scale VAE for environment-aware long term trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2221-2230).
  11. Poonia, V., Goyal, M. K., Gupta, B. B., Gupta, A. K., Jha, S., & Das, J. (2021). Drought occurrence in different river basins of India and blockchain technology based framework for disaster management. Journal of Cleaner Production, 312, 127737.
  12. Gupta, B. B., & Sheng, Q. Z. (Eds.). (2019). Machine learning for computer and cyber security: principle, algorithms, and practices. CRC Press.
  13. Singh, A., & Gupta, B. B. (2022). Distributed denial-of-service (DDoS) attacks and defense mechanisms in various web-enabled computing platforms: issues, challenges, and future research directions. International Journal on Semantic Web and Information Systems (IJSWIS), 18(1), 1-43.

Cite As:

Vajrobol V. (2024) Variational Autoencoders (VAEs), Insights2Techinfo, pp.1
