Generating Artistic Images Through Neural Style Transfer Using Machine Learning and Image Manipulation Techniques
Author 1: Lajapathi Durai R A, Post Graduate Student, Embedded Systems,
Hindusthan College of Engineering and Technology, Coimbatore.
Co-Author 2: Mr. Joshua Daniel, M.E., Assistant Professor, Department of Electrical and Electronics Engineering, Hindusthan College of Engineering and Technology, Coimbatore.
Co-Author 3: Dr. K. Mathan, M.E., Ph.D., Professor, Department of Electrical and Electronics Engineering, Hindusthan College of Engineering and Technology, Coimbatore.
ABSTRACT:
Neural Style Transfer (NST) is a computational technique that merges the content of one image with the stylistic characteristics of another, creating visually appealing compositions. Leveraging deep learning and image processing, the method separates and recombines the content and style of input images using convolutional neural networks (CNNs). The content representation of an image is captured by the high-level features extracted from a pre-trained CNN, while its style is characterized by the statistical correlations of features at multiple network layers. This work explores the fusion of machine learning and image processing techniques in NST. Initially introduced by Gatys et al., NST has gained popularity for its ability to generate artistic renditions and novel visualizations. The process extracts content and style features from separate images and optimizes a generated image to minimize the differences between their content and style representations. This work examines the underlying concepts and technical aspects of NST, discussing the use of pre-trained CNN models such as VGG19 or ResNet for feature extraction and how optimization algorithms, such as gradient descent, iteratively refine the generated image to achieve a fusion of content and style. Additionally, the impact of hyperparameter and layer choices on the quality of the stylized outputs is examined. A particular focus is the reduction of the total variation loss incurred during style transfer. The experiments were carried out with VGG19, Xception, and EfficientNetB7. The findings show how convolutional neural networks learn deep image representations and how these representations may be used for sophisticated image synthesis and modification.
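The three loss terms summarized above (content, style, and total variation) can be sketched in a few lines of NumPy. This is a minimal illustration operating on raw activation arrays; in the actual pipeline the feature maps would come from selected layers of a pre-trained network such as VGG19, and the weighting coefficients shown in the comments are illustrative, not values reported in this work.

```python
import numpy as np

def gram_matrix(features):
    # Style representation: channel-wise feature correlations.
    # features: (H, W, C) activation map from one CNN layer.
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def content_loss(gen_feats, content_feats):
    # Mean squared difference of activations at a deep (content) layer.
    return np.mean((gen_feats - content_feats) ** 2)

def style_loss(gen_feats, style_feats):
    # Mean squared difference between Gram matrices at a style layer.
    return np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

def total_variation_loss(img):
    # Penalizes high-frequency noise: squared differences between
    # vertically and horizontally adjacent pixels of the generated image.
    dh = img[1:, :, :] - img[:-1, :, :]
    dw = img[:, 1:, :] - img[:, :-1, :]
    return np.sum(dh ** 2) + np.sum(dw ** 2)

# The total objective minimized by gradient descent is a weighted sum,
# e.g. alpha * content + beta * style + gamma * total_variation,
# where alpha, beta, gamma are hyperparameters tuned per experiment.
```

Gradient descent then updates the pixels of the generated image to reduce this weighted sum, which is why the choice of layers (which features define "content" and "style") and of the weights directly shapes the stylized output.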