DEEP LEARNING ANALYSIS OF 3D VOLUME OF BRAIN ANATOMY USING MRI
Mrs. J. Hima Bindu, Geddada Lohita, Mohammed Abdul Sarfaraz, Sindhu Ravula
Abstract: The aim of artificial intelligence (AI) is to develop machines that behave in a manner similar to humans. Beyond pattern recognition, planning, and problem solving, AI encompasses a broad range of computing tasks. Deep learning is a family of algorithms used within machine learning. With the help of magnetic resonance imaging (MRI), researchers apply these techniques to develop models that can detect and classify brain tumors. This enables fast and reliable identification of such tumors, allowing doctors to provide timely and effective treatment to patients. Brain tumors are largely the result of aberrant brain-cell proliferation, which can damage the structure of the brain and ultimately lead to malignant brain cancer. Early identification of brain tumors, followed by appropriate treatment, may therefore lower the death rate. This work proposes a generative adversarial network (GAN) architecture for the efficient identification of brain tumors from MRI images. The project discusses several existing methods, including ResNet-50, VGG16, and Inception V3, and evaluates the proposed architecture against these models. To assess the performance of the models, we considered several metrics: accuracy, recall, loss, and area under the curve (AUC). After comparing the different methods with the proposed model using these metrics, we concluded that the proposed model performed better than the others. We may therefore infer that the proposed model is reliable for the early detection of a variety of brain tumors. In this project we also intend to apply additional datasets from 2022.
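The evaluation metrics named above (accuracy, recall, and AUC) can be computed directly from model predictions. The following is an illustrative NumPy sketch, not the project's actual evaluation code; the toy labels and scores are hypothetical examples chosen only to exercise the formulas.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return float(np.mean(y_true == y_pred))

def recall(y_true, y_pred):
    """True-positive rate: TP / (TP + FN) on the positive (tumor) class."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return float(tp / (tp + fn))

def auc_roc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation; assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Hypothetical labels and classifier scores (1 = tumor present).
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
y_pred = (scores >= 0.5).astype(int)  # hard predictions at a 0.5 cutoff
```

Accuracy and recall are computed from thresholded predictions, while AUC uses the raw scores, so it reflects ranking quality independent of the chosen cutoff.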
The architecture proceeds in stages: data collection; preprocessing with resizing and median filtering; segmentation by thresholding or clustering; and finally training with a GAN, which also augments the dataset to improve accuracy.
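The preprocessing and segmentation stages above can be sketched as follows. This is a minimal pure-NumPy illustration on a toy 2D slice, not the project's implementation: the median filter suppresses isolated noise, and a simple mean-intensity threshold then produces a binary mask.

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D image (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def threshold_segment(img, thresh=None):
    """Binary segmentation; the mean intensity is a simple default threshold."""
    if thresh is None:
        thresh = img.mean()
    return (img > thresh).astype(np.uint8)

# Toy slice: a bright 4x4 region on a dark background, plus one noisy pixel.
slice_ = np.zeros((8, 8))
slice_[2:6, 2:6] = 1.0
slice_[0, 0] = 1.0          # isolated "salt" noise pixel
den = median_filter(slice_)  # noise pixel removed by the median
mask = threshold_segment(den)
```

In practice, resizing would precede filtering, and clustering (e.g. k-means on intensities) could replace the fixed threshold; the flow of denoise-then-segment stays the same.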