2025 List of Final Year BE/B.Tech/M.Tech/MCA Image Processing Projects

Project Code: IP1
Abstract: Although iris recognition has achieved great success in biometric identification in recent years, difficulties in collecting high-resolution iris images and in segmenting valid regions prevent it from being applied in large-scale practical applications. In this paper, we present an eye recognition framework based on deep learning, which relaxes the data collection procedure, improves the anti-fake quality, and improves the performance of biometric identification. Specifically, we propose and train a mixed convolutional and residual network (MiCoRe-Net) for the eye recognition task. This architecture inserts a convolutional layer between every two residual layers and takes advantage of both convolutional networks and residual networks. Experimental results show that the proposed approach achieves accuracies of 99.08% and 96.12% on the CASIA-Iris-IntervalV4 and UBIRIS.v2 datasets, respectively, outperforming other classical classifiers and deep neural networks with other architectures.
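
A minimal PyTorch sketch of the core architectural idea (a plain convolutional layer interleaved between residual blocks). The channel counts, depths, and classifier head are illustrative assumptions, not the paper's exact MiCoRe-Net configuration.

```python
# Sketch of a mixed convolutional/residual stack: a plain conv layer is
# inserted between residual blocks, loosely following the MiCoRe-Net idea
# described above. All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity shortcut

class MixedConvResNet(nn.Module):
    def __init__(self, num_classes=100, channels=64, num_pairs=3):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_pairs):
            layers += [ResidualBlock(channels),
                       # plain convolutional layer between two residual blocks
                       nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True),
                       ResidualBlock(channels)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

# e.g. a batch of grayscale eye crops
logits = MixedConvResNet()(torch.randn(2, 1, 128, 128))
```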

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP3
Abstract: The word steganography is derived from the Greek words steganos (στεγανός), meaning “covered, hidden or protected”, and graphein (γράφειν), meaning “writing”. The main purpose of steganography is to prevent hidden data from being obtained by unauthorized persons. To fulfil this basic purpose, a steganographic method should make minimal changes to the cover file. In this study, the LSB method, one of the methods for concealing information in digital images, is examined, and a new data hiding method is proposed in order to minimize the changes made to the cover file while hiding data with the LSB method and to create the most appropriate mask to make it difficult to obtain the hidden data.
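
A generic LSB embedding/extraction sketch in NumPy to illustrate the baseline the paper builds on; the proposed masking scheme itself is not implemented here.

```python
# Illustrative LSB embedding: hide a byte string in the least significant
# bits of a grayscale cover image represented as a NumPy array.
import numpy as np

def lsb_embed(cover, payload: bytes):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()                       # copy of the cover pixels
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bytes: int):
    bits = stego.flatten()[:n_bytes * 8] & 1     # read the LSBs back
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego, 6) == b"secret"
```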

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP4
Abstract: Information hiding aims to embed secret data into multimedia, such as image, audio, video, and text. In this paper, two new quantum information hiding approaches are put forward. A quantum steganography approach is proposed to hide a quantum secret image in a quantum cover image. The quantum secret image is first encrypted using a controlled-NOT gate to ensure the security of the embedded data. The encrypted secret image is embedded into the quantum cover image using the two most and least significant qubits. In addition, a quantum image watermarking approach is presented to hide a quantum gray-scale watermark image in a quantum carrier image. The quantum watermark image, which is scrambled by utilizing Arnold's cat map, is then embedded into the quantum carrier image using the two least and most significant qubits. Only the watermarked image and the key are sufficient to extract the embedded quantum watermark image. The proposed approaches are illustrated using a scenario of sharing medical imagery between two remote hospitals. The simulation and analysis demonstrate that the two newly proposed approaches have excellent visual quality as well as high embedding capacity and security.
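
The Arnold cat map scrambling mentioned above is a classical permutation; the sketch below applies it to an ordinary NumPy image. The quantum representation and embedding steps are outside the scope of a short example.

```python
# Classical Arnold cat map scrambling of an N x N image: the pixel at (x, y)
# moves to ((x + y) mod N, (x + 2y) mod N). The transform is invertible, so
# the scrambled watermark can be unscrambled after extraction.
import numpy as np

def arnold_scramble(img, iterations=1):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        new = np.empty_like(out)
        new[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = new
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
scrambled = arnold_scramble(img, iterations=3)
```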

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP5
Abstract: Reversible data hiding in encrypted images (RDHEI) is an effective technique for embedding data in the encrypted domain. An original image is encrypted with a secret key, and during or after its transmission it is possible to embed additional information in the encrypted image without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest; indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches: a high-capacity reversible data hiding approach with correction of prediction errors, and a high-capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state-of-the-art methods, both in terms of reconstructed image quality and embedding capacity.
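
A toy NumPy illustration of the MSB-prediction idea only: each pixel's most significant bit is predicted from its left neighbour, and positions where the prediction succeeds can carry an embedded bit while remaining reversible. This is a deliberate simplification, not the paper's full RDHEI scheme.

```python
# Toy MSB-prediction sketch: count how many MSBs are predictable from the
# left neighbour; those positions are candidates for reversible embedding.
import numpy as np

img = np.random.randint(0, 256, (4, 8), dtype=np.uint8)
msb = img >> 7                          # most significant bit plane
pred = np.roll(msb, 1, axis=1)          # predict each MSB from the left pixel
pred[:, 0] = msb[:, 0]                  # first column has no predictor
prediction_error = msb != pred          # positions needing correction info
capacity_bits = int((~prediction_error).sum())
print("embeddable positions:", capacity_bits)
```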

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP6
Abstract: The aim of this paper is to maximize the range of the access control of visual secret sharing (VSS) schemes encrypting multiple images. First, the formulation of access structures for a single secret is generalized to that for multiple secrets. This generalization is maximal in the sense that the generalized formulation makes no restrictions on access structures; in particular, it includes the existing ones as special cases. Next, a sufficient condition to be satisfied by the encryption of VSS schemes realizing an access structure for multiple secrets of the most general form is introduced, and two constructions of VSS schemes with encryption satisfying this condition are provided. Each of the two constructions has its advantage against the other; one is more general and can generate VSS schemes with strictly better contrast and pixel expansion than the other, while the other has a straightforward implementation. Moreover, for threshold access structures, the pixel expansions of VSS schemes generated by the latter construction are estimated and turn out to be the same as those of the existing schemes called the threshold multiple-secret visual cryptographic schemes. Finally, the optimality of the former construction is examined, giving that there exist access structures for which it generates no optimal VSS schemes.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP7
Abstract: With the increase of terrorist threats around the world, human identification research has become a sought-after area of research. Unlike standard biometric recognition techniques, gait recognition is a non-intrusive technique: both data collection and classification can be done without a subject's cooperation. In this paper, we propose a new model-based gait recognition technique called posture-based gait recognition. It consists of two elements: posture-based features and posture-based classification. Posture-based features are composed of the displacements of all joints between current and adjacent frames and the center-of-body (CoB) relative coordinates of all joints, where the coordinates of each joint come from its relative position to four joints: the hip-center, hip-left, hip-right, and spine joints, from the front forward. The CoB relative coordinate system is a critical part in handling the issue of different observation angles. In posture-based classification, the posture-based gait features of all frames are considered, and the dominant subject becomes the classification result. The posture-based gait recognition technique outperforms the existing techniques in both fixed-direction and freestyle walk scenarios, where turning around and changing directions are involved. This suggests that a set of postures and quick movements are sufficient to identify a person. The proposed technique also performs well under the gallery-size test and the cumulative match characteristic test, which implies that the posture-based gait recognition technique is not sensitive to gallery size and is a good potential tool for forensic and surveillance use.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP8
Abstract: The palmprint is one of the increasingly popular biometric modalities. This modality is rich with information such as minutiae, ridges, wrinkles, and creases. Our research team is interested in investigating the creases for biometric identification. The palmprint images in this research were captured using a commercially available consumer scanner. For each palmprint image, two square regions are extracted for biometric identification: one from the hypothenar region and the other from the interdigital region. Due to misalignment of the hand, the extraction of these regions is tedious and time-consuming. Therefore, in this paper, a computer-aided method is proposed to simplify the extraction process. The user only needs to mark two points on the palmprint image; based on these points, the palmprint image is aligned and the two regions are extracted automatically.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP9
Abstract: In this paper, we present a method (Action-Fusion) for human action recognition from depth maps and posture data using convolutional neural networks (CNNs). Two input descriptors are used for action representation. The first input is a depth motion image that accumulates consecutive depth maps of a human action, whilst the second input is a proposed moving-joints descriptor which represents the motion of body joints over time. In order to maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs. The first channel is trained with depth motion images (DMIs), the second channel is trained with both DMIs and moving-joint descriptors together, and the third channel is trained with moving-joint descriptors only. The action predictions generated from the three CNN channels are fused together for the final action classification. We propose several fusion score operations to maximize the score of the correct action. The experiments show that fusing the outputs of the three channels gives better results than using one channel or fusing two channels only. Our proposed method was evaluated on three public datasets: 1) the Microsoft action 3-D dataset (MSRAction3D); 2) the University of Texas at Dallas multimodal human action dataset; and 3) the multimodal action dataset (MAD). The testing results indicate that the proposed approach outperforms most of the existing state-of-the-art methods, such as histogram of oriented 4-D normals and Actionlet on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared to existing RGB-D action datasets, this paper surpasses a state-of-the-art method on that dataset by 6.84%.
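
A small NumPy sketch of the first descriptor, the depth motion image (DMI), as described above: absolute frame-to-frame depth differences accumulated over the sequence. The frames here are synthetic stand-ins.

```python
# Depth motion image sketch: accumulate |D_t - D_{t-1}| over a depth sequence
# to obtain a single motion-energy image that can feed one CNN channel.
import numpy as np

def depth_motion_image(depth_frames):
    """depth_frames: array-like of 2-D depth maps with identical shape."""
    frames = np.asarray(depth_frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0))   # |D_t - D_{t-1}|
    return diffs.sum(axis=0)                  # accumulated motion energy

frames = np.random.rand(30, 240, 320).astype(np.float32)  # 30 synthetic depth maps
dmi = depth_motion_image(frames)              # one 240x320 image per action clip
```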

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP10
Abstract: This paper presents fast categorization, or classification, of images on an animal dataset using different classification algorithms in combination with manifold learning algorithms. The paper focuses on comparing the effects of different non-linear dimensionality reduction algorithms on the speed and accuracy of different classification algorithms. It examines how manifold learning algorithms can improve classification speed by reducing the number of features in the vector representation of images while keeping the classification accuracy high.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP11
Abstract: A biometric system offers automatic identification of an individual based on a unique feature or characteristic possessed by that person. Various biometric techniques exist at present, viz. fingerprint, iris, face, and so on. Iris segmentation is one of the most dominant types, used in all Aadhaar card applications, and has major applications in the field of surveillance as well as for security purposes. The performance of iris recognition systems depends heavily on feature extraction, histogram, and segmentation with normalization techniques. In this survey paper, a concise review of the related research work is presented.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP12
Abstract: A real-world animal biometric system that detects and describes animal life in image and video data is an emerging subject in machine vision. These systems develop computer vision approaches for the classification of animals. A novel method for animal face classification based on score-level fusion of recently popular convolutional neural network (CNN) features and appearance-based descriptor features is presented. This method utilises a score-level fusion of two different approaches: one uses a CNN, which can automatically extract features, learn and classify them, and the other uses kernel Fisher analysis (KFA) for its feature extraction phase. The proposed method may also be used in other areas of image classification and object recognition. The experimental results show that automatic feature extraction in the CNN is better than other simple feature extraction techniques (both local- and appearance-based features), and additionally, an appropriate score-level combination of CNN and simple features can achieve even higher accuracy than applying the CNN alone. The authors show that the score-level fusion of CNN-extracted features and the appearance-based KFA method has a positive effect on classification accuracy. The proposed method achieves a 95.31% classification rate on animal faces, which is significantly better than the other state-of-the-art methods.
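
A generic score-level fusion sketch: min-max normalise the per-class scores of two matchers (e.g. a CNN and a KFA-based matcher) and combine them with a weighted sum. The weight and scores are illustrative assumptions, not the paper's tuned values.

```python
# Score-level fusion sketch: normalise two score vectors and combine them.
import numpy as np

def minmax(scores):
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(cnn_scores, kfa_scores, w=0.6):
    fused = w * minmax(cnn_scores) + (1 - w) * minmax(kfa_scores)
    return int(np.argmax(fused))              # predicted class index

cnn = np.array([2.1, 0.3, 1.7])               # raw CNN scores for 3 classes
kfa = np.array([0.40, 0.05, 0.55])            # raw KFA matcher scores
print("fused prediction:", fuse_scores(cnn, kfa))
```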

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP13
Abstract: Smile detection from unconstrained facial images is a specialized and challenging problem. As one of the most informative expressions, smiles convey basic underlying emotions, such as happiness and satisfaction, and lead to multiple applications, such as human behavior analysis and interactive controlling. Compared to the size of databases for face recognition, far less labeled data is available for training smile detection systems. This paper proposes an efficient transfer learning-based smile detection approach to leverage the large amount of labeled data from face recognition datasets and to alleviate overfitting on smile detection. A well-trained deep face recognition model is explored and fine-tuned for smile detection in the wild, unlike previous works which use either hand-engineered features or train deep convolutional networks from scratch. Three different models are built by fine-tuning the face recognition model with different inputs, including aligned, unaligned and grayscale images generated from the GENKI-4K dataset. Experiments show that the proposed approach achieves improved state-of-the-art performance. The robustness of the model to noise and blur artifacts is also evaluated in this paper.
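
A minimal transfer-learning sketch in PyTorch: freeze a pretrained backbone and fine-tune only a new two-class head. The paper fine-tunes a deep face recognition model on GENKI-4K; an ImageNet-pretrained torchvision ResNet-18 (torchvision >= 0.13 weights API, which downloads weights) is substituted here purely for illustration.

```python
# Fine-tuning sketch: frozen pretrained features + new smile/non-smile head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                   # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2) # new smile / non-smile head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # stand-in for aligned face crops
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()                               # gradients flow only to the head
optimizer.step()
```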

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP14
Abstract: In order to improve biological characteristic analysis and recognition efficiency, a biometric image analysis method is used for biological recognition, and biometric identification software is designed to improve the dynamic recognition ability for biological images; a design method for biometric recognition software based on image processing is proposed. First, infrared optical scanning technology is used to scan the biological images, and the edge contour features of the collected images are extracted. Texture feature information is then extracted from the biological images, and the extracted biometric information is fed into the embedded software system. In the software design, the biometric recognition software uses a real-time-triggered PXI-6713 card for biological image acquisition, software development is carried out on an embedded Linux driver kernel, the biometric identification software system is developed using a DSP and a programmable logic controller (PLC), and the optimization design of the biometric software is completed. The test results show that the biometric identification system designed in this paper has a good ability to identify biological images, and the accuracy of biometric recognition is high.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP15
Abstract: To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.
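
As a point of reference for the false-match problem described above, the sketch below shows the conventional baseline: SIFT matching with Lowe's ratio test followed by RANSAC-based geometric consistency verification (the step the proposed OR-GCD verification improves on). OpenCV with the SIFT module is assumed; the OR-GCD descriptor itself is not implemented here.

```python
# Baseline SIFT matching with ratio-test filtering and RANSAC verification.
import cv2
import numpy as np

def verified_matches(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]          # Lowe's ratio test
    if len(good) < 4:
        return []
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```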

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP16
Abstract: Unconstrained face recognition is still an open problem, as the state-of-the-art algorithms have not yet reached high recognition performance in real-world environments. This paper addresses this problem by proposing a new approach called the sparse fingerprint classification algorithm (SFCA). In the training phase, for each enrolled subject, a grid of patches is extracted from each of the subject's face images in order to construct representative dictionaries. In the testing phase, a grid is extracted from the query image and every patch is transformed into a binary sparse representation using the dictionary, creating a fingerprint of the face. The binary coefficients vote for their corresponding classes and the maximum-vote class decides the identity of the query image. Experiments were carried out on seven widely-used face databases. The results demonstrate that when the size of the data set is small or medium (e.g., the number of subjects is not greater than one hundred), SFCA is able to deal with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size, and distance from the camera than other current state-of-the-art algorithms.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP17
Abstract: Biometric access control systems are becoming more commonplace in society. However, these systems are susceptible to replay attacks. During a replay attack, an attacker can capture packets of data that represent an individual's biometric. The attacker can then replay the data and gain unauthorized access to the system. Traditional password-based systems have the ability to use a one-time password scheme, which allows a unique password to authenticate an individual and is then disposed of; any captured password will not be effective. Traditional biometric systems use a single feature extraction method to represent an individual, making captured data harder to change than a password. There are hashing techniques that can be used to transmute biometric data into a unique form, but techniques like this require some external dongle to work successfully. The technique proposed in this work can uniquely represent individuals with each access attempt. The number of unique representations is further increased by a genetic feature selection technique that uses a unique subset of biometric features. The features are extracted with an improved genetic-based extraction technique that performed well on periocular images. The results in this manuscript show that the improved extraction technique coupled with the feature selection technique has improved identification performance compared with the traditional genetic-based extraction approach. The features are also shown to be unique enough to determine that a replay attack is occurring, compared with a more traditional feature extraction technique.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP18
Abstract: The traditional password conversion scheme for user authentication is to transform passwords into hash values. These hash-based password schemes are comparatively simple and fast because they are based on text and well-known cryptography. However, they can be exposed to cyber-attacks that use password-cracking tools or online hash-cracking sites; attackers can recover an original password from its hash value when the password is relatively simple and plain. As a result, many hacking incidents have happened, predominantly in systems adopting such hash-based schemes. In this work, we suggest an enhanced password processing scheme based on images and visual cryptography (VC). Different from the traditional scheme based on hash and text, our scheme transforms a user ID of text type into two images encrypted by VC. The user makes two images consisting of subpixels using a random function with a SEED which includes personal information. The server only stores the user's ID and one of the images instead of a password. When the user logs in and sends the other image, the server can extract the ID by utilizing OCR (Optical Character Recognition). As a result, it can authenticate the user by comparing the extracted ID with the saved one. Our proposal has lower computation cost, prevents cyber-attacks aimed at hash cracking, and supports authentication without exposing personal information such as the ID to attackers.
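
For illustration, the sketch below shows a classic (2,2) visual cryptography building block in NumPy: a binary secret is split into two random-looking shares. This is a generic VC primitive under simple assumptions, not the paper's exact subpixel/SEED construction or its OCR step.

```python
# (2,2) random-grid visual cryptography sketch: either share alone is
# uniformly random; XOR of the two shares reconstructs the secret exactly,
# while physically stacking (OR-ing) them gives a noisier but visible result.
import numpy as np

def make_shares(secret_bits, rng=None):
    """secret_bits: 2-D array of 0/1 (1 = black)."""
    rng = rng or np.random.default_rng()
    share1 = rng.integers(0, 2, size=secret_bits.shape, dtype=np.uint8)
    # white pixel -> shares identical, black pixel -> shares complementary
    share2 = np.where(secret_bits == 0, share1, 1 - share1).astype(np.uint8)
    return share1, share2

secret = (np.random.rand(8, 8) > 0.5).astype(np.uint8)
s1, s2 = make_shares(secret)
recovered = s1 ^ s2
assert np.array_equal(recovered, secret)
```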

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP19
Abstract: Human action recognition has been well explored in applications of computer vision. Many successful action recognition methods have shown that action knowledge can be effectively learned from motion videos or still images. For the same action, the appropriate action knowledge learned from different types of media, e.g., videos or images, may be related. However, less effort has been made to improve the performance of action recognition in videos by adapting the action knowledge conveyed from images to videos. Most of the existing video action recognition methods suffer from the problem of lacking sufficient labeled training videos. In such cases, over-fitting would be a potential problem and the performance of action recognition is restrained. In this paper, we propose an adaptation method to enhance action recognition in videos by adapting knowledge from images. The adapted knowledge is utilized to learn the correlated action semantics by exploring the common components of both labeled videos and images. Meanwhile, we extend the adaptation method to a semi-supervised framework which can leverage both labeled and unlabeled videos. Thus, the over-fitting can be alleviated and the performance of action recognition is improved. Experiments on public benchmark datasets and real-world datasets show that our method outperforms several other state-of-the-art action recognition methods.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP20
Abstract: Photo sharing is an attractive feature which popularizes online social networks (OSNs). Unfortunately, it may leak users' privacy if they are allowed to post, comment, and tag a photo freely. In this paper, we attempt to address this issue and study the scenario in which a user shares a photo containing individuals other than himself/herself (termed a co-photo for short). To prevent possible privacy leakage from a photo, we design a mechanism to enable each individual in a photo to be aware of the posting activity and participate in the decision making on the photo posting. For this purpose, we need an efficient facial recognition (FR) system that can recognize everyone in the photo. However, more demanding privacy settings may limit the number of photos publicly available to train the FR system. To deal with this dilemma, our mechanism attempts to utilize users' private photos to design a personalized FR system specifically trained to differentiate possible photo co-owners without leaking their privacy. We also develop a distributed consensus-based method to reduce the computational complexity and protect the private training set. We show that our system is superior to other possible approaches in terms of recognition ratio and efficiency. Our mechanism is implemented as a proof-of-concept Android application on Facebook's platform.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP21
Abstract: In medical diagnostic applications, early defect detection is a crucial task as it provides critical insight into diagnosis. Medical imaging is an actively developing field in engineering, and Magnetic Resonance Imaging (MRI) is one of the reliable imaging techniques on which medical diagnosis is based. Manual inspection of these images is a tedious job, as the amount of data and the minute details are hard for a human to recognize, so automating these techniques is very crucial. In this paper, we propose a method which can be utilized to make tumor detection easier. MRI deals with the complicated problem of brain tumor detection; due to its complexity and variance, achieving better accuracy is a challenge. Using the AdaBoost machine learning algorithm, we can improve on the accuracy issue. The proposed system consists of three parts: preprocessing, feature extraction, and classification. Preprocessing removes noise from the raw data, GLCM (Gray Level Co-occurrence Matrix) is used for feature extraction, and a boosting technique (AdaBoost) is used for classification.
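
A minimal sketch of the described pipeline, assuming scikit-image (>= 0.19 function names) and scikit-learn are available: GLCM texture features per image, then an AdaBoost classifier. The data here are random stand-ins, not MRI scans, and preprocessing is omitted.

```python
# GLCM features + AdaBoost classification sketch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import AdaBoostClassifier

def glcm_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# stand-in data: random 64x64 "slices" with random tumor / non-tumor labels
X = np.array([glcm_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(40)])
y = np.random.randint(0, 2, 40)
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))
```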

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP22
Abstract: The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, it is challenging to achieve fully automatic lesion detection and segmentation with acceptable accuracy due to the heterogeneity of lung lesions. Here, we propose a novel toboggan-based growing automatic segmentation approach (TBGA) with a three-step framework: automatic initial seed point selection, multi-constraint 3D lesion extraction, and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it provides a high lesion detection sensitivity (96.35%) and a segmentation accuracy comparable with manual segmentation (P > 0.05), which was proved by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with the commonly used level set and skeleton graph cut methods, respectively. The results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for one lesion segmentation was under 8 s using our new method. In conclusion, we believe that the novel TBGA can achieve robust, efficient and accurate lung lesion segmentation in CT images automatically.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP23
Abstract: PassBYOP is a new graphical password scheme for public terminals that replaces the static digital images typically used in graphical password systems with personalized physical tokens, herein in the form of digital pictures displayed on a physical user-owned device such as a mobile phone. Users present these images to a system camera and then enter their password as a sequence of selections on live video of the token. Highly distinctive optical features are extracted from these selections and used as the password. We present three feasibility studies of PassBYOP examining its reliability, usability, and security against observation. The reliability study shows that image-feature-based passwords are viable and suggests appropriate system thresholds: password items should contain a minimum of seven features, 40% of which must geometrically match the originals stored on an authentication server in order to be judged equivalent. The usability study measures task completion times and error rates, revealing these to be 7.5 s and 9%, broadly comparable with prior graphical password systems that use static digital images. Finally, the security study highlights PassBYOP's resistance to observation attacks: three attackers were unable to compromise a password using shoulder surfing, camera-based observation, or malware. These results indicate that PassBYOP shows promise for security while maintaining the usability of current graphical password schemes.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP24
Abstract: This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP25
Abstract: Heterogeneous face recognition (HFR) aims to identify a person from different facial modalities, such as visible and near-infrared images. The main challenges of HFR lie in the large modality discrepancy and insufficient training samples. In this paper, we propose the mutual component convolutional neural network (MC-CNN), a modal-invariant deep learning framework, to tackle these two issues simultaneously. Our MC-CNN incorporates a generative module, i.e., the mutual component analysis (MCA), into modern deep CNNs by viewing MCA as a special fully connected (FC) layer. Based on deep features, this FC layer is designed to extract modal-independent hidden factors and is updated according to maximum likelihood analytic formulation instead of back propagation which prevents overfitting from limited data naturally. In addition, we develop an MCA loss to update the network for modal-invariant feature learning. Extensive experiments show that our MC-CNN outperforms several fine-tuned baseline models significantly. Our methods achieve the state-of-the-art performance on the CASIA NIR-VIS 2.0, CUHK NIR-VIS, and IIIT-D Sketch datasets.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP26
Abstract: The development of automation and electrification and of autonomous robotics and vehicles has a wide range of uses in space science and in self-driving cars. An important part of an autonomous vehicle is its navigation system. Over the past decades, vision-based guidance systems have become more popular. By automatically sensing the tire-road interaction of the vehicle, the terrain can be classified; this type of technique was used in rovers and pathfinders and has now come to automobiles. In this project, a deep learning technique is used to detect curved paths for autonomous vehicles. A customized lane detection algorithm is implemented to detect the curvature of the lane, and a ground-truth labelling toolbox for deep learning is used to detect the curved path. By mapping point to point in each frame, 80-90% computing efficiency and accuracy is achieved in detecting the path.
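
A minimal sketch of the lane-curvature step only, assuming the lane pixels have already been extracted (by thresholding, perspective warping, or the deep-learning labelling mentioned above): fit a second-order polynomial to the lane pixels and evaluate the radius of curvature.

```python
# Lane curvature sketch: fit x = a*y^2 + b*y + c to lane pixels and compute
# the radius of curvature at a chosen image row.
import numpy as np

def radius_of_curvature(lane_xs, lane_ys, y_eval):
    a, b, _ = np.polyfit(lane_ys, lane_xs, 2)           # x = a*y^2 + b*y + c
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)

ys = np.linspace(0, 719, 50)                            # synthetic lane pixels
xs = 3e-4 * ys ** 2 + 0.1 * ys + 200 + np.random.randn(50)
print("radius (pixels):", radius_of_curvature(xs, ys, y_eval=719))
```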

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP27
Abstract: The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional neural network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and a higher computation cost. In this paper, we replaced standard convolution with depthwise separable convolution, which reduces the parameter count and computation cost. The implemented models were trained with an open dataset consisting of 14 different plant species, 38 different categorical disease classes, and healthy plant leaves. To evaluate the performance of the models, different parameters such as batch size, dropout, and different numbers of epochs were incorporated. The implemented models achieved disease-classification accuracy rates of 98.42%, 99.11%, 97.02%, and 99.56% using InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNetB0, respectively, which were greater than those of traditional handcrafted-feature-based approaches. In comparison with other deep-learning models, the implemented model achieved better performance in terms of accuracy and required less training time. Moreover, the MobileNetV2 architecture is compatible with mobile devices using the optimized parameters. The accuracy results in the identification of diseases show that the deep CNN model is promising and can greatly impact the efficient identification of diseases, and may have potential in the real-time detection of diseases in agricultural systems.
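
A minimal PyTorch sketch of the depthwise separable convolution referred to above, contrasted with a standard convolution of the same shape; the channel sizes are illustrative.

```python
# Depthwise separable convolution: a per-channel (depthwise) 3x3 conv followed
# by a 1x1 pointwise conv, which cuts parameters and multiply-adds substantially.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))   # 73856 vs 8960 parameters
x = torch.randn(1, 64, 56, 56)
assert standard(x).shape == separable(x).shape   # same output shape
```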

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP28
Abstract: Recent technological advancements in the fields of Image Processing and Natural Language Processing are focusing on developing smart systems to improve the quality of life. In this work, an effective approach is suggested for text recognition and extraction from images and for text-to-speech conversion. The incoming image is first enhanced by employing grayscale conversion. Afterwards, the text regions of the enhanced image are detected by employing the Maximally Stable Extremal Regions (MSER) feature detector. The next step is to apply geometric filtering in combination with the stroke width transform (SWT) to efficiently collect and filter text regions in the image. The non-text MSERs are removed by using geometric properties and the stroke width transform. Subsequently, individual letters are grouped to detect text sequences, which are then fragmented into words. Finally, Optical Character Recognition (OCR) is employed to digitize the words. In the final step, the detected text is fed into a text-to-speech synthesizer (TTS) for text-to-speech conversion. The proposed algorithm is tested on images ranging from documents to natural scenes. Promising results are reported which prove the accuracy and robustness of the proposed framework and encourage its practical implementation in real-world applications.
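
A condensed sketch of this pipeline, assuming OpenCV, pytesseract (which needs a local Tesseract install), and pyttsx3 are available; the SWT and geometric-filtering stages are omitted for brevity.

```python
# MSER text-region detection -> OCR -> text-to-speech sketch.
import cv2
import pytesseract
import pyttsx3

def read_image_aloud(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)            # candidate text regions
    print(f"{len(regions)} MSER regions detected")
    text = pytesseract.image_to_string(gray)         # OCR on the enhanced image
    print("recognised text:", text.strip())
    engine = pyttsx3.init()                          # offline TTS engine
    engine.say(text)
    engine.runAndWait()

# read_image_aloud("sign_board.jpg")                 # example usage
```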

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP29
Abstract: Our eyes help us identify faces and recognize individuals. Blind and visually challenged people, especially those whose eye blood vessels and retina are damaged, are unable to do so. Technology has been advancing rapidly, and with every new development of science and technology the ease of living increases, yet it rarely addresses a person's physical disability. Developing aids for blind and visually impaired people is not a newly addressed problem; however, developing a computer-based solution for this goal is still an evolving area. Computer vision is on its way to reaching the top in creating vision for robots, yet it is not a replacement for human sight. The objective of this framework is to help the user recognize people without the assistance of a third person and without the other person introducing himself. Although there are various works using computer vision techniques, no existing method satisfies all the essential requirements of a visually impaired person; all existing systems are developed only for a specific need. In this paper, we put forward a new theoretical design which combines the important basic functions and adds some extra functions for supporting visually impaired people. This new framework may address some of the crucial problems of blind people that are still present. We also describe difficulties encountered during the current research which may require further research and development. We have developed a face recognition application on a Raspberry Pi with support for audio output to report the results to the user (the blind person), which shows that the proposed method is promising for real-time use.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP30
Abstract: In this paper, we tackle the problem of car license plate detection and recognition in natural scene images. We propose a unified deep neural network which can localize license plates and recognize the letters simultaneously in a single forward pass. The whole network can be trained end-to-end. In contrast to existing approaches which take license plate detection and recognition as two separate tasks and settle them step by step, our method jointly solves these two tasks by a single network. It not only avoids intermediate error accumulation but also accelerates the processing speed. For performance evaluation, four data sets including images captured from various scenes under different conditions are tested. Extensive experiments show the effectiveness and the efficiency of our proposed approach.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP31
Abstract: A convolutional neural network (CNN) depicts the transformation of an input image through a series of convolutions and other non-linear stages for recognition and classification purposes. This method has gained popularity for its significant contributions to computer vision applications and to the improvement of the state of the art. With CNNs, the softmax loss is used as the traditional loss function. This loss function allows deep features of distinct classes to be separated and promotes the effective training of deep neural networks. An improvement in the discriminative power of CNNs for face recognition was recently reported, where the softmax and center losses were jointly used as a supervisory loss signal. In this paper, it is shown that such a supervisory loss function is not optimal for human activity recognition, and hence a new likelihood regularization term is proposed, aimed at improving the feature discriminative power of CNN models. This regularization term is modeled from a Bayesian distribution for the posterior estimation of the class probability density. The regularization term is shown to improve discrimination between different classes, and it is capable of maximizing the distance between different classes and minimizing distances within the same class in human activity recognition. The results obtained on the KTH and Weizmann datasets were encouraging.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP32
Abstract: It is well known that blinking, yawning, and heart rate changes give clues about a human's mental state, such as drowsiness and fatigue. In this paper, image sequences, as the raw data, are captured from smartphones which serve as non-contact optical sensors. Video streams containing the subject's facial region are analyzed to identify the physiological sources that are mixed in each image. We then propose a method to extract blood volume pulse, eye blink, and yawn signals as multiple independent sources simultaneously by multi-channel second-order blind identification (SOBI), without any other sophisticated processing such as eye and mouth localization. An overall decision is made by analyzing the separated source signals in parallel to determine the driver's driving state. The robustness of the proposed method is tested under various illumination contexts and a variety of head motion modes. Experiments on 15 subjects show that multi-channel SOBI presents a promising framework to accurately detect drowsiness by merging multi-physiological information in a less complex way.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP33
Abstract: Random walk is a popular and efficient algorithm for image segmentation, especially for extracting regions of interest (ROIs). One difficulty with the random walk algorithm is the requirement for solving a huge sparse linear system when applied to large images. Another limitation is its sensitivity to seeds distribution, i.e., the segmentation result depends on the number of seeds as well as their placement, which puts a burden on users. In this paper, we first propose a continuous random walk model with explicit coherence regularization (CRWCR) for the extracted ROI, which helps to reduce the seeds sensitivity, so as to reduce the user interactions. Then, a very efficient algorithm to solve the CRWCR model will be developed, which helps to remove the difficulty of solving huge linear systems. Our algorithm consists of two stages: initialization by performing one-dimensional random walk sweeping based on user-provided seeds, followed by the alternating direction scheme, i.e., Peaceman-Rachford scheme for further correction. The first stage aims to provide a good initial guess for the ROI, and it is very fast since we just solve a limited number of one-dimensional random walk problems. Then, this initial guess is evolved to the ideal solution by applying the second stage, which should also be very efficient since it fits well for GPU computing, and 10 iterations are usually sufficient for convergence. Numerical experiments are provided to validate the proposed model as well as the efficiency of the two-stage algorithm.
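
For reference, the sketch below runs the classical seed-based random walker from scikit-image on a synthetic image; the CRWCR model and the two-stage solver proposed in the paper are not implemented here.

```python
# Seed-based random walker segmentation with scikit-image.
# Marker labels: 1 = ROI seed, 2 = background seed, 0 = unknown.
import numpy as np
from skimage.segmentation import random_walker

image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0
image += 0.3 * np.random.randn(64, 64)               # noisy synthetic ROI

markers = np.zeros_like(image, dtype=np.uint8)
markers[32, 32] = 1                                   # single ROI seed
markers[4, 4] = 2                                     # single background seed

labels = random_walker(image, markers, beta=130, mode="bf")
roi_mask = labels == 1                                # extracted region of interest
```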

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP34
Abstract: Defocus blur detection is an important and challenging task in the computer vision and digital imaging fields. Previous work on defocus blur detection has put a lot of effort into designing local sharpness metric maps. This paper presents a simple yet effective method to automatically obtain the local metric map for defocus blur detection, which is based on feature learning with multiple convolutional neural networks (ConvNets). The ConvNets automatically learn the most locally relevant features at the super-pixel level of the image in a supervised manner. By extracting convolution kernels from the trained neural network structures and processing them with principal component analysis, we can automatically obtain the local sharpness metric by reshaping the principal component vector. Meanwhile, an effective iterative updating mechanism is proposed to refine the defocus blur detection result from coarse to fine by exploiting the intrinsic peculiarity of the hyperbolic tangent function. The experimental results demonstrate that our proposed method consistently performs better than the previous state-of-the-art methods.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: IP35
Abstract: The land cover reconstruction from monochromatic historical aerial images is a challenging task that has recently attracted an increasing interest from the scientific community with the proliferation of large-scale epidemiological studies involving retrospective analysis of spatial patterns. However, the efforts made by the computer vision community in remote-sensing applications are mostly focused on prospective approaches through the analysis of high-resolution multi-spectral data acquired by the advanced spatial programs. Hence, four contributions are proposed in this paper. They aim at providing a comparison basis for the future development of computer vision algorithms applied to the automation of the land cover reconstruction from monochromatic historical aerial images. First, a new multi-scale multi-date dataset composed of 4.9 million non-overlapping annotated patches of the France territory between 1970 and 1990 has been created with the help of geography experts. This dataset has been named HistAerial. Second, an extensive comparison study of the state-of-the-art texture features extraction and classification algorithms, including deep convolutional neural networks (DCNNs), has been performed. It is presented in the form of an evaluation. Third, a novel low-dimensional local texture filter named rotated-corner local binary pattern (R-CRLBP) is presented as a simplification of the binary gradient contours filter through the use of an orthogonal combination representation. Finally, a novel combination of low-dimensional texture descriptors, including the R-CRLBP filter, is introduced as a light combination of local binary patterns (LCoLBPs). The LCoLBP filter achieved state-of-the-art results on the HistAerial dataset while conserving a relatively low-dimensional feature vector space compared with the DCNN approaches (17 times shorter).

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Project Code: RBIM100
Abstract: Biometric access control systems are becoming more commonplace in society. However, these systems are susceptible to replay attacks. During a replay attack, an attacker can capture packets of data that represent an individual's biometric. The attacker can then replay the data and gain unauthorized access to the system. Traditional password-based systems have the ability to use a one-time password scheme, which allows a unique password to authenticate an individual and is then disposed of; any captured password will not be effective. Traditional biometric systems use a single feature extraction method to represent an individual, making captured data harder to change than a password. There are hashing techniques that can be used to transmute biometric data into a unique form, but techniques like this require some external dongle to work successfully. The technique proposed in this work can uniquely represent individuals with each access attempt. The number of unique representations is further increased by a genetic feature selection technique that uses a unique subset of biometric features. The features are extracted with an improved genetic-based extraction technique that performed well on periocular images. The results in this manuscript show that the improved extraction technique coupled with the feature selection technique has improved identification performance compared with the traditional genetic-based extraction approach. The features are also shown to be unique enough to determine that a replay attack is occurring, compared with a more traditional feature extraction technique.

For IEEE Paper & Synopsis contact: 9886468236 / 7022390800

Software Project List


Final Year Projects for the Academic Year 2025. For a Project Synopsis, click here or call 9886468236 / 7022390800


9886468236
