
Browsing by Author "Manisha, U.K.D.N."

Now showing 1 - 3 of 3
    Driver Assist Traffic Signs Detection and Recognition System
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Manisha, U.K.D.N.; Liyanage, S.R.
Traffic signs, or road signs, are signs placed along roads to inform drivers and pedestrians about the conditions ahead. Road signs were introduced in Europe in the 1930s as vehicle use increased, and many countries have since standardized their signs to improve the safety of road users. As the number of vehicles in the world continues to grow, so does road traffic. Particularly in urban areas, pedestrian activity along the road is generally high, and drivers may lose sight of traffic signs among nearby vehicles and pedestrians. Notice boards of various colors and textures at the roadside can also make it hard to pick out traffic signs by eye. Missing or violating traffic signs can lead drivers into accidents, as well as into unnecessary problems such as legal penalties. To ensure a safer and more convenient drive, traffic sign recognition has been automated. Computer vision, an interdisciplinary field concerned with enabling computers to gain high-level understanding from digital images, is a promising approach to this problem. The first automated traffic sign recognition system was reported in Japan in 1984, and a number of detection and recognition methods have been developed since. This paper presents a 'Driver Assist Traffic Signs Detection and Recognition System' that is capable of detecting, recognizing, and indicating roadside traffic signs to the driver, ensuring a safe and convenient drive by acknowledging the behavior of the road. The proposed system consists of two phases: a detection phase and a recognition phase.
In both phases I used classifiers based on different technologies: computer-vision image-processing techniques for detection and machine-learning techniques for recognition. In the detection phase I used a cascade classifier to analyze each frame of the input and find the traffic signs in it. To train the classifier I provided over 3,000 positive sample images with regions of interest (ROIs) containing traffic signs, and over 15,000 negative sample images containing no traffic signs. Haar-like features of the images were used to train the classifier to a suitable false-alarm rate. The aspect ratio of most 3D objects changes with the position of the camera, and since the classifier is very sensitive to the aspect ratio of a traffic sign, I used as many training images as possible so that the training set covers nearly all orientations of the signs. The main objective of the detection phase is to detect the presence of traffic signs and return the coordinates of each sign in every frame. In the recognition phase I used machine-learning techniques to train a category classifier, a support vector machine (SVM), to recognize and indicate the signs detected by the detector. Histogram of Oriented Gradients (HOG) features were extracted from the training sets and stored in separate classes as separate categories to train the SVM. Each set of coordinates returned by the detector is used to crop the original frame into an input image for the category classifier, which gives a separate score for each category by matching the HOG features of the image. The highest score gives the nearest category, and I obtained an optimal score threshold to ensure the accuracy of the recognition phase.
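The Haar-like features used in the detection phase are differences of rectangular pixel sums, which can be computed in constant time from an integral image. Below is a minimal numpy sketch of a two-rectangle feature; it illustrates the technique only, not the paper's trained cascade, and the function names and the vertical window split are assumptions:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns; the sum of any
    rectangle can then be read off in constant time."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1], using the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_haar_feature(img, r0, c0, r1, c1):
    """Two-rectangle Haar-like feature: sum of the left half of the
    window minus the sum of the right half (split down the middle)."""
    ii = integral_image(img.astype(np.int64))
    mid = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, mid) - rect_sum(ii, r0, mid, r1, c1)
```

A cascade classifier evaluates thousands of such features at many window positions and scales, rejecting non-sign windows early.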
The main objective of the recognition phase is to choose the correct category for each traffic sign detected by the detector and to indicate that category. In the detection phase I evaluated LBP and HOG as feature-extraction methods alongside Haar-like features, and found that Haar-like features gave the highest accuracy. In the recognition phase I chose 11 categories of traffic signs for the training process, and obtained an optimal score threshold of -0.04 for the best recognition accuracy. The proposed system can detect, recognize, and indicate traffic signs with high accuracy not only in daylight but also at night, and can be implemented in any vehicle. The detection process achieves over 88% accuracy, and the recognition process classifies the category of a detected sign with over 98% accuracy. In real-time testing the overall system achieves over 88% accuracy at speeds of 45-50 km/h.
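The recognition step described above, per-category linear scores over HOG features with the best-scoring category accepted only above a threshold, can be sketched as follows. This assumes a linear one-vs-rest SVM; the weights, biases, and the -0.04 threshold stand in for the trained model:

```python
import numpy as np

def classify_sign(hog_vector, weights, biases, threshold=-0.04):
    """Score a HOG feature vector against each category's linear
    SVM (w.x + b); return the best category index, or None when
    even the best score falls below the acceptance threshold."""
    scores = weights @ hog_vector + biases    # one score per category
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None                           # reject: no confident match
    return best
```

Rejecting low-scoring windows keeps false detections from the cascade stage out of the final indications.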
    A Prototype P300 BCI Communicator for Sinhala Language
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka., 2018) Manisha, U.K.D.N.; Liyanage, S.R.
A Brain-Computer Interface (BCI) is a communication system that enables its users to send commands to a computer using only brain activity. This brain activity is generally measured by electroencephalography (EEG) and processed by a system using machine-learning algorithms to recognize patterns in the EEG data. The P300 event-related potential is an evoked response to an external stimulus that is observed in scalp-recorded EEG. The P300 response has proven to be a reliable signal for controlling a BCI. A P300 speller presents a selection of characters arranged in a matrix. The user focuses attention on one of the character cells of the matrix while each row and column of the matrix is intensified in a random sequence. The row and column intensifications that intersect at the attended cell represent the target stimuli. The rare presentation of the target stimuli within the random sequence of stimuli constitutes an oddball paradigm and elicits a P300 response to the target stimuli. The Emotiv EPOC provides an affordable platform for BCI applications. In this study a speller application for Sinhala language characters was developed for Emotiv users and tested. Classification of the P300 waveform was carried out using a dynamically weighted combination of classifiers. A mean letter-classification accuracy of 84.53% and a mean P300-classification accuracy of 89.88% were achieved on a dataset collected from three users.
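The oddball-paradigm logic above can be illustrated with a simple averaging detector: the P300 evoked by the attended row and column survives averaging across trials, while background EEG tends to cancel out. This numpy sketch assumes synthetic epoch arrays and a fixed P300 window; it is not the dynamically weighted classifier combination used in the study:

```python
import numpy as np

def pick_target_cell(epochs, p300_window):
    """epochs: array of shape (12, n_trials, n_samples), one set of
    post-stimulus EEG epochs per intensification (6 rows, then 6
    columns). Averaging across trials suppresses background EEG; the
    P300 evoked by the attended row/column remains as a positive
    deflection inside p300_window (a sample slice, roughly the
    250-450 ms post-stimulus region)."""
    avg = epochs.mean(axis=1)                   # (12, n_samples)
    scores = avg[:, p300_window].mean(axis=1)   # mean amplitude in window
    row = int(np.argmax(scores[:6]))            # best of the 6 rows
    col = int(np.argmax(scores[6:]))            # best of the 6 columns
    return row, col                             # attended matrix cell
```

In practice a trained classifier replaces the plain window mean, but the row/column intersection step is the same.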
    Sinhala Character Recognition using Tesseract OCR
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka., 2018) Manisha, U.K.D.N.; Liyanage, S.R.
In Sri Lanka, many fields use Sinhala script, such as newspaper editing, writing, and postal and application processing. These fields often have only scanned or printed copies of Sinhala script, which must be entered manually into a computerized system, consuming much time and cost. The proposed method consists of two stages: image pre-processing and training the OCR classifier. In image pre-processing, the scanned images were enhanced and binarized using image-processing techniques such as grayscale conversion and binarization with local thresholding. Distortions in the scanned images, such as watermarks and highlights, were removed through grayscale conversion with color-intensity adjustments. In OCR training, the Tesseract OCR engine was used to create a Sinhala language data file, which was then used with a customized function to detect Sinhala characters in scanned documents. The OCR engine was primarily used to create the language data file. First, the pre-processed images were segmented (white letters on a black background) using local adaptive thresholding, applying Otsu's thresholding algorithm to separate the text from the background. Page-layout analysis was then performed to identify non-text areas, such as images, and to split multi-column text into columns. Baselines and words were then detected using blob analysis, in which each blob is sorted using the x-coordinate (the left edge of the blob) as the sort key, making it possible to track skew across the page. After each character was separated, it was labeled manually as a Sinhala language character. Given the Sinhala language data file, the OCR function returns the recognized text, the recognition confidence, and the location of the text in the original image. By considering the recognition confidence of each word it is possible to control the accuracy of the system.
The classifier was trained using 40 character sets with 20 images per character, and was tested on over 1,000 characters (200 words) with varying font sizes, achieving approximately 97% accuracy. The elapsed time was less than 0.05 per line of more than 20 words, a large improvement over manual data entry. Since the classifier can be retrained on test images, the system can be developed toward active learning.
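The Otsu binarization used in pre-processing (separating dark ink from the background, then presenting white letters on black) can be sketched from scratch in numpy. This is an illustration of the algorithm, not Tesseract's implementation, and the function names are assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance
    for a grayscale image with values in 0..255 (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue                                # one class empty
        mu0 = (np.arange(t) * prob[:t]).sum() / w0  # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_text(gray):
    """White letters on black background: dark ink below the Otsu
    threshold becomes foreground (255), paper becomes 0."""
    t = otsu_threshold(gray)
    return np.where(gray < t, 255, 0).astype(np.uint8)
```

Tesseract applies this kind of thresholding adaptively over local regions, which copes better with uneven scan illumination than a single global threshold.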

DSpace software copyright © 2002-2025 LYRASIS
