ICACT 2018

Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/18944

  • Mobile Biometrics: The Next Generation Authentication in Cloud-Based Databases
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Bhatt, C.; Liyanage, S.R.
    In this era of information technology, mobile phones are widely used around the world, not only for basic communication but also as tools for managing information anywhere and at any time. These scenarios demand a high level of security for personal data and privacy protection, through personal identification against unauthorized use in case of theft or fraudulent use in a networked society. At present, the most widely adopted method is verification of a Personal Identification Number (PIN), which is problematic and may not be secure enough to meet this requirement. As reported in a survey (Clarke and Furnell, 2005), many mobile phone users regard the PIN as inconvenient, a password that is complicated enough is easily forgotten, and very few users change their PIN regularly for higher security. Consequently, it is preferable to apply biometrics to secure mobile phones and to improve the reliability of wireless services. Because biometrics aims to recognize a person using unique features of human physiological or behavioral characteristics, such as fingerprints, voice, face, iris, gait and signature, this authentication method naturally provides a high level of security. Conventionally, biometrics works with dedicated devices, for instance infrared cameras for acquiring iris images or acceleration sensors for gait acquisition, and relies on large-scale computer servers to perform the identification computations, which suffers from several problems including bulky size, operational complexity and extremely high cost. Adding a wireless dimension to biometric identification provides a more efficient and reliable method of identity management across criminal justice and civil markets. Yet deploying cost-effective portable devices with the ability to capture biometric identifiers, such as fingerprints and facial images, is only part of the solution. An end-to-end, standards-based approach is required to deliver operational efficiencies, optimize resources and impact the bottom line. While the use of mobile biometric solutions has evolved in step with the larger biometrics market for some time, the growing ubiquity of smartphones and the rapid and dramatic improvements in their features and performance are accelerating the trend. This is the right time to take a closer look at mobile biometrics and to investigate in greater depth how they can be used to their potential. Combined with advanced sensing platforms that can detect physiological signals and produce a variety of measurements, many biometric techniques can be implemented on mobile phones, which opens up a wide range of possible applications, for example personal privacy protection, mobile banking transaction security, and telemedicine monitoring. The use of sensor data collected by mobile phones for biometric identification and authentication is an emerging frontier that remains to be progressively explored. We review the state-of-the-art technologies for mobile biometrics in this research.
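    To make the idea of sensor-based mobile biometrics concrete, the following minimal Python sketch shows one way accelerometer traces captured by a phone could be reduced to simple statistical features and used to tell two users apart. The window length, feature set, classifier choice and synthetic data are illustrative assumptions and are not taken from the work reviewed here.

        # Illustrative sketch only: gait-style authentication from accelerometer
        # windows, using hand-picked statistical features and an SVM. Window size,
        # features and classifier are assumptions made for this example.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        def window_features(acc, window=128):
            """Split an (n_samples, 3) accelerometer trace into fixed windows and
            compute per-axis mean, per-axis standard deviation and mean magnitude."""
            feats = []
            for start in range(0, len(acc) - window + 1, window):
                w = acc[start:start + window]
                feats.append(np.concatenate([w.mean(axis=0),
                                             w.std(axis=0),
                                             [np.linalg.norm(w, axis=1).mean()]]))
            return np.array(feats)

        # Synthetic traces standing in for walking data from two different users.
        rng = np.random.default_rng(0)
        user_a = rng.normal(0.0, 1.0, size=(4096, 3))
        user_b = rng.normal(0.3, 1.2, size=(4096, 3))

        X = np.vstack([window_features(user_a), window_features(user_b)])
        y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print("verification accuracy:", accuracy_score(y_te, clf.predict(X_te)))

    In a real deployment the features would come from the phone's own motion sensors, and the decision would be a same-user versus impostor verification rather than this two-class toy problem.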
  • A Prototype P300 BCI Communicator for Sinhala Language
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Manisha, U.K.D.N.; Liyanage, S.R.
    A Brain-Computer Interface (BCI) is a communication system which enables its users to send commands to a computer using only brain activities. These brain activities are generally measured by ElectroEncephaloGraphy (EEG) and processed by a system using machine learning algorithms to recognize the patterns in the EEG data. The P300 event-related potential is an evoked response to an external stimulus that is observed in scalp-recorded EEG. The P300 response has proven to be a reliable signal for controlling a BCI. A P300 speller presents a selection of characters arranged in a matrix. The user focuses attention on one of the character cells of the matrix while each row and column of the matrix is intensified in a random sequence. The row and column intensifications that intersect at the attended cell represent the target stimuli. The rare presentation of the target stimuli within the random sequence of stimuli constitutes an Oddball Paradigm and elicits a P300 response to the target stimuli. The Emotiv EPOC provides an affordable platform for BCI applications. In this study, a speller application for Sinhala language characters was developed for Emotiv users and tested. Classification of the P300 waveform was carried out using a dynamically weighted combination of classifiers. A mean letter classification accuracy of 84.53% and a mean P300 classification accuracy of 89.88% were achieved on a dataset collected from three users.
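    As a rough illustration of the classification step, the sketch below separates target from non-target flashes using flattened EEG epochs and a single linear discriminant classifier. The paper itself uses a dynamically weighted combination of classifiers; the channel count, epoch length and synthetic data here are assumptions made purely for demonstration.

        # Simplified stand-in for P300 target/non-target classification.
        # A single LDA on flattened epochs replaces the paper's dynamically
        # weighted classifier combination; all data below is synthetic.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        n_epochs, n_channels, n_samples = 600, 14, 128   # e.g. a 14-channel headset
        rng = np.random.default_rng(1)
        epochs = rng.normal(size=(n_epochs, n_channels, n_samples))
        labels = rng.integers(0, 2, size=n_epochs)        # 1 = attended row/column flash

        # Add a crude P300-like positive deflection after the target flashes.
        epochs[labels == 1, :, 35:50] += 1.0

        X = epochs.reshape(n_epochs, -1)                  # flatten channels x time
        lda = LinearDiscriminantAnalysis()
        print("P300 detection accuracy:",
              cross_val_score(lda, X, labels, cv=5).mean())

    In the speller itself, the row and column with the strongest P300 evidence across repeated flashes are intersected to select the intended Sinhala character.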
  • Sinhala Character Recognition using Tesseract OCR
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Manisha, U.K.D.N.; Liyanage, S.R.
    In Sri Lanka, there are many fields that use Sinhala script, such as newspaper editing, writing, postal services and application processing. In these fields, often only scanned or printed copies of Sinhala script are available, and the text has to be entered into a computerized system manually, which consumes considerable time and cost. The proposed method consists of two parts: image pre-processing and training the OCR classifier. In image pre-processing, the scanned images were enhanced and binarized using image processing techniques such as grayscale conversion and local thresholding. Distortions in the scanned images, such as watermarks and highlights, were removed through grayscale conversion with color intensity adjustments. In OCR training, the Tesseract OCR engine was used to create a Sinhala language data file, which was then used with a customized function to detect Sinhala characters in scanned documents. First, the pre-processed images were segmented (white letters on a black background) using local adaptive thresholding, applying Otsu's thresholding algorithm to separate the text from the background. Page layout analysis was then performed to identify non-text areas such as images and to split multi-column text into columns. Baselines and words were then detected using blob analysis, with each blob sorted by its x-coordinate (the left edge of the blob) as the sort key, which makes it possible to track skew across the page. After the characters were separated, they were labeled manually as Sinhala language characters. When the Sinhala language data file is used with the OCR function, it returns the recognized text, the recognition confidence, and the location of the text in the original image. By considering the recognition confidence of each word, it is possible to control the accuracy of the system. The classifier was trained using 40 character sets with 20 images of each character, tested on over 1000 characters (200 words) with varying font sizes, and achieved approximately 97% accuracy. The elapsed time was less than 0.05 per line of more than 20 words, a considerable improvement over manual data entry. Since the classifier can be retrained using test images, it can be developed further to achieve active learning.
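    The following is a minimal sketch of the described recognition flow, assuming OpenCV and the pytesseract wrapper with an installed Sinhala traineddata file (referred to here as 'sin'). The paper used its own custom-trained Tesseract data file and a customized recognition function, so this is only an approximation of that pipeline, and the input file name is hypothetical.

        # Grayscale conversion, Otsu binarization and Tesseract recognition with a
        # Sinhala language data file; confidence filtering mirrors the paper's use
        # of per-word recognition confidence to control accuracy.
        import cv2
        import pytesseract

        image = cv2.imread("scanned_page.png")              # hypothetical input scan
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # grayscale conversion
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Returns, per word: the recognized text, its confidence, and its location.
        data = pytesseract.image_to_data(binary, lang="sin",
                                         output_type=pytesseract.Output.DICT)

        for text, conf, x, y, w, h in zip(data["text"], data["conf"], data["left"],
                                          data["top"], data["width"], data["height"]):
            if text.strip() and float(conf) > 60:           # keep confident words only
                print(f"{text}  (conf={conf}, box=({x},{y},{w},{h}))")

    Thresholding the per-word confidence is the same lever the abstract describes for controlling the accuracy of the system.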
  • EduMiner - An Automated Data Mining Tool for Intelligent Mining of Educational Data
    (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Kasthuriarachchi, K.T.S.; Liyanage, S.R.
    Data mining is a computer-based information discipline devoted to scanning huge data repositories, generating information and discovering knowledge. Data mining seeks to find patterns in data, organize information about hidden relationships, build association rules and perform many more operations that cannot be carried out using classic computer-based information systems. Therefore, data mining outcomes represent valuable support for decision making in various industries. Data mining in education is not a novel area, but it is currently in its prime. Educational data mining has emerged as a paradigm oriented towards designing models, tasks, methods and algorithms for exploring data from educational settings. It finds patterns and makes predictions that characterize learners' behaviors and achievements, domain knowledge content, assessments, educational functionalities, and applications. Educators and non-experts in data mining use different data mining tools to perform mining tasks on learners' data. A few tools are available for carrying out educational data mining tasks, but they have several limitations; their main issue is that they are difficult for non-experts and educators to use. Therefore, an automated tool is required that satisfies the data mining needs of different users. "EduMiner" is introduced to make important predictions about students in the education domain using data mining techniques. RStudio, R Shiny, data mining algorithms and several key functionalities of Knowledge Discovery in Databases have been used in the development of "EduMiner". The functionalities of the tool are user-friendly and simple for novice users. The user configures the tool in advance by providing appropriate inputs, such as the data set and the algorithms to be used for mining, in order to obtain the results of the analysis. Pre-processing is performed to clean the data before the analysis starts. The tool is capable of performing several analytical tasks: student dropout prediction, student module performance prediction, module grade prediction, recommendations for students and teachers, student enrollment criteria prediction, and student grouping according to different characteristics. Apart from these features, the tool will include intelligent execution of data analysis tasks on real-time data as a background service. Finally, the results of the analysis are evaluated and visualized so that they can be easily understood by the user. Users in the education industry can gain considerable value from this tool, since it is user-friendly and its mining results are easy to understand.
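    For illustration, the sketch below reproduces the kind of dropout-prediction workflow the tool automates, with pre-processing followed by classification and evaluation. EduMiner itself is implemented with RStudio and R Shiny; the Python libraries, file name and column names used here are hypothetical stand-ins.

        # Hypothetical dropout-prediction pipeline: impute/encode the inputs,
        # train a classifier, and report evaluation metrics.
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.impute import SimpleImputer
        from sklearn.pipeline import Pipeline
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        df = pd.read_csv("students.csv")                    # hypothetical data set
        numeric = ["attendance_rate", "assignment_avg", "gpa"]
        categorical = ["programme", "entry_qualification"]
        X, y = df[numeric + categorical], df["dropped_out"]

        # Cleaning/encoding step followed by a classifier, mirroring the
        # KDD-style pre-processing the tool runs before its analysis.
        prep = ColumnTransformer([
            ("num", SimpleImputer(strategy="median"), numeric),
            ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ])
        model = Pipeline([("prep", prep),
                          ("clf", RandomForestClassifier(random_state=0))])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        model.fit(X_tr, y_tr)
        print(classification_report(y_te, model.predict(X_te)))

    A tool like EduMiner wraps these configuration choices (data set, algorithm, prediction target) behind a user interface so that educators do not have to write such code themselves.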