Browsing by Author "Dias, N.G.J."
Now showing 1 - 20 of 47
Item A sales management system using cloud computing technology (Book of Abstracts, Annual Research Symposium 2014, 2014) Wijethunga, I.A.; Dias, N.G.J.
At present, cloud computing technology is used in many applications because of the rapid development of network technology, and the cost of transmitting a terabyte of data over a long distance has fallen dramatically in the past decade. Cloud computing is a technique based on distributed computing resources offered on a pay-per-usage basis. A user can access cloud services as a utility and is able to use them almost instantly. These features make cloud computing very flexible, with services accessible anywhere at any time.

Item An analysis of sound parameters for prosodic modeling in Sinhala text to speech synthesis (Research Symposium 2009 - Faculty of Graduate Studies, University of Kelaniya, 2009) Dias, N.G.J.; Kumara, K.H.; Dolawattha, D.D.M.
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software and/or hardware. Text-to-Speech (TTS) is one of the speech synthesis technologies. Before a synthesizer can produce an utterance, several steps have to be completed. Among them, after computing the basic pronunciation from orthographic text, prosody annotation should be performed. Finding the correct intonation, stress and duration from written text is the most challenging problem for most natural languages. Together these features are called prosodic or suprasegmental features, and they may be considered the melody, rhythm and emphasis of speech at the perceptual level. Unfortunately, written text usually contains very little information about these features, and some of them change dynamically during speech. However, with some specific control characters this information must be given (at least to some extent) to the speech synthesizer to produce sufficiently natural speech in the target language. On the other hand, timing at sentence level, or grouping words into phrases correctly, is difficult; in many languages, prosodic phrasing is not always marked in text by punctuation, and phrasal accentuation is almost never marked. If there are no breath pauses in speech, or if they are in the wrong places, the speech may sound very unnatural, or the meaning of the sentence may even be misunderstood. As an example, in Sinhala, the input string "wïu wdjo@" can be spoken in three different ways by changing the intonation pattern to angry, sad or sarcastic, giving three different meanings to the listener. Here intonation means how the pitch pattern, or fundamental frequency, changes during speech; with a female voice the fundamental frequency may be twice as high as with a male voice, and with children it may be even three times as high. The prosody of continuous speech depends on many separate aspects, such as the meaning of the sentence and the speaker's characteristics and emotions. Therefore, it is clear that prosody plays a major role in speech synthesis, and a deeper treatment of prosody is a must in any kind of speech synthesis. In this work, in order to develop generic models for prosodic synthesis, we selected 150 possible sentences in the Sinhala language and recorded them according to the above three intonation patterns (i.e. angry, sad and sarcastic) with a female native speaker who is well trained in drama and theatre. We then computed various speech parameters for the 150 × 3 sentences using the PRAAT speech processing tool (www.praat.org).
We found that for all 150 sentences there is an incremental pattern in the duration from angry to sarcastic, but no regular pattern in the median, mean, standard deviation, minimum and maximum values of the pitch parameter. Regarding pulses, we computed the number of pulses, the number of periods, the mean period and the standard deviation of the period for each of the sound files, and observed no regular pattern. For the voicing parameter, we computed the fraction of locally unvoiced frames, the number of voice breaks and the degree of voice breaks; here too there were no regular patterns. We then computed the harmonicity values (mean autocorrelation, mean noise-to-harmonics ratio, mean harmonics-to-noise ratio) and again found no regular pattern. After computing the mean-energy intensity of each sentence, we found an incremental pattern in intensity in the order angry, sarcastic, sadness. Finally, we computed the formant values (first formant, first bandwidth, second formant, second bandwidth, third formant, third bandwidth, fourth formant and fourth bandwidth) and found no regular pattern in the different formant parameters. Although there are no regular patterns in most of the above speech parameters, in order to develop a more natural-sounding speech synthesizer these parameters should nevertheless be annotated with the basic pronunciation computed from the orthographic text. Therefore, in future we hope to develop more generic probabilistic models based on this analysis to model the above speech parameters for Sinhala speech synthesis.
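The abstract names duration, pitch and intensity measures computed with PRAAT. As a rough illustration of what such measurements involve (not the authors' PRAAT scripts; the file name, frame position and search range are assumptions), a plain-numpy sketch for one recorded sentence:

```python
# Illustrative sketch: duration, mean intensity, and a crude F0 estimate for
# one sentence, assuming a 16-bit mono WAV file named "sentence.wav".
import wave
import numpy as np

with wave.open("sentence.wav", "rb") as w:
    sr = w.getframerate()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
x = samples.astype(np.float64) / 32768.0

duration = len(x) / sr                      # total duration in seconds
rms = np.sqrt(np.mean(x ** 2))
intensity_db = 20 * np.log10(rms + 1e-12)   # mean intensity relative to full scale

# Crude pitch estimate on a 40 ms frame via autocorrelation,
# searching lags that correspond to 75-500 Hz.
frame = x[len(x) // 2 : len(x) // 2 + int(0.04 * sr)]
frame = frame - frame.mean()
ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
lo, hi = int(sr / 500), int(sr / 75)
f0 = sr / (lo + np.argmax(ac[lo:hi]))

print(f"duration={duration:.3f}s intensity={intensity_db:.1f}dB F0~{f0:.0f}Hz")
```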
Item An Android Application in Searching for Hospitals (University of Kelaniya, 2012) Chandrasena, A.M.D.; Dias, N.G.J.
This research focuses on the Android application development techniques needed to implement a mobile application that can search for information about hospitals together with their exact or nearest locations. Since there is no application available to developers that explains such techniques, this research presents such a development. A number of applications already exist that provide the user with information about a place he or she wants to visit, but these applications are limited to desktops. The objective of this research is to develop such an application for Android mobile devices. The application helps users find the location of hospitals along with hospital and doctors' information. Android users can search for any hospital in the country with its exact or nearest location using Google Maps in satellite or map view. This is an information service, accessible from Android mobile devices through the mobile network, which makes use of the geographical position of any hospital in the country. Users can also search for information about doctors, such as the day, time and the hospital that has the facility to channel them according to their specialty. Data is inserted into the database by the administrator through a service-based web interface, and the Android application then fetches that data according to the given details. The application integrates Google Maps to display the location of hospitals using their coordinates. People face many difficulties in finding information about hospitals for a variety of reasons. Therefore, the Hospital Search Application for Android Mobiles was developed to find information about hospitals and doctors, providing a solution for people who face difficulties when they search for such service providers and places.

Item Anti-Counterfeit Method for Computer Hardware using Blockchain (International Journal of Computer Applications, 2022) Britto, C.D.; Dias, N.G.J.
Counterfeit computer hardware products are designed to look exactly the same as the genuine products, and many people are tricked by counterfeiters in online markets. This motivates the need for a secure and efficient mechanism to identify fake/counterfeit products. The proposed method is implemented using Blockchain technology. Each block represents a product, and the hash key of that product is calculated using the specified block attributes. The buyer details are updated by a verified retailer; thereafter, any user can check the validity of the product using the hash key and the retailer name. A tampered block is reported to the customer, and the product is then treated as invalid. The system can be upgraded by hosting the application on a web server for distribution and by separating the application functions according to user level (manufacturer, retailer and buyer). The proposed method therefore provides a more secure and reliable way to handle computer hardware counterfeits.
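A minimal sketch of the block-hashing idea the abstract describes, with invented field names (the paper's exact block attributes are not given here): each block records one product, its hash is derived from the block attributes, and tampering is caught by recomputation.

```python
# Minimal sketch; field names are assumptions, not the paper's schema.
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    serial_no: str      # product serial number
    product: str        # product description
    retailer: str       # verified retailer who updated the buyer details
    buyer: str          # buyer recorded by the retailer
    prev_hash: str      # hash of the previous block in the chain

    def compute_hash(self) -> str:
        payload = f"{self.serial_no}|{self.product}|{self.retailer}|{self.buyer}|{self.prev_hash}"
        return hashlib.sha256(payload.encode()).hexdigest()

# Build a two-block chain and verify a product.
genesis = Block("SN-0", "genesis", "-", "-", "0" * 64)
b1 = Block("SN-1001", "GPU model X", "RetailerA", "Alice", genesis.compute_hash())
stored_hash = b1.compute_hash()             # what the buyer was given

def is_valid(block: Block, claimed_hash: str, claimed_retailer: str) -> bool:
    # A tampered block (or a wrong retailer name) fails verification.
    return block.compute_hash() == claimed_hash and block.retailer == claimed_retailer

print(is_valid(b1, stored_hash, "RetailerA"))   # True
b1.buyer = "Mallory"                            # tamper with the block
print(is_valid(b1, stored_hash, "RetailerA"))   # False -> product flagged invalid
```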
Item Application of Witten-Bell discounting techniques for smoothing in part of speech tagging algorithm for Sinhala language (Faculty of Graduate Studies, University of Kelaniya, 2015) Jayaweera, M.P.; Dias, N.G.J.

Item Applying Intelligent Speed Adaptation to a Road Safety Mobile Application – DriverSafeMode (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2017) Perera, W.S.C.; Dias, N.G.J.
During the last decades, Sri Lanka has experienced highly accelerated growth in motorized transportation, with rapid urbanization due to economic development. However, the increasing motorization has also placed a significant burden on people's health in the form of an uncontrollable growth rate of road accidents and fatalities. We have focused on excess speed and mobile distraction, two major factors that cause the majority of road accidents. Exceeding the speed limit, which is enforced under traffic law, increases both the risk of a road crash and the severity of the injuries by reducing the ability to judge forthcoming events. Use of mobile phones distracts a driver visually, physically and cognitively. These factors are largely preventable, but prevention is unlikely due to the lack of adequate mechanisms in existing road safety plans in Sri Lanka. Especially in rural areas, roads are poorly maintained, which has led to faded, hidden or foliage-obscured speed limit signs and the absence of appropriate signs at vulnerable locations (schools, hospitals). Existing plans also lack alert systems to prevent drivers from using phones while driving. The proposed application uses Advisory Intelligent Speed Adaptation (ISA) to ensure drivers' compliance with legally enforced speed limits by informing the driver of the vehicle speed along with the speed limits and giving feedback. Many ISA systems have been deployed using various methods such as GPS, transponders, compasses, speed sensors and map matching, based on the native traffic infrastructures of other countries. The Google Fused Location Provider API web service was used, combined with the GPS sensor of the smartphone, to obtain continuous geolocation points (latitude, longitude). The distance between two location points was calculated using the Haversine algorithm, and the vehicle speed was calculated from that distance and the time spent between the two location updates. The Google Maps Geocoding API was used to obtain the type of road on which the driver is driving. Accepted speed limits were stored in a cloud-hosted database according to each road type and vehicle type; the application connects to the database to obtain the accepted speed limit whenever a new road type is detected. It compares the real-time speed against the speed limit and initiates audio and visual alerts when the vehicle speed exceeds the limit. The Google Places API was used to identify schools and hospitals within 100 m and inform the driver using audio and visual alerts. The application uses the in-built GSM service to reject incoming calls and the in-built notification service to mute distracting notifications. A test trial was carried out to evaluate the accuracy of speed detection: the mean speed of the test vehicle speedometer was 14.4122 km/h (standard deviation = 14.85891) and that of the application was 13.7488 km/h (standard deviation = 14.31279). An independent-sample t-test showed that the speed values of the test vehicle and the application are not significantly different at the 5% level of significance. The user experiences of 30 randomly selected test drivers were evaluated: 80% of light-motor-vehicle test drivers stated that the application is very effective, while 10% of heavy-motor-vehicle drivers and 20% of tricycle test drivers found it difficult to perceive the audio alerts due to noisy surroundings. The evaluations show that the proposed system can have a direct and positive effect on road safety in Sri Lanka, as expected.
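The speed computation described above, Haversine distance between consecutive GPS fixes divided by the time between updates, can be sketched as follows; the coordinates in the usage line are invented for illustration.

```python
# Sketch of the described speed computation (helper names are assumptions).
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speed_kmh(lat1, lon1, t1_s, lat2, lon2, t2_s) -> float:
    """Speed in km/h from two timestamped location fixes."""
    dist_m = haversine_m(lat1, lon1, lat2, lon2)
    return (dist_m / (t2_s - t1_s)) * 3.6

# Two fixes 2 s apart and roughly 25 m apart -> about 45 km/h.
print(speed_kmh(6.9740, 79.9160, 0.0, 6.9742, 79.9161, 2.0))
```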
Item Automatic Segmentation Of Given Set Of Sinhala Text Into Syllables For Speech Synthesis (University of Kelaniya, 2007) Kumara, K.H.; Dias, N.G.J.; Sirisena, H.
A dictionary-based automatic syllabification tool is presented for speech synthesis in the Sinhala language. The tool is also capable of providing frequency distributions of vowels, consonants and syllables for a given set of Sinhala text. A method of determining syllable boundaries is also shown: detection of the syllable boundaries of a given Sinhala sentence is achieved in four main phases, and those phases are described with examples. Rules for the automatic segmentation of words into syllables have been derived based on a dictionary, and an algorithm is given for the implementation of these rules, utilizing the dictionary together with an accurate mark-up of the syllable boundaries.

Item Automatic Segmentation of Separately Pronounced Sinhala Words into Syllables (University of Kelaniya, 2011) Priyadarshani, P.G.N.; Dias, N.G.J.
Aligned corpora are widely used in various speech applications such as automatic speech recognition and speech synthesis, as well as in prosodic and phonetic research. The segmentation into syllables can be done manually or automatically, but fully manual phonetic segmentation consumes significantly more time and is practically a complicated task, because in many cases it requires a large aligned speech corpus. If manual syllabification is done by a group of individuals, consistency decreases because of the analysis variations among the individuals. Consequently, there is a dire need for automatic syllabification, which is important because the Sinhala language is syllable-centric in nature. A method for the syllabification of acoustic signals of separately pronounced Sinhala words is given. Detecting the syllable boundaries is achieved in two main phases, and those phases are described with examples.

Item Classification and Regression Trees (CART) based Data Driven Approach for Prosody Duration Modeling in Sinhala Language (Research Symposium 2010 - Faculty of Graduate Studies, University of Kelaniya, 2010) Dolawattha, D.D.M.; Dias, N.G.J.; Kumara, K.H.
A Text-to-Speech (TTS) synthesizer, or Text-to-Speech engine, is a computer-based system capable of reading any text aloud naturally. In TTS, the text might be inserted directly into the computer by an operator, or taken from the output file of an Optical Character Recognition (OCR) system applied to a scanned written text document. Prosody features play a major role when developing a TTS system; obtaining the correct intonation, stress and duration from written text is among the most challenging problems for natural languages. Prosodic duration strongly affects the naturalness and intelligibility of machine-generated synthetic speech. Here, in order to model duration, we used different features that are automatically derived from the text and that affect the duration pattern of natural speech. In this work, in order to develop generic models for prosodic synthesis, we selected a speech corpus of 150 possible sentences in the Sinhala language and recorded them according to the three intonation patterns angry, sadness and sarcastic with a female native speaker who is well trained in drama and theatre. Both the waveform and the spectrogram were used to determine the segment (phoneme) boundaries, and the boundaries identified were confirmed by listening to the speech. Each segment in the corpus was annotated with the following features together with the actual segment duration, and finally the CART was generated. The features considered are: the identity of the current phoneme, the identity of the preceding phoneme, the identity of the following phoneme, the position in the parent syllable, whether the parent syllable is initial, whether the parent syllable is final, the parent syllable position type, the number of syllables in the parent word, the position of the parent syllable in the word, the parent syllable's break information, the phrase length (number of words) and the position of the phrase in the utterance. These features were drawn from similar work carried out for other languages, especially Asian languages [1]. Prediction of the segmental durations was done as follows: the decision tree (CART) is traversed starting from the root node, taking various paths satisfying the conditions at intermediate nodes, until a leaf node is reached; the leaf node contains the predicted segmental duration.
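A toy sketch of the CART approach the abstract describes, using scikit-learn's regression tree in place of the authors' tool, with invented feature encodings and durations:

```python
# Illustrative sketch (toy data, not the authors' corpus): a regression tree
# predicts segment duration from contextual features.
from sklearn.tree import DecisionTreeRegressor

# Each row: [current phoneme id, preceding phoneme id, following phoneme id,
#            position in parent syllable, syllables in parent word]
X = [
    [3, 1, 5, 0, 2],
    [5, 3, 1, 1, 2],
    [1, 5, 3, 0, 3],
    [3, 1, 1, 2, 3],
    [5, 3, 5, 1, 1],
]
y = [0.085, 0.120, 0.072, 0.098, 0.140]   # observed segment durations (seconds)

cart = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Prediction traverses the tree from the root, taking branches that satisfy
# the conditions at intermediate nodes, until a leaf holds the duration value.
print(cart.predict([[3, 1, 5, 1, 2]]))
```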
Item Comparison of Part of Speech taggers for Sinhala Language (Faculty of Graduate Studies, University of Kelaniya, Sri Lanka, 2016) Jayaweera, M.; Dias, N.G.J.
Part of Speech (POS) tagging is an important tool for processing natural languages and one of the basic analytical models used in many natural language processing applications. It is the process of marking up a word in a corpus as corresponding to a particular part of speech, such as noun, verb, adjective or adverb. The automatic assignment of descriptors to the given tokens is called tagging, and the descriptor is called a tag. The tag may indicate one of the parts of speech categories and semantic information, so tagging is a kind of classification. The process of assigning one of the parts of speech to a given word is called parts of speech tagging, commonly referred to as POS tagging. In grammar, a part of speech (also known as a word class, lexical class or lexical category) is a linguistic category of words (or, more precisely, lexical items), generally defined by the syntactic or morphological behaviour of the lexical item in the language. Each part of speech explains not what the word is, but how the word is used; in fact, the same word can be a noun in one sentence and a verb or adjective in another. In most natural languages of the world, noun and verb are common linguistic categories; almost all languages have the lexical categories noun and verb, but beyond these there are significant variations between languages. The significance of the part of speech for language processing is that it gives a significant amount of information about the word and its neighbours. There are different approaches to the problem of assigning a part of speech tag to each word of a natural language sentence. The most widely used methods for English are the statistical methods, that is, Hidden Markov Model (HMM) based tagging, and the rule-based or transformation-based methods; subsequent research has added various modifications to these basic approaches to improve the performance of taggers for English. In this paper we present a comparison of the different research that has been carried out on POS tagging for the Sinhala language, for which there are four reported works on developing a POS tagger. In 2004, an HMM-based POS tagger was proposed using a bigram model and reported only 60% accuracy. Another HMM-based approach was tried for Sinhala in 2013 and reported 62% accuracy. In 2016, another study reported 72% accuracy using a hybrid approach based on a bigram HMM together with rules for predicting the relevant tag for unknown words. The tagger that we have developed is based on a trigram HMM approach, which uses knowledge of the distribution of words and parts of speech categories to predict the relevant tag for unknown words. The Witten-Bell discounting technique was used for smoothing, and our approach gave an accuracy of 91.50% with a corpus of 90,551 annotated words.
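A compact sketch of Witten-Bell discounting as it might apply to tag-transition probabilities, simplified here to bigrams and a toy tagset (the paper's tagger is trigram-based):

```python
# Witten-Bell discounting for tag transitions: seen bigrams are discounted,
# and the reserved mass is spread over tags never seen after a history.
from collections import Counter, defaultdict

tag_sequences = [
    ["NOUN", "VERB", "NOUN"],
    ["NOUN", "ADJ", "NOUN", "VERB"],
    ["ADJ", "NOUN", "VERB"],
]
tagset = {"NOUN", "VERB", "ADJ", "ADV"}

bigrams = Counter()
histories = Counter()
followers = defaultdict(set)          # distinct tags seen after each history
for seq in tag_sequences:
    for prev, cur in zip(seq, seq[1:]):
        bigrams[(prev, cur)] += 1
        histories[prev] += 1
        followers[prev].add(cur)

def p_witten_bell(cur: str, prev: str) -> float:
    c_hw = bigrams[(prev, cur)]
    c_h = histories[prev]
    t = len(followers[prev])          # observed follower types ("new events")
    z = len(tagset) - t               # follower types never seen after prev
    if c_hw > 0:
        return c_hw / (c_h + t)       # discounted ML estimate for seen bigrams
    return t / (z * (c_h + t))        # reserved mass spread over unseen bigrams

print(p_witten_bell("VERB", "NOUN"))  # seen transition: 3/(4+2) = 0.5
print(p_witten_bell("ADV", "NOUN"))   # unseen transition still gets mass > 0
```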
Item Deep learned Visual Model for Human Computer Interaction (HCI) (Staff Development Center, University of Kelaniya, Sri Lanka, 2015) Kumarika, B.M.T.; Dias, N.G.J.
Background and rationale: Modern hand gesture recognition approaches can be classified as 'contact' based and 'vision' based. Contact-based approaches like the Data Glove require physical contact, which can cause health issues and is uncomfortable for some users. In contrast, users wear nothing in vision-based approaches, where camera(s) capture images of hands interacting with computers (Dan & Mohod, 2014). The vision-based approach is therefore simple, natural and convenient. However, challenges to be addressed include illumination change, varying sizes of hand gestures and background clutter in visual pattern identification (Symonidis, K., 2000).
Aim: The practical application of computer-vision-based hand gesture recognition systems therefore necessitates an efficient algorithm capable of handling those challenges.
Theoretical underpinning / Conceptual framework: As a solution to the complexity of the problem, this research proposes a Deep Neural Network (DNN) as a robust, deep-learned visual model. Deep learning attempts to model high-level abstractions (features) in data by using a biologically inspired model. The visual cortex of our brain is well studied and shows a sequence of areas, each of which contains a representation of the input, with signals flowing from one area to the next. Thus, each level of this feature hierarchy represents the input at a different level of abstraction, with more abstract features further up the hierarchy defined in terms of the lower-level ones, where classification becomes easy.
Proposed methodology: A database of hand gesture images was created and is used for training and testing. Greedy layer-wise training is used to avoid the problems of training deep networks in a supervised fashion, such as slow training, overfitting and unlabelled data. The results will be compared against test data comprising 15% of the data set, and the results of the two tests, of traditional networks and of the deep network, will also be compared.
Expected outcomes: This will provide a robust Deep Neural Network as an efficient visual pattern recognition algorithm for real-time hand gesture recognition.

Item Deep Unsupervised Pre-trained Neural Network for Human Gesture Recognition (Faculty of Graduate Studies, University of Kelaniya, 2015) Kumarika, B.M.T.; Dias, N.G.J.
Recognition of visual patterns for real-world applications is a complex process that involves many issues. Varying and complex backgrounds, bad lighting environments, person-independent gesture recognition and computational costs are some of the issues in this process. Since human gestures are perceived through vision, the task is a subject of visual pattern recognition. Hand gesture recognition is of particular interest for Human-Computer Interaction (HCI), due to its widespread applications in virtual reality, sign language recognition, robot control, the medical industry and computer games. The main goal of the research is to propose a computationally efficient and accurate pattern recognition algorithm for HCI. Deep learning attempts to model high-level abstractions (features) in data and build a strong feature space for the recognition task. A neural network with five hidden layers was used, where each layer can learn features at a different level of abstraction. However, training neural networks with multiple hidden layers is difficult in practice. First, each hidden layer was trained individually in an unsupervised fashion using autoencoders. After training the first autoencoder, the second autoencoder was trained in a similar way; the main difference is that the features generated by the first autoencoder are used as the training data for the second autoencoder, which decreases the size of the hidden representation so that the second autoencoder learns an even smaller representation of the input data. The original vectors in the training data had 101,376 dimensions; after passing through the first encoder this was reduced to 10,000 dimensions, and after the second encoder to 1,000 dimensions. Likewise, at the end, a final layer was trained to classify 50-dimensional vectors into the different image classes. The result for the deep neural network is improved by performing backpropagation on the whole multilayer network.
Finally, we observed that the average test classification error for a traditional neural network with a supervised learning algorithm is 3.6%, while the error for the pre-trained deep neural network is 1.4%. We conclude that unsupervised pre-training adds robustness to a deep architecture and yields a computationally efficient and accurate pattern recognition algorithm for HCI.
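A condensed PyTorch sketch of the greedy layer-wise scheme the abstract describes, with toy dimensions standing in for the 101,376-dimensional image vectors:

```python
# Each autoencoder is trained on the codes of the previous one; the stacked
# encoders are then fine-tuned end to end with backpropagation. Toy data.
import torch
import torch.nn as nn

def train_autoencoder(data, hidden_dim, epochs=50):
    in_dim = data.shape[1]
    enc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
    dec = nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(data)), data)  # reconstruction loss
        loss.backward()
        opt.step()
    return enc

x = torch.rand(256, 64)                         # toy "images": 256 samples, 64 dims
enc1 = train_autoencoder(x, 32)                 # first unsupervised layer
enc2 = train_autoencoder(enc1(x).detach(), 16)  # trained on layer-1 codes

# Stack the pre-trained encoders, add a classifier, then fine-tune end to end.
labels = torch.randint(0, 5, (256,))            # toy labels for 5 gesture classes
model = nn.Sequential(enc1, enc2, nn.Linear(16, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                             # supervised fine-tuning pass
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), labels)
    loss.backward()
    opt.step()
```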
Item Design & implementation of an efficient SMS server (Research Symposium 2009 - Faculty of Graduate Studies, University of Kelaniya, 2009) Dias, N.G.J.; Rathnasekara, P.L.A.U.
Short Message Service (SMS) is one of the most popular services provided by telecommunication companies all over the world. Due to the low cost and efficiency of this service compared to traditional ways of sending messages, companies nowadays use this technology heavily to send business messages to their customers and employees. The main objective of this research is to implement an SMS server using open source software with minimum resources. An SMS server has two main features: sending messages, and receiving messages and storing them in a database. Apart from these two features, the proposed server has many others, such as categorizing received messages according to type, restricting the number of messages sent, preventing users from logging in to the server during administrator-defined hours, creating template messages, and allowing login to the server only from authorized client machines (by IP address). In order to achieve a higher level of security, we store the encrypted password together with the username for validating each user's login to the server; these data are retrieved through SQL commands using data decryption methods. The main function of the server is sending and receiving messages using a GSM modem. The initial step was to configure the GSM modem and connect it to the server machine through a USB port. A connection must be established with the SIM card, since the functionality of the modem is handled completely by the SIM card. After a connection is established, SMS can be sent and received from the SIM card using AT command (Hayes command) technology. Sent and received messages are stored in the outbox and inbox tables of the database, respectively, and are then classified according to type. CSV file uploading was used to insert data into the database, since it is more convenient for the user; with this method, messages are stored in a queue table and then sent one by one automatically at a user-specified time. When sending a message, the server checks whether the recipient number is restricted or in the correct format. The server was built on the Apache Tomcat web server, and the web pages were created using JSP technology. The MySQL database server, JDK 1.5 and Rational Rose software were used in the development of the database. The server was built using only one modem; however, it can be extended to support several modems to increase efficiency when sending messages to millions of customers using the queue. The server developed is efficient and can be used in any company or organization in a robust manner.
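Sending one SMS over a GSM modem with AT (Hayes) commands, as the abstract describes, might look like the following pyserial sketch; the port name, timings and the simplified response handling are assumptions that depend on the modem.

```python
# Minimal sketch, not the paper's Java/JSP implementation: standard GSM AT
# commands over a serial link. Real code should wait for the modem's "OK"
# and ">" prompts instead of sleeping.
import time
import serial  # pyserial

def send_sms(port: str, number: str, text: str) -> None:
    with serial.Serial(port, baudrate=9600, timeout=5) as modem:
        modem.write(b"AT\r")                  # check that the modem responds
        time.sleep(0.5)
        modem.write(b"AT+CMGF=1\r")           # switch to SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")  # Ctrl+Z terminates the message
        time.sleep(2)
        print(modem.read(modem.in_waiting or 64))  # modem response, e.g. +CMGS

send_sms("/dev/ttyUSB0", "+94711234567", "Test message from the SMS server")
```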
Item Design and Implementation of a Web-Based Faculty Information System (University of Kelaniya, 2006) Kumara, K.H.; Munasinghe, L.; Jayasuriya, K.D.; Dias, N.G.J.; de Silva, C.H.; Kalingamudali, S.R.D.
Although Information Systems (IS) are valuable elements for organizations, the private and public sectors in Sri Lanka are reluctant to use IS for decision making, organizing and classifying data, processing transactions, and many other activities. This is caused by the lack of computer literacy and the conventional attitudes of the majority of the Sri Lankan community. Even in the higher education institutions of Sri Lanka, the majority of both staff and students who are well aware of information technology rely on conventional ways of handling information. One major reason for this is the lack of application software well suited to their needs: on one hand, such software is rarely used by institutes because of its high cost; on the other hand, it is highly organization-dependent. Hence, steps were taken to build a Faculty Information System (FIS) for the Faculty of Science, University of Kelaniya. The FIS was developed in a network environment, with the active participation of all those involved by means of continuous dialogue, with the aim of both promoting and demonstrating its benefits and catering to the different needs arising from the faculty community. The FIS consists of three major subsystems, namely the FIS Web Based Subsystem (FISW), the FIS Intranet Subsystem (FISI) and the FIS Examination Subsystem (FISE). FISW provides WWW access to FIS users at any time from anywhere. FISI enables access to the FIS via the Faculty office local area network with security restrictions. FISE processes examination data in a highly secured environment, separated from both FISW and FISI; FISI and FISW eventually connect with FISE under security restrictions as required. It is clear that the development of this type of tool has social, cultural and technological dimensions: what we planned is one thing; what happened in reality, and how the stakeholders responded to the tool, is another. Evidence of the need for this type of tool in the faculty is the number of accesses, 41,784, in two years. This figure is not a complete measure of the acceptance of the FIS; to detect its defects and limitations, it is also necessary to take into account the number of pages requested by each registered user. These statistics can be used to enhance the features of the FIS.

Item Designing an Automatic Speech Recognition System to recognize frequently used sentences in Sinhala (University of Kelaniya, 2013) Samankula, W.G.D.M.; Dias, N.G.J.
There are millions of people with visual impairments, as well as motor impairments caused by old age, sickness or accidents. These people face many challenges in their day-to-day lives. Even at home, a simple task such as controlling the radio, refrigerator or fan becomes difficult, because they have to move using a white cane or wheelchair, or get assistance from others. The aim of this research is to develop a speaker-independent continuous speech recognition system capable of understanding human speech in the Sinhala language, rather than a foreign language, because the majority of people in Sri Lanka speak Sinhala. In order to achieve this goal, human speech signals have to be recognized and converted into effective commands to operate equipment. The Hidden Markov Model Toolkit (HTK), based on the Hidden Markov Model (HMM), a statistical approach, is used to develop the system; HTK handles the data preparation, training, testing and analysis phases of the recognition process. Twenty-five sentences consisting of 2, 3 or 4 words in Sinhala, frequently used in day-to-day activities at home, were prepared. Recording was done with 10 native speakers (5 female and 5 male) in a quiet environment, and 800 speech samples were collected for training from 4 males and 4 females, each speaking every sentence 4 times. The experimental results show 94.00% sentence-level accuracy and 97.85% word-level accuracy using a monophone-based acoustic model, and 99.00% sentence-level accuracy and 99.69% word-level accuracy using a triphone-based acoustic model.
Item Designing and implementation of new computer software system for the Centre for Open and Distance Learning (Research Symposium 2009 - Faculty of Graduate Studies, University of Kelaniya, 2009) Dias, N.G.J.; Dolawattha, D.D.M.
Nearly 150,000 students qualify for university education in Sri Lanka annually, but only 18,000 are selected to follow undergraduate courses in the local universities, where education is free. The remaining students have to follow external degree programmes conducted by national universities or professional courses conducted by private-sector or government institutes, and a few go abroad for higher education. A large portion of the students following external degree courses at national universities register annually at the University of Kelaniya: nearly 85,500 students registered from 1993 to 2008, and 13,716 of them have graduated so far. We have identified that since 2005 more than 10,000 students register annually. Five different degree courses are offered, and 16 exams and 16 seminars need to be conducted for them annually by the Centre for Open and Distance Learning (CODL). A more robust, powerful, user-friendly and reliable Computer Software System (CSS) is required, considering the rapidly growing student numbers and the services rendered to them. A CSS is also required because a new exam evaluation system (NEES) has been introduced from the 2007 student batch; under the NEES, course units carry a particular credit value, and each student needs to complete a specified number of credits within a specified period of time relevant to the degree followed. The CSS is a Management Information System (MIS) type multi-user computer system working in a local network environment, operated by password-restricted users. The main functionalities are student registration, conducting exams, printing admissions, and printing transcripts and certificates, with other required sub-functionalities under the above. All functional requirements, non-functional requirements and domain requirements were identified. The system was designed by integrating concurrency control and user authorization. The authorized users are only the CODL staff, categorized according to their assigned job (i.e. student registration user, examination data entry user, etc.); the user authorization subsystem considers the different functionalities of the CSS and gives each user category access according to the job assigned. Limitations and constraints had to be considered when developing the CSS: it is not connected to the campus-wide network and runs on a separate server, with a view to avoiding internet hacking and reducing the internet virus risk. Examination results are published on the CODL web site, which runs on a separate server. Storage of data in the database is unlimited, and the database backup facility is an important feature. The potential usefulness of the CSS lies in its maintainability and modularity. An integrated software process model, combining the two software process models of incremental development and rapid application development, was used to model the CSS, and user-friendly, interactive interfaces were developed. The design of the CSS was done using Rational Rose with object-oriented software design techniques, and it was developed on the .NET framework using VB.NET as the front-end tool and SQL Server as the back-end tool.
Item Detection of Vehicle License Plates Using Background Subtraction Method (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ashan, M.K.B.; Dias, N.G.J.
The detection of a vehicle license plate can be considered the primary task of a License Plate Recognition System (LPRS). Detecting a vehicle, locating the license plate, and the non-uniformity of license plates are a few of the challenges in detecting a license plate. This paper proposes work to ensure the detection of the license plates used in Sri Lanka. The work consists of a prototype developed using MATLAB's predefined functions. The license plate detection process consists of two major phases: detection of a vehicle from video footage or a real-time video stream, and isolation of the license plate area from the detected vehicle. By sending the isolated license plate image to an Optical Character Recognition (OCR) system, its contents can be recognized. The proposed detection process may depend on factors such as lighting and weather conditions, the speed of the vehicle, efficiency in real-time detection, non-uniformity of number plates, the video source device specifications, and the fitted angle of the camera. In the first phase, the detection of a vehicle from a video source is accomplished by separating the input video into frames and analysing the frames individually. A monitoring mask is applied at the beginning of processing in order to define the road area, which lets the algorithm look for vehicles in the selected area only. To identify the background, a foreground detection model based on an adaptive Gaussian mixture model is used. The learning rate, the threshold value for determining the background model, and the number of Gaussian modes are the key parameters of the foreground detection model, and they have to be configured according to the environment of the video. The background subtraction approach is used to determine the moving vehicles: a reference frame is identified as the background, and by subtracting the current frame from that reference frame, the blobs considered to be vehicles are detected. A blob is a collection of pixels, and the blob size has to be configured according to factors such as the angle of the camera to the road and the distance between the camera and the monitoring area. Even though a vehicle is identified in the above steps, a way to identify each vehicle uniquely is needed to eliminate duplicates being processed in the next layer. As the final step of the first phase, distinct numbers are generated using the Kalman filter for each vehicle detected in the previous steps; this distinct number is an identifier for a particular vehicle until it leaves the global window. In the second phase, the license plate is isolated from the detected vehicle image. First, the input image is converted into grayscale to reduce the luminance of the colour image, and it is then dilated. Dilation is used to reduce the noise of an image, to fill unnecessary holes, and to improve the boundaries of objects by filling broken lines. Next, horizontal and vertical edge processing is carried out, and histograms are drawn for both. The histograms are used to detect the probable candidate regions where the license plate is located. The histogram values of edge processing can change drastically between consecutive columns and rows; these drastic changes are smoothed, and the unwanted regions are then detected using the low histogram values. By removing these unwanted regions, the candidate regions which may contain the license plate are identified. Since the license plate region is assumed to contain a few letters placed closely on a plain-coloured background, the region with the maximum histogram value is considered the most probable candidate for the license plate. In order to demonstrate the algorithm, a prototype was developed using MATLAB R2014a; additional plugins, namely the Image Acquisition Toolbox Support Package for OS Generic Video Interface, the Computer Vision System Toolbox and the Image Acquisition Toolbox, were used for the development. When the prototype is used on a particular video stream or file, the parameters of the foreground detector and the blob size must first be configured according to the environment; the monitoring window and hardware configurations can then be done. The prototype was tested using both video footage and static vehicle images, grouped according to factors such as the non-uniformity of number plates and the fitted angle of the camera. Vehicle detection showed an efficiency of around 85%, and license plate locating efficiency was around 60%; the algorithm therefore showed an overall efficiency of around 60%. The objective of this work is to develop an algorithm which can detect vehicle license plates from a video source file or stream. Since the problem of detecting vehicle license plates is crucial for some complex systems, the proposed algorithm fills that gap.
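The first phase (adaptive Gaussian-mixture foreground detection plus a minimum blob size) can be sketched with OpenCV as below; the paper's prototype used MATLAB's toolboxes, so this is only an equivalent illustration, with the video file name and parameter values assumed.

```python
# OpenCV equivalent sketch of the first phase described above.
import cv2

cap = cv2.VideoCapture("traffic.mp4")         # assumed input video file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
MIN_BLOB_AREA = 1500                          # tune to camera angle and distance

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame, learningRate=0.005)
    mask = cv2.medianBlur(mask, 5)            # suppress speckle noise
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) >= MIN_BLOB_AREA:
            x, y, w, h = cv2.boundingRect(c)  # blob treated as a vehicle candidate
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("vehicles", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```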
Item Development of a Linear-Model based Computer Software for Least Cost Poultry ration formulation (University of Kelaniya, 2008) Piyaratne, M.K.D.K.; Dias, N.G.J.; Attapattu, M.
This study was based on the development of a user-friendly, linear-model-based computer software system for least-cost poultry ration formulation. The software developed in this work used the most recent advancements in the field of poultry nutrition and feeding, and was developed to suit local conditions. Sixty locally available feed ingredients were used, and thirty nutrients most important to poultry growth were considered. The standard linear programming (LP) model for least-cost ration formulation was used to analyze and determine the most efficient way of compounding the least-cost ration. A mathematical model was constructed, taking into consideration the nutrient composition of each available ingredient, costs, and the nutrient requirements of the birds. Since the ideal protein (IP) concept is becoming popular as a means of increasing the utilization efficiency of dietary proteins by poultry, NRC (National Research Council) and IICP (Ideal Illinois Chick Protein) ideal proteins were also included in broiler rations for the calculations. Although the initial database was based on NRC recommendations, users can freely customize ingredient levels and nutrient requirements as and when required. Ration balancing can be done with 100% equal requirements for up to 10-12 major nutrients at least cost. The standard nutrient requirement levels can be customized, and researchers can experiment with different requirement levels; this software can therefore be a very useful tool for researchers and nutritionists as well as teachers. The amino acid profile selection feature allows researchers to formulate experimental rations with various amino acid and protein levels. The software runs under the Microsoft Windows environment, and users are able to print and save results as well as the initial database information. The software has been successfully installed, tested and evaluated with several research projects.
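A toy instance of the least-cost LP the abstract describes, using scipy's linprog with three invented ingredients and two nutrient constraints (a real formulation would use the sixty ingredients and thirty nutrients mentioned above):

```python
# Minimise ration cost subject to minimum protein and energy levels, with
# inclusion proportions summing to 1. All numbers are invented for illustration.
from scipy.optimize import linprog

# Ingredients: maize, soybean meal, rice polish (fractions of the ration).
cost = [45.0, 95.0, 35.0]            # currency units per kg
protein = [8.5, 44.0, 12.0]          # % crude protein per ingredient
energy = [3300, 2450, 2800]          # kcal ME per kg

# linprog minimises c @ x subject to A_ub @ x <= b_ub; ">= requirement"
# constraints are therefore written with negated coefficients.
A_ub = [[-p for p in protein],       # total protein >= 19%
        [-e for e in energy]]        # total energy  >= 2900 kcal/kg
b_ub = [-19.0, -2900.0]
A_eq = [[1.0, 1.0, 1.0]]             # proportions sum to 1
b_eq = [1.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
print(res.x, res.fun)                # optimal inclusion rates and cost per kg
```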
Item Dynamic Time Warping Based Speech Recognition for Isolated Sinhala Words (Research Symposium 2010 - Faculty of Graduate Studies, University of Kelaniya, 2010) Priyadarshani, P.G.N.; Dias, N.G.J.
Communication between the computer and the human is basically done through keyboard- and screen-oriented systems. In the current Sri Lankan context, this restricts the usage of computers to a small fraction of the population who are both computer literate and conversant in English. Accordingly, the major barrier between the computer and the people of Sri Lanka is language, since English is not the mother tongue of most people and there is a large proportion of under-educated people in rural areas. In order to enable a wider proportion of the population to benefit from information technology, there is a dire need for an interface other than the keyboard and screen interface widely used at present. The best solution is an efficient speech recognizer, so that a natural human-machine interface can be developed to replace traditional interfaces such as the keyboard and mouse of the computer; furthermore, speech technologies promise to be the next-generation user interface. For many languages, speech recognition applications as well as text-to-speech synthesis applications have been developed, have achieved considerably high precision, and have been applied successfully in real-world applications in developed countries. However, there is currently no proper speech recognition approach for the Sinhala language, and research in this field in Sri Lanka is still at an infant stage. Here we investigated the fitness of the dynamic programming technique called the Dynamic Time Warping (DTW) algorithm, in conjunction with Mel Frequency Cepstral Coefficients (MFCC), for identifying separately pronounced Sinhala words. One of the major difficulties in speech recognition is that, although different recordings of the same word include more or less the same sounds in the same order, the durations of each sub-word within the word do not match. Consequently, recognizing words by matching them against reference templates gives inaccurate results if there is no temporal alignment; DTW solves this problem by accommodating differences in timing between test words and reference templates. Converting the sound waves into a parametric representation is a major part of any speech recognition approach, and here we used MFCCs along with their first and second time derivatives as the feature vector, because they have shown better performance in both speech recognition and speaker recognition than other conventional speech features; in addition, the derivatives better reflect the dynamic changes of the human voice over time. For feature extraction, we divide the speech signal into equally spaced frames and compute one set of features per frame, since speech signals are not stationary. We developed the reference templates for each word from one example of that word per speaker and matched the test speech against those reference patterns using the DTW approach, rather than other methods such as Vector Quantization or Euclidean distance, because DTW can successfully deal with test signals and reference templates of the same word having different durations. The local distance measure is the distance between the features of a pair of frames, while the global distance, from the beginning of the utterance to the last pair of frames, reflects the similarity between two vectors. Based on that, we could recognize the words input from our selected vocabulary. Most systems developed for other languages based on DTW have used a very limited vocabulary, for instance ten words, but in this work we used a considerably large vocabulary of 600 words. We obtained the recordings, separated each utterance and made an audio file for each using the software Praat, and developed the program in MATLAB 7.0. For our experiment we used two informants whose native language is Sinhala; since we followed a speaker-dependent approach and tested each speaker separately, the system displayed 80.33% overall accuracy.
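The DTW matching step can be sketched in numpy as follows; the MFCC front end is omitted, and the random 13-dimensional "feature" sequences merely stand in for real templates.

```python
# Sketch of DTW-based template matching: the lowest global DTW cost between
# the test sequence and the reference templates decides the recognised word.
import numpy as np

def dtw_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Global DTW distance between two (frames x features) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-pair distance
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)        # length-normalised global cost

rng = np.random.default_rng(0)
templates = {"word1": rng.normal(size=(40, 13)),   # one reference per word
             "word2": rng.normal(size=(55, 13))}
test = templates["word2"][::2] + 0.05 * rng.normal(size=(28, 13))  # time-warped copy

best = min(templates, key=lambda w: dtw_cost(test, templates[w]))
print(best)   # -> "word2"
```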
Item ER to Relational Model Mapping: Information Preserved Generalized Approach (International Postgraduate Research Conference 2019, Faculty of Graduate Studies, University of Kelaniya, Sri Lanka, 2019) Pieris, D.; Wijegunasekera, M.C.; Dias, N.G.J.
The Entity-Relationship (ER) model is widely used to create ER schemas to represent application domains in the information systems development field. However, when an ER schema is transformed into a Relational Database Schema (RDS), some critical information modeled in the ER schema may not be represented meaningfully in the RDS, causing a loss of information during the transformation process. In our previous studies, we showed that deficiencies existing both in the ER model and in the transformation algorithm cause this situation. Accordingly, we modified the ER model and the transformation algorithm to eliminate the deficiencies and thereby preserve the information in the transformation process. We then showed that a mapping that is one-to-one and onto exists from the ER schema to the RDS, and that the information represented in the ER schema is preserved in the RDS. For this purpose, the ER schema should be created using the modified ER model and transformed into the RDS by the modified transformation algorithm. However, this concept has not yet been proved formally; it needs to be verified for any ER schema representing any application domain. Subsequently, following the modified ER model, we also proposed a generic ER schema (an ER schema that represents any real-world phenomenon in symbolic notation) for use in a future proof-creation process. Thus, in this work, we aim to create a formal proof validating the work we had done. For simplicity, we use a generic ER schema that consists of two regular (strong) entity types and a one-to-many relationship type. We first show that the generic ER schema can be partitioned into unique segments, which we call ER-construct-units, where each one represents a unique semantic meaning in the real world. The ER schema can be viewed as being made up of the set of ER-construct-units; the ER schema and the ER-construct-unit set are equivalent. Second, we transform the generic ER schema into its corresponding RDS using the modified transformation algorithm, and show that the RDS can also be partitioned into unique segments, which we call Relation-schema-units. Next, we show that a mapping that is one-to-one and onto exists from the set of ER-construct-units to the set of Relation-schema-units. In conclusion, we show that every ER-construct-unit in the ER schema has its own unique Relation-schema-unit in the RDS; therefore, any piece of information represented in the ER schema has its own unique representation in the RDS. The proof can be expanded to any generic ER schema even bigger than the current one, and the same result can be obtained. Since the generic ER schema is a generalized representation of any real-world ER schema, we conclude that information represented in any ER schema is preserved in its corresponding RDS.