International Journal of Research in Engineering and Science (IJRES)
ISSN (Online): 2320-9364, ISSN (Print): 2320-9356
www.ijres.org Volume 1, Issue 3, July 2013, PP. 16-26
Web Services Based Integration Tool for Heterogeneous
Databases
Amin Noaman, Fathy Essia, Mostafa Salah
Faculty of Computing & Information Technology, King Abdulaziz University
Abstract: In this paper we introduce an integration system that consists of two subsystems (tools): an integration sub-system (tool) and a query sub-system (tool). The integration tool has been built for integrating data from different data stores (databases) that were created with different database engines. The query sub-system (tool) has been built to help a user query in a structured natural language or in structured query language. The integration system has been built on web services technology so that it is adaptable, reusable, maintainable, and distributed. The integration subsystem collects data from heterogeneous data sources, unifies them based on an ontology, and stores the unified data in a data warehouse whose schema is generated automatically by the tool. The integration tool is database-engine independent, domain independent, and based on an ontology scheme. The query tool has been built to accept requests from a user, manipulate data in the data warehouse, and return the results to the user. The query tool generates queries automatically based on the user requirements and the data warehouse schema. The user can write his query in structured natural language or structured query language. The system has been implemented and tested.
I. Introduction
The Web contains abundant repositories of information, which makes selecting just the information needed for an application a great challenge, since computer applications understand only the structure and layout of Web pages and have no access to their intended meaning. There are two traditional approaches to enabling users to get information from the Web by querying a database: enhancing query languages to be Web-aware, and virtually extracting Web pages with wrappers. A newer alternative approach, proposed by Embley [1] of the data extraction group at Brigham Young University, is the Semantic Web.
The Semantic Web aims to enhance the existing Web with a layer of machine-interpretable metadata. The American Heritage Dictionary defines semantics as "the meaning or the interpretation of a word, sentence, or other language form" [1].
The emergence of the Semantic Web will simplify and improve knowledge reuse on the Web and will change the way people access knowledge; agents will become primary consumers of knowledge. By combining knowledge about their user and his needs with information collected from the Semantic Web, agents can perform tasks via Web services [2] automatically. Agents can thus understand and reason about information and use it to meet the user's needs. They can provide assistance using ontologies, axioms, and languages such as the DARPA Agent Markup Language, which are cornerstones of the Semantic Web.
Data interoperability occurs when an application can use data from one or more disparate data sources.
With the amount of data being produced, stored, and exchanged in the world today, there are numerous
situations for which achieving data interoperability is essential. For example, multiple organizations with their own data storage schemas, such as regional educational services, might merge into one larger organization and consolidate their data. A head office may require its various organizations to submit annual performance data in a particular format, and this format may change from year to year. Two separate organizations holding data about a certain topic may wish to exchange or merge this data without sharing private data about their employees and finances. Finally, a supplier may wish to exchange data with a manufacturer.
The common issue in these examples is that the data to be exchanged and/or integrated comes from separate sources that were developed independently. This means that the data might reside in completely different formats: some data might be stored in a relational database, other data as XML files, and even textual sources can provide data.
In addition, because each data schema is designed independently, these schemas will be different - even
if they are expressed in the same data model (e.g. the relational data model) and describe the same domain.
In data integration, a mediated schema is used to provide a uniform query interface for multiple data sources.
The mediated schema approach is often used in enterprise data integration, for example when various branches
of the same organization merge. In this approach, the data stays in the individual source databases. Queries are
expressed in terms of the mediated schema, while wrappers containing schema mappings between the source
schemas and the mediated schema translate the queries and the results back and forth.
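As a toy illustration of the wrapper role described above, the following sketch rewrites a query phrased against a mediated schema into the schema of one particular source. The table and column names (books, src_books, book_title) are invented for the example and do not come from the paper.

```java
/**
 * Toy illustration of the wrapper idea: a query phrased against the mediated
 * schema (table "books") is rewritten for one source whose schema differs
 * (table "src_books", column "book_title"). All names are invented for this sketch.
 */
public class MediatedQueryWrapperSketch {
    // Rewrites a mediated-schema query into the source's own schema.
    static String rewriteForSource(String mediatedQuery) {
        return mediatedQuery
                .replace("books", "src_books")
                .replace("title", "book_title");
    }

    public static void main(String[] args) {
        String q = "SELECT title FROM books WHERE isbn = '111'";
        System.out.println(rewriteForSource(q));
        // prints: SELECT book_title FROM src_books WHERE isbn = '111'
    }
}
```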
Other approaches, often used in Web applications, are peer-to-peer data integration, where pairwise mappings are made directly between a number of individual data sources, and data exchange, where a mapping is created between a source and a target schema with the goal of moving all of the data from the source database to the target database.
Schema mappings are key to achieving data interoperability. A schema mapping is a precise specification of the relationships between the elements of a source schema and the elements of a target schema. This specification makes it possible to transform data from the source schema to fit into the target schema. Executable schema mappings are schema mappings that can take an instance of a source schema and reshape it to meet the syntax and integrity constraints of a target schema. The source and target schemas need not be in the same format; for instance, the source database might be a relational database while the target database could be stored in XML. Executable schema mappings can be expressed in any executable language that can be used to extract data from or load data into the databases, such as SQL or XQuery.
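For the relational case, an executable mapping can be as simple as an INSERT ... SELECT statement. The sketch below runs such a mapping over JDBC; the connection URL and the relation names (src_books, books) are hypothetical and chosen only to illustrate the idea.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Illustrative executable schema mapping: rows of a hypothetical source
 * relation src_books(isbn, book_title, writer) are reshaped to fit a target
 * relation books(isbn, title, author). Table and column names, and the JDBC
 * URL, are invented for this sketch; both tables are assumed to exist already.
 */
public class ExecutableMappingSketch {
    public static void main(String[] args) throws Exception {
        // A single connection is assumed here; in practice the source and the
        // target may live in different engines and need two connections.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            String mapping =
                "INSERT INTO books (isbn, title, author) " +
                "SELECT isbn, book_title, writer FROM src_books";
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(mapping);   // executes the mapping as plain SQL
            }
        }
    }
}
```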
Schema matching involves finding correspondences between pairs of individual elements of the source and target schemas. Taking as input a source schema S and a target schema T, this step outputs a multimapping that consists of pairs of correspondences between elements of S and elements of T (where elements, in a relational database, are the attributes of relations). The methods used for this step rely on clues from the labels of the schema attributes [3], the structures of the schemas [4], and occasionally lexical comparisons to words present in external taxonomies [5]. The most effective schema matchers, such as LSD [6], use a hybrid of these techniques. Even the best schema matchers do not achieve 100% accuracy; for example, Doan et al. [6] reported 71%-92% accuracy for their hybrid matcher, LSD, and noted two specific characteristics of schemas that prevented the accuracy from being higher: ambiguity in the meaning of labels, and the inability to anticipate every type of format for the data. These deficiencies in accuracy are propagated to the next step in schema mapping creation, mapping generation.
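To make the label-based clue concrete, the sketch below scores attribute pairs by a normalized edit distance and keeps pairs above a threshold. This is only a minimal illustration of label matching; it is not the hybrid LSD approach of [6], and the attribute lists and threshold are chosen arbitrarily.

```java
import java.util.Arrays;
import java.util.List;

/** Minimal label-based matcher sketch; not the hybrid LSD matcher of [6]. */
public class LabelMatcherSketch {
    // Classic Levenshtein edit distance between two labels.
    static int edit(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Similarity in [0,1] over lower-cased labels.
    static double sim(String a, String b) {
        a = a.toLowerCase(); b = b.toLowerCase();
        int max = Math.max(a.length(), b.length());
        return max == 0 ? 1.0 : 1.0 - (double) edit(a, b) / max;
    }

    public static void main(String[] args) {
        List<String> source = Arrays.asList("Isbn", "Title", "Auther");   // example source attributes
        List<String> target = Arrays.asList("isbn", "title", "author");   // example target attributes
        for (String s : source)
            for (String t : target)
                if (sim(s, t) >= 0.6)                                     // ad-hoc threshold for the sketch
                    System.out.println(s + " <-> " + t + "  sim=" + sim(s, t));
    }
}
```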
II. Related Work
Brend Amann et al. [7] proposed an ontology mediator architecture for the querying and integration of XML data sources. Cruz et al. [8] proposed a mediator to provide data interoperability among different databases. Philipi et al. [9] introduced an architecture for ontology-driven data integration based on XML technology.
Others presented solutions to enhance metadata representation, as in Hunter et al. [10], by combining RDF and XML schemas, and Ngmnij et al. [11], by using a metadata dictionary to resolve some semantic heterogeneity.
To solve some problems in query processing, Baoshi et al. [12] presented a query translation approach, Corby et al. [13] addressed the problem of a dedicated ontology-based query language, and Saleh [14] presented a semantic framework that addresses the query mapping approach.
E. Mena et al. [15] presented OBSERVER, an approach for query processing in global information systems. Yingge et al. [16] presented the SDMS system, which utilizes software agents and Semantic Web technologies; they addressed the problem of improving the efficiency of information management across weak data. A data warehousing approach with an ontology-based query facility was presented by Munir et al. [17].
Finally, Al-Ghamdi et al. [18] developed an ontology-based software system to semantically integrate heterogeneous data sources such as XML and RDF and to resolve some of the conflicts that occur in these sources. They used an ontology-based agent framework to retrieve data from distributed heterogeneous data sources and implemented this framework using modules and libraries of Java, Aglet, Jena, and AltovaXML.
III. The Integration System
High-level architecture
Figure 1 illustrates the high-level architecture of the integration system. The figure shows that the integration system has two tools (sub-systems): the query tool and the integration tool. The integration tool reads the schema of each data store and the ontology-2 information and builds the data warehouse schema (structure) for those data stores. The query tool receives a structured natural language query, analyzes it based on the ontology-1 information, and builds a SQL query to retrieve the required data from the data warehouse.
Fig. 1: High-level architecture of the system (Query Tool, Integration Tool, Data Warehouse, Data Store #1 ... Data Store #n, Ontology-1, Ontology-2)
Figure 2: The integrator sub-system
The Architecture of the Integration System
The system consists of two sub-systems, the integration sub-system and the query sub-system, as shown in Figure 2. The integration sub-system has a set of web services: the retrieving web services, the gathering web service, the data warehouse schema-web-service-generator, and the data integrator web service. In addition, the integration sub-system contains ontology-2, which includes the domain data dictionary.
In this architecture, retrieving web-service-1 through retrieving web-service-n retrieve the updated or new data from Data Source-1 through Data Source-n and return the data to the gathering web service. Each retrieving web service checks its corresponding data source offline to retrieve updated and new data. The gathering web service receives the retrieved data from all sources and stores it in the Temporally-Big-data-sources store. This store holds all tables of all data sources, but all of them are stored in the format of one database engine; that is, the gathering web service converts the retrieved tables from their different formats into the format of the Temporally-Big-data-sources store. Ontology-2 holds the data dictionary of the application domain. The data dictionary is stored and updated by the business analyst.
The data warehouse schema-web-service-generator reads the data dictionary from ontology-2 and the structure of all tables in the Temporally-Big-data-sources store, and produces the structure of the data warehouse together with the mapping metadata of the current application, which is stored in the metadata mapping table. The data integrator web service integrates the data in the Temporally-Big-data-sources store based on the metadata mapping table and stores the integrated data in the data warehouse.
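The paper does not list the concrete service interfaces, so the sketch below is only an assumed outline of the retrieving and gathering services using JAX-WS annotations; the method names follow those used in the text and the sequence diagrams, while the parameter and return types are invented.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import java.util.List;

/**
 * Assumed outline of two of the integration sub-system's service contracts.
 * JAX-WS is an assumption; the paper only states that the prototype uses
 * Java and web services.
 */
@WebService
interface RetrievingWebService {
    // Checks its data source offline and returns new or updated rows,
    // still in the source's native format (return type assumed).
    @WebMethod
    List<String> fetchNewOrUpdated();

    // Returns the schema of the linked data store (used when building
    // the data warehouse schema, see Fig. 4b).
    @WebMethod
    String retrieveDataStoreSchema();
}

@WebService
interface GatheringWebService {
    // Converts retrieved rows into the single engine format of the
    // Temporally-Big-data-sources store and saves them there.
    @WebMethod
    void storeRetrievedData(List<String> rows);

    // Collects the schemas of all data stores via the retrieving services.
    @WebMethod
    List<String> collectSchemasOfDataStores();
}
```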
The query sub-system contains a set of web services in addition to ontology-1. The web services are the user interface web service, the query generator web service, and the data warehouse web service. The user interface web service creates a user interface where the user enters his query in formatted (syntax-constrained) natural language.
The query generator web service receives the formatted query from the user interface web service and, based on the triples stored in ontology-1, creates an SQL statement.
The data warehouse web service receives the created SQL statement and retrieves the required data. The retrieved data is returned to the user interface web service to be displayed to the user.
The data warehouse is shared between the two sub-systems, and it is built automatically on a specific database engine such as SQL Server, DB2, or others.
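As a rough illustration of the query generation step, the sketch below resolves domain terms of a structured natural language query to warehouse column names through a small term dictionary standing in for the ontology-1 triples. The triple representation, the example phrases, and the table name are assumptions; only the idea of dictionary-driven translation comes from the paper.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of ontology-driven query generation: domain terms from the structured
 * natural language query are resolved to warehouse columns through a
 * term -> column map standing in for the ontology-1 triples (format assumed).
 */
public class QueryGeneratorSketch {
    private final Map<String, String> termToColumn = new HashMap<>();

    public QueryGeneratorSketch() {
        termToColumn.put("book number", "isbn");
        termToColumn.put("book name", "title");
        termToColumn.put("writer", "auther");   // column name as spelled in the unified schema
    }

    /** e.g. "get book name where book number = 111" -> SQL over the warehouse. */
    public String generateSql(String snlQuery) {
        String sql = snlQuery.replace("get", "SELECT")
                             .replace("where", "FROM books WHERE"); // "books" table name is assumed
        for (Map.Entry<String, String> e : termToColumn.entrySet()) {
            sql = sql.replace(e.getKey(), e.getValue());
        }
        return sql;
    }

    public static void main(String[] args) {
        System.out.println(new QueryGeneratorSketch()
                .generateSql("get book name where book number = 111"));
        // prints: SELECT title FROM books WHERE isbn = 111
    }
}
```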
Figure 3 shows a sequence diagram for the query sub-system, illustrating its dynamic behavior. In the sequence diagram, the user writes the query in natural language, and it is accepted by the method write-snl-query(), which is implemented in the user interface web service.
The Generate-SQL-s(snl) message is sent by the user interface web service and received by the query generator web service, which generates an SQL (structured query language) statement based on ontology-1.
The generated SQL statement is sent with the Execute(SQL-s) message, which is accepted by the data warehouse web service. The data warehouse web service executes the statement by retrieving data from the data warehouse. The retrieved data is sent as the actual argument of the message display-results(r-data), which is received by the user interface web service to be displayed. The sequence diagram of the integration sub-system is shown in Figure 4a.
Fig. 3: Sequence diagram of the query tool (user interface web service, query generator web service, data warehouse web service; messages: Write-snl-query(), Generate-SQL-s(snl), Execute(SQL-s), Display-results(r-data))
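A minimal sketch of the message flow of Fig. 3 is given below, with the three web services reduced to plain Java interfaces so the sequence of calls is visible in code. The parameter and return types are assumptions, since the paper specifies only the message names.

```java
import java.util.List;

/**
 * Sketch of the Fig. 3 message flow. Method names follow the sequence diagram;
 * the types and the wiring are assumed for illustration.
 */
public class QueryFlowSketch {
    interface QueryGeneratorService { String generateSqlS(String snl); }
    interface DataWarehouseService  { List<String[]> execute(String sqlStatement); }

    /** Plays the role of the user interface web service. */
    static class UserInterfaceService {
        private final QueryGeneratorService generator;
        private final DataWarehouseService warehouse;

        UserInterfaceService(QueryGeneratorService g, DataWarehouseService w) {
            this.generator = g;
            this.warehouse = w;
        }

        // write-snl-query(): accept the structured natural language query,
        // have it translated to SQL, execute it, and display the result rows.
        void writeSnlQuery(String snl) {
            String sql = generator.generateSqlS(snl);      // Generate-SQL-s(snl)
            List<String[]> rows = warehouse.execute(sql);  // Execute(SQL-s)
            displayResults(rows);                          // Display-results(r-data)
        }

        void displayResults(List<String[]> rows) {
            rows.forEach(r -> System.out.println(String.join(" | ", r)));
        }
    }
}
```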
Fig. 4a: Sequence diagram of the integration tool
In the sequence diagram (Figure 4a), the retrieving web service checks a timer. If the timer value equals zero, the retrieveNewData() method is called to retrieve all new data from the data sources. The storeRetrievedData(data) method of the gathering web service receives the retrieved data and stores it in the Temporally-Big-data-sources store. The data integrator web service receives integrateDataAndStoreInDw() to integrate the gathered data based on the metadata mapping table and store it in the data warehouse.
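The periodic behavior above can be sketched as a simple timer task that chains the three services. The 60-second interval and the service wiring are assumptions; the paper only states that a timer is checked.

```java
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

/**
 * Sketch of the Fig. 4a loop: on each timer tick a retrieving service pulls
 * new data, the gathering service stores it in the temporary store, and the
 * data integrator moves it into the warehouse. Interval and types are assumed.
 */
public class IntegrationLoopSketch {
    interface RetrievingService { List<String> retrieveNewData(); }
    interface GatheringService  { void storeRetrievedData(List<String> data); }
    interface DataIntegrator    { void integrateDataAndStoreInDw(); }

    static void schedule(RetrievingService r, GatheringService g, DataIntegrator d) {
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                List<String> data = r.retrieveNewData();   // retrieveNewData()
                g.storeRetrievedData(data);                // storeRetrievedData(data)
                d.integrateDataAndStoreInDw();             // integrateDataAndStoreInDw()
            }
        }, 0, 60_000);  // every 60 seconds (interval chosen arbitrarily for the sketch)
    }
}
```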
Figure 4b illustrates the sequence diagram of building the data warehouse schema. In the diagram, the data warehouse schema-web-service-generator receives the buildDwSchema() message to build a new data warehouse. It then sends the collectSchemasOfDataStores() message to the gathering web service to retrieve the schemas of all data stores. Each retrieving web service receives the retrieveDataStoreSchema() message from the gathering web service and returns the schema of its linked data store. The schemas of all data stores are returned to the data warehouse schema-web-service-generator, which reads the data from ontology-2 and finally creates a new data warehouse.
Fig. 4b: Sequence diagram of building the data warehouse schema
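The schema-building step of Fig. 4b can be pictured as collecting attribute names from every source schema, unifying them through the data dictionary of ontology-2, and emitting a CREATE TABLE statement. The sketch below does exactly that; the dictionary representation and the single unified table are assumptions made for illustration.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of Fig. 4b: source attributes are mapped through the domain data
 * dictionary (standing in for ontology-2) to canonical names, then a
 * warehouse table is generated. Dictionary format and table layout are assumed.
 */
public class DwSchemaGeneratorSketch {

    /** sourceSchemas: data store -> its attribute names; dictionary: raw name -> canonical name. */
    static String buildDwSchema(Map<String, List<String>> sourceSchemas,
                                Map<String, String> dictionary) {
        Set<String> columns = new LinkedHashSet<>();
        for (List<String> attrs : sourceSchemas.values()) {
            for (String attr : attrs) {
                // Unify each source attribute through the data dictionary;
                // unknown attributes keep their original (lower-cased) name.
                columns.add(dictionary.getOrDefault(attr.toLowerCase(), attr.toLowerCase()));
            }
        }
        columns.add("source");  // the unified schema also records which store a row came from
        return "CREATE TABLE unified_books (" + String.join(" VARCHAR(255), ", columns)
                + " VARCHAR(255))";
    }
}
```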
IV. Implementation
A prototype has been implemented using Java and the ASP.NET environment. Sun Microsystems provides Java Development Kits (JSDK) for many platforms, in standard and enterprise editions. The data sources used in this prototype are XML and RDF, as they are dominant in data interchange.
The XML data source has the following schema structure (Figure 5a): Isbn, Title, Auther, Publisher, Date, Version.
Fig. 5: XML data source (a: schema structure; b: data sample)
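For completeness, a small sketch of reading such an XML source with the standard DOM API is given below; the file name and the enclosing book elements are assumptions, since the paper shows only the field names of Figure 5.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/**
 * Sketch of reading the XML book source. Element names follow the schema of
 * Figure 5 (Isbn, Title, Auther, ...); the file name "books.xml" and the
 * <book> record element are assumed.
 */
public class XmlSourceReaderSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("books.xml");                         // hypothetical file name
        NodeList books = doc.getElementsByTagName("book");   // assumed record element
        for (int i = 0; i < books.getLength(); i++) {
            Element b = (Element) books.item(i);
            String isbn  = b.getElementsByTagName("Isbn").item(0).getTextContent();
            String title = b.getElementsByTagName("Title").item(0).getTextContent();
            System.out.println(isbn + " : " + title);
        }
    }
}
```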
The RDF data source has the following schema structure (Figure 6a): Creator, Title, Identifier, Publisher, Date.
Fig. 6: RDF data source (a: schema structure; b: data sample)
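A matching sketch for the RDF source is shown below. The paper does not say which RDF library the prototype used, so Apache Jena (mentioned for [18]) is assumed here, as is the file name; the properties of Figure 6 correspond to Dublin Core terms.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;

/**
 * Sketch of reading the RDF book source with Apache Jena (library choice and
 * file name are assumptions). Each statement is a subject-predicate-object
 * triple over properties such as Creator, Title, Identifier, Publisher, Date.
 */
public class RdfSourceReaderSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("books.rdf");                 // hypothetical file name
        StmtIterator it = model.listStatements();
        while (it.hasNext()) {
            Statement s = it.nextStatement();    // one subject-predicate-object triple
            System.out.println(s.getSubject() + "  " +
                               s.getPredicate().getLocalName() + "  " +
                               s.getObject());
        }
    }
}
```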
After that, a unified data source is generated from all the different input sources. Its schema (Figure 7a) contains the fields isbn, title, auther, publisher, date, version, and source.
Fig. 7: Unified data source (a: schema structure; b: data sample)
We can then request information from the unified data source based on a certain information item. For example, we can query book information based on its ISBN, as in Figure 8.
Fig. 8: Querying the unified data source by book ISBN
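The query of Fig. 8 amounts to a parameterized lookup on the unified table. The JDBC sketch below shows that lookup; the connection URL, table name, and example ISBN are assumptions, while the column names follow the unified schema of Figure 7.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Sketch of the Fig. 8 query: look a book up in the unified data source by
 * its ISBN. The JDBC URL, table name, and the example ISBN are hypothetical;
 * the column names follow the unified schema of Figure 7.
 */
public class IsbnLookupSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=dw";   // assumed connection URL
        try (Connection conn = DriverManager.getConnection(url)) {
            String sql = "SELECT title, auther, publisher, source "
                       + "FROM unified_books WHERE isbn = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "0-13-110362-8");                    // example ISBN
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("title")
                                + " (" + rs.getString("source") + ")");
                    }
                }
            }
        }
    }
}
```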
V. Conclusion
In this research we have built an integration system that collects data from different data sources that were generated by different database engines. The system also helps users request data using structured natural language or structured query language. The system consists of two sub-systems (tools), an integration sub-system (tool) and a query sub-system (tool), and it has been built on web services technology.
The integration tool has been built as multiple web services for integrating data from different data stores (databases) that were created with different database engines; there is a retrieving web service for each engine. The integration sub-system (tool) creates the schema of the data warehouse automatically based on the domain ontology associated with the tool. This means that the time needed to build the data warehouse is reduced and the overall performance of the application is increased.
The query sub-system (tool) has been built to help a user query in structured natural language or structured query language. The query sub-system uses a local ontology associated with the system to understand the structured natural language query and convert it into SQL to retrieve the results from the data warehouse.
The system has been implemented and tested. It has several advantages: it is database-engine independent, domain independent, and smart, because it is based on an ontology scheme.
Our system also has good attributes: adaptability, reusability, and distribution. The system is adaptable because it can be used with any distributed system (it is platform and distributed-system independent). It is reusable because its web services can be used in building similar tools without recompilation. The system is distributed in that its web services can be deployed on different machines in different locations. The system also satisfies two non-functional requirements: scalability and performance. Scalability means that the system can serve any number of users without sacrificing performance, because the web services can be deployed on additional machines to reduce the load on the existing machines.
References
[1] David W. Embley. "Toward Semantic Understanding—an Approach Based on Information Extraction Ontologies". In Proceedings of the Fifteenth Australasian Database Conference (ADC'04), USA, 2004.
[2] Andreas Heß, Nicholas Kushmerick. "Learning to Attach Semantic Metadata to Web Services". International Semantic Web Conference, 2003.
[3] W.W. Cohen, P. Ravikumar, and S.E. Fienberg. "A comparison of string distance metrics for name matching tasks". Proceedings of the IJCAI-2003 Workshop on Information Integration on the Web (IIWeb-03), 2003.
[4] S. Melnik, H. Garcia-Molina, and E. Rahm. "Similarity flooding: a versatile graph matching algorithm and its application to schema matching". Proceedings of the 18th International Conference on Data Engineering, pages 117–128, 2002.
[5] T. Pedersen, S. Patwardhan, and J. Michelizzi. "WordNet::Similarity - measuring the relatedness of concepts". Proceedings of the National Conference on Artificial Intelligence, 19:1024–1025, 2004.
[6] A. Doan, J. Madhavan, P. Domingos, and A. Halevy. "Ontology matching: a machine learning approach", pages 385–516. Springer Verlag, Berlin, Heidelberg, New York, 2003.
[7] Brend Amann, Catriel Beeri, Irini Fundulaki, Michel Scholl. "Querying XML Sources Using an Ontology-Based Mediator". In On the Move to Meaningful Internet Systems, Confederated International Conferences DOA, CoopIS and ODBASE, pages 429–448, Springer-Verlag, 2002.
[8] Isabel Cruz, Huiyong Xiao, Feihong Hsu. "An Ontology-Based Framework for XML Semantic Integration". In 8th International Database Engineering and Applications Symposium (IDEAS 2004).
[9] Stephan Philipi, Jacob Kohler. "Using XML Technology for the Ontology-Based Semantic Integration of Life Science Databases". IEEE Transactions on Information Technology in Biomedicine, vol. 8, no. 2, June 2004.
[10] Jane Hunter, Carl Lagoze. "Combining RDF and XML Schemas to Enhance Interoperability Between Metadata Application Profiles". ACM, May 2001.
[11] Ngmnij Arch-int, Peraphon Sophatsathit, Yuefeng Li. "Ontology-Based Metadata Dictionary for Integrating Heterogeneous Information Sources on the WWW". Australian Computer Society Inc., 2003.
[12] Baoshi Yan, Robert MacGregor. "Translating Naive User Queries on the Semantic Web". Proceedings of the Semantic Integration Workshop, ISWC 2003.
[13] Olivier Corby, Rose Dieng-Kuntz, Fabien Gandon. "Approximate Query Processing Based on Ontologies". IEEE Intelligent Systems, IEEE, 2006.
[14] Mostafa Saleh. "Semantic Query in Heterogeneous Web Data Sources". International Journal of Computers and their Applications, USA, March 2008.
[15] E. Mena, V. Kashyap, A. Sheth, A. Illarramendi. "OBSERVER: An Approach for Query Processing in Global Information Systems Based on Interoperation Across Pre-existing Ontologies". International Journal on Distributed and Parallel Databases (DAPD), ISSN 0926-8782, vol. 8, no. 2, April 2000.
[16] Yingge A. Wang, Elhadi Shakshuki. "An Agent-based Semantic Web Department Content Management System". ITHET 6th Annual International Conference, IEEE, 2005.
[17] K. Munir, M. Odeh, R. McClatchey, S. Khan, I. Habib. "Semantic Information Retrieval from Distributed Heterogeneous Data Sources". CCS Research Centre, University of the West of England, 2007.
[18] N. Al-Ghamdi, M. Saleh, and F. Eassa. "Ontology-Based Query in Heterogeneous & Distributed Data Sources". International Journal of Electrical & Computer Sciences IJECS-IJENS, vol. 10, no. 06, 2010.
