#241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024

        http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028



S.NO    TITLE - 2010                      ABSTRACT                                                       DOMAIN       PLATFORM

   1.        A Machine Learning           TCP throughput prediction is an important capability for       Networking   .net
             Approach to TCP              networks where multiple paths exist between data
             Throughput                   senders and receivers. In this paper, we describe a new
             Prediction                   lightweight method for TCP throughput prediction. Our
                                          predictor uses Support Vector Regression (SVR);
                                          prediction is based on both prior file transfer history and
                                          measurements of simple path properties. We evaluate
                                          our predictor in a laboratory setting where ground truth
                                          can be measured with perfect accuracy. We report the
                                          performance of our predictor for oracular and practical
                                          measurements of path properties over a wide range of
                                          traffic conditions and transfer sizes. For bulk transfers in
                                          heavy traffic using oracular measurements, TCP
                                          throughput is predicted within 10% of the actual value
                                          87% of the time, representing nearly a threefold
                                          improvement in accuracy over prior history-based
                                          methods. For practical measurements of path properties,
                                          predictions can be made within 10% of the actual value
                                          nearly 50% of the time, approximately a 60%
                                          improvement over history-based methods, and with
                                          much lower measurement traffic overhead. We
                                          implement our predictor in a tool called PathPerf, test it
                                          in the wide area, and show that PathPerf predicts TCP
                                          throughput accurately over diverse wide area paths.
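
   As a companion to this abstract, here is a minimal sketch of the history-based
   baseline that the SVR predictor is compared against: an exponentially weighted
   moving average (EWMA) over past transfer throughputs. The SVR predictor itself
   would require a regression library; the smoothing factor ALPHA and all class and
   method names below are illustrative assumptions, not taken from the paper or
   from PathPerf.

    import java.util.List;

    public class HistoryThroughputPredictor {
        private static final double ALPHA = 0.3;   // assumed smoothing factor
        private double estimate = -1;              // current EWMA estimate, in Mbps

        // Feed one observed file-transfer throughput (Mbps) into the history.
        public void observe(double throughputMbps) {
            estimate = (estimate < 0)
                    ? throughputMbps
                    : ALPHA * throughputMbps + (1 - ALPHA) * estimate;
        }

        // Predict the next transfer's throughput from history alone.
        public double predict() {
            if (estimate < 0) throw new IllegalStateException("no history yet");
            return estimate;
        }

        public static void main(String[] args) {
            HistoryThroughputPredictor p = new HistoryThroughputPredictor();
            for (double t : List.of(9.5, 10.2, 8.9, 9.8)) p.observe(t);
            System.out.printf("predicted throughput: %.2f Mbps%n", p.predict());
        }
    }
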
   2. Feedback-Based                      A framework for designing feedback-based scheduling                         .net
      Scheduling for Load-                algorithms is proposed for elegantly solving the
      Balanced Two-Stage                  notorious packet missequencing problem of a load-
      Switches                            balanced switch. Unlike existing approaches, we show
                                          that the efforts made in load balancing and keeping
                                          packets in order can complement each other. Specifically,
                                          at each middle-stage port between the two switch fabrics
                                          of a load-balanced switch, only a single-packet buffer for
                                          each virtual output queueing (VOQ) is required. Although
                                          packets belonging to the same flow pass through
                                          different middle-stage VOQs, the delays they experience
                                          at different middle-stage ports will be identical. This is
                                          made possible by properly selecting and coordinating
                                          the two sequences of switch configurations to form a
                                          joint sequence with both staggered symmetry property
                                          and in-order packet delivery property. Based on the
                                          staggered symmetry property, an efficient feedback
                                          mechanism is designed to allow the right middle-stage
                                          port occupancy vector to be delivered to the right input
                                          port at the right time. As a result, the performance of
                                          load balancing as well as the switch throughput is
                                          significantly improved. We further extend this feedback
                                          mechanism to support the multicabinet implementation
                                          of a load-balanced switch, where the propagation delay
                                          between switch linecards and switch fabrics is
                                          nonnegligible. As compared to the existing load-balanced
                                          switch architectures and scheduling algorithms, our
                                          solutions impose a modest requirement on switch
                                          hardware, but consistently yield better delay-throughput
                                          performance. Last but not least, some extensions and
                                          refinements are made to address the scalability,
                                          implementation, and fairness issues of our solutions.
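
   The staggered symmetry property builds on the periodic configuration sequences
   of a load-balanced switch. The sketch below shows only the plain round-robin
   sequences such a switch cycles through: at slot t, stage 1 connects input i to
   middle port (i + t) mod N, and stage 2 connects middle port m to output
   (m + t) mod N. The paper's contribution is coordinating the two sequences into
   a joint sequence with staggered symmetry, which is not reproduced here; the
   offsets used are illustrative assumptions.

    public class RoundRobinFabric {
        static int stage1MiddlePort(int input, int slot, int n)  { return (input + slot) % n; }
        static int stage2OutputPort(int middle, int slot, int n) { return (middle + slot) % n; }

        public static void main(String[] args) {
            int n = 4;
            for (int t = 0; t < n; t++) {
                System.out.print("slot " + t + ":");
                for (int i = 0; i < n; i++) {
                    int mid = stage1MiddlePort(i, t, n);
                    System.out.print("  in" + i + "->mid" + mid
                            + "->out" + stage2OutputPort(mid, t, n));
                }
                System.out.println();
            }
        }
    }
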
  3. Trust management in                In this paper, we propose a human-based model which                    .net
     mobile ad hoc networks             builds a trust relationship between nodes in an ad hoc
     using a scalable maturity          network. The trust is based on previous individual
     based model                        experiences and on the recommendations of others. We
                                        present the Recommendation Exchange Protocol (REP)
                                        which allows nodes to exchange recommendations about
                                        their neighbors. Our proposal does not require
                                        disseminating the trust information over the entire
                                        network. Instead, nodes only need to keep and exchange
                                        trust information about nodes within the radio range.
                                        Without the need for a global trust knowledge, our
                                        proposal scales well for large networks while still
                                        reducing the number of exchanged messages and
                                        therefore the energy consumption. In addition, we
                                        mitigate the effect of colluding attacks composed of liars
                                        in the network. A key concept we introduce is the
                                        relationship maturity, which allows nodes to improve
                                        the efficiency of the proposed model for mobile
                                        scenarios. We show the correctness of our model in a
                                        single-hop network through simulations. We also extend
                                        the analysis to mobile multihop networks, showing the
                                        benefits of the maturity relationship concept. We
                                        evaluate the impact of malicious nodes that send false
                                        recommendations to degrade the efficiency of the trust
                                        model. Finally, we analyze the performance of the REP
                                        protocol and show its scalability. We show that our
                                        implementation of REP can significantly reduce the
                                        number of exchanged messages.
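
   A minimal sketch of how relationship maturity can weight neighbor
   recommendations when combined with a node's own experience. The weighting
   rule and the ALPHA constant below are illustrative assumptions, not the exact
   REP formula from the paper.

    import java.util.List;

    public class MaturityTrust {
        // One neighbor's recommendation (trust value in [0,1]) plus the
        // relationship maturity: how long we have known that neighbor.
        record Recommendation(double trustValue, double maturity) {}

        // Assumed combination rule: own experience weighted by alpha, the rest
        // a maturity-weighted average of the neighbors' recommendations.
        static double combine(double ownExperience, List<Recommendation> recs, double alpha) {
            double num = 0, den = 0;
            for (Recommendation r : recs) { num += r.maturity() * r.trustValue(); den += r.maturity(); }
            double recommended = den == 0 ? ownExperience : num / den;
            return alpha * ownExperience + (1 - alpha) * recommended;
        }

        public static void main(String[] args) {
            List<Recommendation> recs = List.of(
                    new Recommendation(0.9, 120),   // long-known neighbor, trusts target
                    new Recommendation(0.1, 5));    // recent neighbor, possibly a liar
            System.out.printf("trust = %.3f%n", combine(0.7, recs, 0.5));
        }
    }
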
  4. Online social networks             An online social network (OSN) service focuses on       Network   .net
                                        social relations among people, e.g., users who share
                                        interests and activities. This project covers OSN
                                        applications such as location-based social network
                                        services, the security and privacy of OSNs, and human
                                        mobility models based on social networks. A social
                                        network service essentially consists of a representation
                                        of each user (often a profile), his/her social links, and a
                                        variety of additional services. Most social network
                                        services are web based and provide means for users to
                                        interact over the Internet, such as e-mail and instant
                                        messaging. Although online community services are
                                        sometimes considered a social network, online
                                        community services are group-centered. Social
                                        networking sites allow users to share ideas, activities,
                                        events, and interests within their individual networks.

  5. SYNCHRONIZATION OF                 File synchronization in computing is the process of
     LOCAL DESKTOP TO                   making sure that files in two or more locations are
     INTERNET USING FILE                updated through certain rules. In one-way file
     TRANSFER PROTOCOL                  synchronization, also called mirroring, updated files are
                                        copied from a 'source' location to one or more 'target'
                                        locations, but no files are copied back to the source
                                        location. In two-way file synchronization, updated files
                                        are copied in both directions, usually with the purpose of
                                        keeping the two locations identical to each other. In this
                                        article, the term synchronization refers exclusively to
                                        two-way file synchronization.
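
   A minimal sketch of one-way synchronization (mirroring) as defined above,
   using the local file system: every file that is missing or older at the
   target is copied from the source, and nothing is copied back. Two-way
   synchronization would add a comparable pass in the other direction plus
   conflict handling. The paths are placeholders, and the FTP transport named
   in the title is not shown.

    import java.io.IOException;
    import java.nio.file.*;

    public class MirrorSync {
        public static void mirror(Path source, Path target) throws IOException {
            try (var files = Files.walk(source)) {
                for (Path src : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                    Path dst = target.resolve(source.relativize(src));
                    // Copy when the target copy is missing or older than the source.
                    if (Files.notExists(dst)
                            || Files.getLastModifiedTime(src).compareTo(Files.getLastModifiedTime(dst)) > 0) {
                        Files.createDirectories(dst.getParent());
                        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING,
                                             StandardCopyOption.COPY_ATTRIBUTES);
                    }
                }
            }
        }

        public static void main(String[] args) throws IOException {
            mirror(Path.of("local-desktop"), Path.of("remote-mirror")); // placeholder paths
        }
    }
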
  6. Intrusion Detection for            Providing security in a distributed system requires more
     Grid and Cloud                     than user authentication with passwords or digital
     Computing                          certificates and confidentiality in data transmission. The
                                        Grid and Cloud Computing Intrusion Detection System
                                        integrates knowledge and behavior analysis to detect
                                        intrusions.

  7. Adaptive Physical                  Transmit power and carrier sense threshold are key
     Carrier Sense in                   MAC/PHY parameters in carrier sense multiple access
     Topology-Controlled                (CSMA) wireless networks. Transmit power control has
     Wireless Networks                  been extensively studied in the context of topology
                                        control. However, the effect of carrier sense threshold on
                                        topology control has not been properly investigated in
                                        spite of its crucial role. Our key motivation is that the
                                        performance of a topology-controlled network may
                                        become worse than that of a network without any
                                        topology control unless carrier sense threshold is
                                        properly chosen. In order to remedy this deficiency of
                                        conventional topology control, we present a framework
                                        on how to incorporate physical carrier sense into
                                        topology control. We identify that joint control of
                                        transmit power and carrier sense threshold can be
                                        efficiently divided into topology control and carrier
                                        sense adaptation. We devise a distributed carrier sense
                                        update algorithm (DCUA), by which each node drives its
                                        carrier sense threshold toward a desirable operating
                                        point in a fully distributed manner. We derive a sufficient
                                        condition for the convergence of DCUA. To demonstrate
                                        the utility of integrating physical carrier sense into
                                        topology control, we equip a localized topology control
                                        algorithm, LMST, with the capability of DCUA. Simulation
                                        studies show that LMST-DCUA significantly outperforms
                                        LMST and the standard IEEE 802.11 scheme without
                                        topology control.
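
   A hedged sketch of a distributed carrier-sense adaptation loop in the spirit
   of DCUA: each node nudges its carrier-sense threshold toward a desired
   operating point using only local measurements. The update rule, step size,
   and target margin are illustrative assumptions, not the paper's algorithm or
   its convergence condition.

    public class CarrierSenseAdaptation {
        static final double STEP = 0.1;            // assumed adaptation gain

        // One local update: raise the threshold (dBm) if the measured margin
        // over interference is below the desired margin, lower it otherwise.
        static double update(double thresholdDbm, double interferenceDbm, double desiredMarginDb) {
            double measuredMarginDb = thresholdDbm - interferenceDbm;
            return thresholdDbm + STEP * (desiredMarginDb - measuredMarginDb);
        }

        public static void main(String[] args) {
            double threshold = -82.0;              // 802.11-style default, dBm
            for (int i = 0; i < 20; i++)
                threshold = update(threshold, -95.0, 10.0);
            System.out.printf("converged threshold: %.2f dBm%n", threshold);
        }
    }
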

  8. On the Quality of Service of          We model the probabilistic behavior of a system            Dependable     .net
     Crash-Recovery Failure                comprising a failure detector and a monitored crash-       and Security
     Detectors                             recovery target. We extend failure detectors to take
                                           account of failure recovery in the target system. This
                                           involves extending QoS measures to include the
                                           recovery detection speed and proportion of failures
                                           detected. We also extend estimating the parameters of
                                           the failure detector to achieve a required QoS to
                                           configuring the crash-recovery failure detector. We
                                           investigate the impact of the dependability of the
                                           monitored process on the QoS of our failure detector.
                                           Our analysis indicates that variation in the MTTF and
                                           MTTR of the monitored process can have a significant
                                           impact on the QoS of our failure detector. Our analysis
                                           is supported by simulations that validate our
                                           theoretical results.
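
   A minimal sketch of the monitoring side of a crash-recovery failure detector:
   the target emits heartbeats, the detector suspects a crash when none arrives
   within a timeout, and it reports recovery when heartbeats resume. The timeout
   value is an assumption; the paper's contribution is configuring such
   parameters to meet a required QoS given the target's MTTF and MTTR.

    public class CrashRecoveryDetector {
        private final long timeoutMillis;
        private long lastHeartbeat;
        private boolean suspected = false;

        CrashRecoveryDetector(long timeoutMillis) {
            this.timeoutMillis = timeoutMillis;
            this.lastHeartbeat = System.currentTimeMillis();
        }

        synchronized void onHeartbeat() {
            lastHeartbeat = System.currentTimeMillis();
            if (suspected) {                       // recovery detected
                suspected = false;
                System.out.println("target recovered");
            }
        }

        synchronized void checkTimeout() {
            if (!suspected && System.currentTimeMillis() - lastHeartbeat > timeoutMillis) {
                suspected = true;                  // crash suspected
                System.out.println("target suspected to have crashed");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            CrashRecoveryDetector d = new CrashRecoveryDetector(100);
            d.onHeartbeat();
            Thread.sleep(150);
            d.checkTimeout();                      // prints: suspected
            d.onHeartbeat();                       // prints: recovered
        }
    }
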
  9. Layered Approach using                Intrusion detection faces two key challenges: an
     conditional random field               intrusion detection system must constantly detect
                                            malicious activities in a network, and it must perform
                                            efficiently to cope with the large amount of network
                                            traffic. We address these two issues of accuracy and
                                            efficiency using Conditional Random Fields and a
                                            Layered Approach. We show that high attack detection
                                            accuracy can be achieved by using Conditional Random
                                            Fields and high efficiency by implementing the Layered
                                            Approach. Experimental results on the benchmark KDD
                                            '99 intrusion data set show that our proposed system
                                            based on Layered Conditional Random Fields
                                            outperforms other well-known methods such as
                                            decision trees and naive Bayes. The improvement in
                                            attack detection accuracy is very high, particularly for
                                            the U2R attacks (34.8 percent improvement) and the
                                            R2L attacks (34.5 percent improvement). Statistical
                                            tests also demonstrate higher confidence in detection
                                            accuracy for our method. Finally, we show that our
                                            system is robust and is able to handle noisy data
                                            without compromising performance.



  10. Privacy-Preserving Sharing           Privacy-preserving sharing of sensitive information        Security and   .net
      of Sensitive Information             (PPSSI) is motivated by the increasing need for entities   privacy
                                           (organizations or individuals) that don't fully trust
                                           each other to share sensitive information. Many types
                                           of entities need to collect, analyze, and disseminate
                                           data rapidly and accurately, without exposing
                                           sensitive information to unauthorized or untrusted
                                           parties. Although statistical methods have been used
                                           to protect data for decades, they aren't foolproof and
                                           generally involve a trusted third party. Recently, the
                                           security research community has studied—and, in a
                                           few cases, deployed—techniques using secure,
                                           multiparty function evaluation, encrypted keywords,
                                           and private information retrieval. However, few
                                           practical tools and technologies provide data privacy,
                                           especially when entities have certain common goals
                                           and require (or are mandated) some sharing of
                                           sensitive information. To this end, PPSSI technology
                                           aims to enable sharing information, without exposing
                                           more than the minimum necessary to complete a
                                           common task.

  11. PEACE                                Security and privacy issues are of paramount concern
                                            in pushing the success of WMNs (Wireless Mesh
                                            Networks) for their wide deployment and for
                                            supporting service-oriented applications. Despite the
                                            necessity, limited security research has been
                                            conducted toward privacy preservation in WMNs. This
                                            motivates us to develop PEACE, a novel Privacy-
                                            Enhanced yet Accountable security framework tailored
                                            for WMNs.

  12. The Phish-Market Protocol:           One way banks mitigate phishing's effects is to remove                      .net
      Secure Sharing Between               fraudulent websites or suspend abusive domain
      Competitors                          names. The removal process, called a "take-down," is
                                           often subcontracted to specialist firms, who refuse to
                                           share feeds of phishing website URLs with each other.
                                           Consequently, many phishing websites aren't
                                           removed. The take-down companies are reticent to
                                           exchange feeds, fearing that competitors with less
                                           comprehensive lists might free-ride off their efforts.
                                           Here, the authors propose the Phish-Market protocol,
                                           which enables companies to be compensated for
                                           information they provide to their competitors,
                                           encouraging them to share. The protocol is designed
                                           so that the contributing firm is compensated only for
                                           those websites affecting its competitor's clients and
                                           only those previously unknown to the receiving firm.
                                           The receiving firm, on the other hand, is guaranteed
                                            privacy for its client list. The protocol solves a more
                                            general problem of sharing between competitors;
                                            applications involving data brokers in marketing,
                                            finance, energy exploration, and beyond could also
                                            benefit.

  13. Internet Filtering Issues            Various governments have been considering                                   .net
      and Challenges                       mechanisms to filter out illegal or offensive Internet
                                           material. The accompanying debate raises a number of
                                           questions from a technical perspective. This article
                                            explores some of these questions: what filtering
                                            techniques exist, are they effective in filtering out the
                                            targeted content, how easily can they be circumvented,
                                            and where should they be placed in the Internet
                                            architecture?

  14. Can Public-Cloud Security            Because cloud-computing environments' security                              .net
      Meet Its Unique                      vulnerabilities differ from those of traditional data
      Challenges?                          centers, perimeter-security approaches will no longer
                                           work. Security must move from the perimeter to the
                                           virtual machines.

  15. Encrypting Keys Securely             Encryption keys are sometimes encrypted themselves;                         .net
                                           doing that properly requires special care. Although it
                                           might look like an oversight at first, the broadly
                                           accepted formal security definitions for cryptosystems
                                           don't allow encryption of key-dependent messages.
                                           Furthermore, key-management systems frequently
                                           use key encryption or wrapping, which might create
                                           dependencies among keys that lead to problems with
                                           simple access-control checks. Security professionals
                                           should be aware of this risk and take appropriate
                                           measures. Novel cryptosystems offer protection for
                                           key-dependent messages and should be considered for
                                           practical use. Through enhanced access control in key-
                                           management systems, you can prevent security-
                                           interface attacks.
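
   As a concrete illustration of key wrapping, the sketch below encrypts a data
   key under a key-encryption key with the standard JCE AESWrap transformation
   (RFC 3394 key wrap). It shows only the mechanism; by itself it does not
   address the key-dependent-message and access-control pitfalls discussed
   above.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class KeyWrapDemo {
        public static void main(String[] args) throws Exception {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(256);
            SecretKey kek = gen.generateKey();       // key-encryption key
            SecretKey dataKey = gen.generateKey();   // key to protect

            Cipher wrapper = Cipher.getInstance("AESWrap");
            wrapper.init(Cipher.WRAP_MODE, kek);
            byte[] wrapped = wrapper.wrap(dataKey);  // ciphertext of the data key

            wrapper.init(Cipher.UNWRAP_MODE, kek);
            SecretKey restored = (SecretKey) wrapper.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
            System.out.println("round trip ok: "
                    + java.util.Arrays.equals(restored.getEncoded(), dataKey.getEncoded()));
        }
    }
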

  16. Auto-Context and Its                 The notion of using context information for solving          Pattern        .net
      Application to High-Level            high-level vision and medical image segmentation             Analysis and
      Vision Tasks and 3D Brain            problems has been increasingly realized in the field.        Machine
      Image Segmentation                   However, how to learn an effective and efficient             Intelligence
                                           context model, together with an image appearance
                                           model, remains mostly unknown. The current
                                           literature using Markov Random Fields (MRFs) and
                                           Conditional Random Fields (CRFs) often involves
                                           specific algorithm design in which the modeling and
                                           computing stages are studied in isolation. In this
                                           paper, we propose a learning algorithm, auto-context.
                                           Given a set of training images and their corresponding
                                           label maps, we first learn a classifier on local image
                                           patches. The discriminative probability (or
                                           classification confidence) maps created by the learned
                                           classifier are then used as context information, in
                                           addition to the original image patches, to train a new
                                           classifier. The algorithm then iterates until
                                           convergence. Auto-context integrates low-level and
                                           context information by fusing a large number of low-
                                           level appearance features with context and implicit
                                           shape information. The resulting discriminative
                                           algorithm is general and easy to implement. Under
                                           nearly the same parameter settings in training, we
                                           apply the algorithm to three challenging vision
                                           applications: foreground/background segregation,
                                           human body configuration estimation, and scene
                                           region labeling. Moreover, context also plays a very
                                           important role in medical/brain images where the
                                           anatomical structures are mostly constrained to
                                           relatively fixed positions. With only some slight
                                           changes resulting from using 3D instead of 2D
                                           features, the auto-context algorithm applied to brain
                                           MRI image segmentation is shown to outperform
                                           state-of-the-art algorithms specifically designed for
                                           this domain. Furthermore, the scope of the proposed
                                            algorithm goes beyond image analysis: it has the
                                            potential to be used for a wide variety of structured
                                            prediction problems.

  17. CSMA protocol Mitigating             This system demonstrates the management of                java
       Performance Degradation              congestion in sensor networks. Congestion in sensor
       in Congested Sensor                  networks, as in other networks, occurs when the
       Networks                             bandwidth differs between the receiving and sending
                                            points and the channel capacity of the network is not
                                            sufficient to handle the rate at which packets are sent.
                                            In this system, we present a view of how data can be
                                            sent through a congested channel while ensuring the
                                            safe delivery of the packets to the destination. The
                                            system is developed using Java Swing technology with
                                            JDK 1.6. All the nodes are developed as Swing
                                            components; multiple nodes form a sink on the way to
                                            the destination. Packets are sent from source to
                                            destination via the sink. In the sink, a node is made
                                            congested and, using the channel capacity, the path of
                                            the data is calculated. Based on the result of the
                                            calculation, the congestion in the sink is dissolved and
                                            the data flows freely to the destination. This system is
                                            an application to maintain the free flow of data in
                                            congested sensor networks using a Differentiated
                                            Routing Protocol and priority queues, which maintain
                                            priority among data types.
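
   A minimal sketch of the priority-queue side of this system: packets carry a
   data-type priority and a congested node drains the highest-priority traffic
   first. The priority levels and names are illustrative assumptions; the Swing
   node GUI and the differentiated routing logic are omitted.

    import java.util.PriorityQueue;

    public class CongestedSink {
        record Packet(String payload, int priority) {}

        public static void main(String[] args) {
            // Higher priority value = forwarded first.
            PriorityQueue<Packet> queue =
                    new PriorityQueue<>((a, b) -> Integer.compare(b.priority(), a.priority()));
            queue.add(new Packet("temperature reading", 1));
            queue.add(new Packet("intruder alarm", 3));
            queue.add(new Packet("routine heartbeat", 0));

            while (!queue.isEmpty())               // drain toward the destination
                System.out.println("forwarding: " + queue.poll().payload());
        }
    }
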
  18. Feature Analysis and                 The definition of parameters is a crucial step in the       Multimedia    .net
      Evaluation for Automatic             development of a system for identifying emotions in
      Emotion Identification in            speech. Although there is no agreement on which are
      Speech                               the best features for this task, it is generally accepted
                                           that prosody carries most of the emotional
                                           information. Most works in the field use some kind of
                                           prosodic features, often in combination with spectral
                                           and voice quality parametrizations. Nevertheless, no
                                           systematic study has been done comparing these
                                           features. This paper presents the analysis of the
                                           characteristics of features derived from prosody,
                                           spectral envelope, and voice quality as well as their
                                           capability to discriminate emotions. In addition, early
                                           fusion and late fusion techniques for combining
                                           different information sources are evaluated. The
                                           results of this analysis are validated with experimental
                                           automatic emotion identification tests. Results suggest
                                           that spectral envelope features outperform the
                                           prosodic ones. Even when different parametrizations
                                           are combined, the late fusion of long-term spectral
                                           statistics with short-term spectral envelope
                                           parameters provides an accuracy comparable to that
                                           obtained when all parametrizations are combined.
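
   A minimal sketch of the late-fusion step evaluated here: each parametrization
   is classified separately and the per-emotion posteriors are averaged before
   picking the winner. The equal fusion weights and the emotion labels are
   assumptions, not the paper's setup.

    public class LateFusion {
        // Average the posterior vectors from several classifiers and return
        // the index of the winning class.
        static int fuse(double[][] posteriorsPerClassifier) {
            int classes = posteriorsPerClassifier[0].length;
            double[] avg = new double[classes];
            for (double[] p : posteriorsPerClassifier)
                for (int c = 0; c < classes; c++) avg[c] += p[c] / posteriorsPerClassifier.length;
            int best = 0;
            for (int c = 1; c < classes; c++) if (avg[c] > avg[best]) best = c;
            return best;
        }

        public static void main(String[] args) {
            String[] emotions = {"neutral", "anger", "joy"};
            double[] prosody  = {0.5, 0.3, 0.2};   // posteriors from prosodic features
            double[] spectral = {0.2, 0.6, 0.2};   // posteriors from spectral envelope
            System.out.println("fused decision: "
                    + emotions[fuse(new double[][]{prosody, spectral})]);
        }
    }
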

  19. Automatic Detection of Off-          Identifying off-task behaviors in intelligent tutoring      Learning      .net
      Task Behaviors in                    systems is a practical and challenging research topic.      Technologie
      Intelligent Tutoring                 This paper proposes a machine learning model that           s
      Systems with Machine                 can automatically detect students' off-task behaviors.
      Learning Techniques                  The proposed model only utilizes the data available
                                           from the log files that record students' actions within
                                           the system. The model utilizes a set of time features,
                                           performance features, and mouse movement features,
                                           and is compared to 1) a model that only utilizes time
                                           features and 2) a model that uses time and
                                           performance features. Different students have
                                            different types of behaviors; therefore, a personalized
                                            version of the proposed model is constructed and
                                            compared to the corresponding nonpersonalized
                                            version. In order to address the data sparseness problem,
                                           a robust Ridge Regression algorithm is utilized to
                                           estimate model parameters. An extensive set of
                                           experiment results demonstrates the power of using
                                           multiple types of evidence, the personalized model,
                                           and the robust Ridge Regression algorithm.
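
   A minimal sketch of ridge regression, the robust estimator used above: it
   fits w = (X^T X + lambda I)^(-1) X^T y, where the lambda penalty keeps the
   solution stable when training data are sparse. The tiny data set and lambda
   value are illustrative only.

    public class RidgeRegression {
        static double[] fit(double[][] x, double[] y, double lambda) {
            int d = x[0].length;
            double[][] a = new double[d][d + 1];       // augmented [X^T X + lambda I | X^T y]
            for (int i = 0; i < d; i++) {
                for (int j = 0; j < d; j++)
                    for (double[] row : x) a[i][j] += row[i] * row[j];
                a[i][i] += lambda;                     // ridge penalty on the diagonal
                for (int n = 0; n < x.length; n++) a[i][d] += x[n][i] * y[n];
            }
            for (int p = 0; p < d; p++)                // Gaussian elimination
                for (int r = p + 1; r < d; r++) {
                    double f = a[r][p] / a[p][p];
                    for (int c = p; c <= d; c++) a[r][c] -= f * a[p][c];
                }
            double[] w = new double[d];
            for (int i = d - 1; i >= 0; i--) {         // back substitution
                w[i] = a[i][d];
                for (int j = i + 1; j < d; j++) w[i] -= a[i][j] * w[j];
                w[i] /= a[i][i];
            }
            return w;
        }

        public static void main(String[] args) {
            double[][] x = {{1, 0.9}, {1, 2.1}, {1, 2.9}}; // bias term + one feature
            double[] y = {1.0, 2.0, 3.0};
            double[] w = fit(x, y, 0.1);
            System.out.printf("w0=%.3f w1=%.3f%n", w[0], w[1]);
        }
    }
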

  20. Web-Application Security:            Here's a sobering thought for all managers responsible      IT            .net
      From Reactive to Proactive           for Web applications: Without proactive consideration
                                           for an application's security, attackers can bypass
                                           nearly all lower-layer security controls simply by
                                           using the application in a way its developers didn't
                                           envision. Learn how to address vulnerabilities
                                           proactively and early on to avoid the devastating
                                           consequences of a successful attack.

  21. Trust and Reputation                 Trust and reputation management research is highly          INTERNET      .net
      Management                           interdisciplinary, involving researchers from               COMPUTING
                                           networking and communication, data management
                                           and information systems, e-commerce and service
                                           computing, artificial intelligence, and game theory, as
                                           well as the social sciences and evolutionary biology.
                                           Trust and reputation management has played and will
                                           continue to play an important role in Internet and
                                           social computing systems and applications. This
                                           special issue addresses key issues in the field, such as
                                           representation, recommendation aggregation, and
                                           attack-resilient reputation systems.

  22. Multi-body Structure-and-            An efficient and robust framework is proposed for           Image        .net
      Motion Segmentation by               two-view multiple structure-and-motion segmentation         Processing
       Branch-and-Bound Model               of an unknown number of rigid objects. The segmentation
      Selection                            problem has three unknowns, namely the object
                                           memberships, the corresponding fundamental
                                           matrices, and the number of objects. To handle this
                                           otherwise recursive problem, hypotheses for
                                           fundamental matrices are generated through local
                                           sampling. Once the hypotheses are available, a
                                           combinatorial selection problem is formulated to
                                           optimize a model selection cost which takes into
                                           account the hypotheses likelihoods and the model
                                           complexity. An explicit model for outliers is also added
                                           for robust segmentation. The model selection cost is
                                           minimized through the branch-and-bound technique
                                           of combinatorial optimization. The proposed branch-
                                           and-bound approach efficiently searches the solution
                                            space and guarantees optimality over the current set of
                                            hypotheses. The efficiency and the guarantee of
                                            optimality of the method are due to its ability to reject
                                           solutions without explicitly evaluating them. The
                                           proposed approach was validated with synthetic data,
                                           and segmentation results are presented for real
                                           images.

  23. Active Image Reranking               Image search reranking methods usually fail to                           .net
                                           capture the user's intention when the query term is
                                           ambiguous. Therefore, reranking with user
                                           interactions, or active reranking, is highly demanded
                                           to effectively improve the search performance. The
                                           essential problem in active reranking is how to target
                                           the user's intention. To complete this goal, this paper
                                           presents a structural information based sample
                                           selection strategy to reduce the user's labeling efforts.
                                           Furthermore, to localize the user's intention in the
                                           visual feature space, a novel local-global
                                           discriminative dimension reduction algorithm is
                                           proposed. In this algorithm, a submanifold is learned
                                           by transferring the local geometry and the
                                           discriminative information from the labelled images to
                                           the whole (global) image database. Experiments on
                                           both synthetic datasets and a real Web image search
                                           dataset demonstrate the effectiveness of the proposed
                                           active reranking scheme, including both the structural
                                           information based active sample selection strategy
                                           and the local-global discriminative dimension
                                           reduction algorithm.

  24. Content Based Image                  An innovative approach based on an evolutionary                          .net
      Retrieval using PSO                  stochastic algorithm, namely the Particle Swarm
                                           Optimizer (PSO), is proposed in this paper as a
                                           solution to the problem of intelligent retrieval of
                                            images in large databases. The problem is recast as an
                                            optimization problem, in which a suitable cost function is
                                            minimized through a customized PSO. Accordingly,
                                            relevance feedback is used to exploit information from
                                            the user with the aim of both guiding
                                           the particles inside the search space and dynamically
                                           assigning different weights to the features.
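
   A minimal sketch of the kind of particle swarm optimizer the paper
   customizes: particles move through the search space pulled toward their
   personal-best and the global-best positions. The stand-in cost function and
   the PSO constants are assumptions; the relevance-feedback weighting of the
   actual system is not modeled.

    import java.util.Random;

    public class ParticleSwarm {
        static final Random RND = new Random(42);
        static final double W = 0.7, C1 = 1.5, C2 = 1.5;   // assumed PSO constants

        static double cost(double[] p) {                   // stand-in cost function
            double s = 0;
            for (double v : p) s += (v - 3.0) * (v - 3.0); // optimum at (3, 3)
            return s;
        }

        public static void main(String[] args) {
            int particles = 20, dims = 2, iters = 100;
            double[][] pos = new double[particles][dims], vel = new double[particles][dims];
            double[][] pBest = new double[particles][dims];
            double[] pBestCost = new double[particles];
            double[] gBest = new double[dims];
            double gBestCost = Double.MAX_VALUE;

            for (int i = 0; i < particles; i++) {          // random initialization
                for (int d = 0; d < dims; d++) pos[i][d] = RND.nextDouble() * 10;
                pBest[i] = pos[i].clone();
                pBestCost[i] = cost(pos[i]);
                if (pBestCost[i] < gBestCost) { gBestCost = pBestCost[i]; gBest = pos[i].clone(); }
            }
            for (int t = 0; t < iters; t++)
                for (int i = 0; i < particles; i++) {
                    for (int d = 0; d < dims; d++) {       // velocity and position update
                        vel[i][d] = W * vel[i][d]
                                + C1 * RND.nextDouble() * (pBest[i][d] - pos[i][d])
                                + C2 * RND.nextDouble() * (gBest[d] - pos[i][d]);
                        pos[i][d] += vel[i][d];
                    }
                    double c = cost(pos[i]);
                    if (c < pBestCost[i]) { pBestCost[i] = c; pBest[i] = pos[i].clone(); }
                    if (c < gBestCost)    { gBestCost = c;    gBest = pos[i].clone(); }
                }
            System.out.printf("best cost %.6f at (%.3f, %.3f)%n", gBestCost, gBest[0], gBest[1]);
        }
    }
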

  25. Automatic Composition of             This paper presents a novel approach for semantic                       .net
       Semantic Web Services: An           web service composition based on the traditional state
       Enhanced State Space                space search approach. We regard the automatic web
       Search Approach                     service composition problem as an AI problem-solving
                                            task and propose an enhanced state space search
                                            approach for the web service composition domain.
                                            This approach can be used not only for automatic
                                            service composition but also in the general problem-
                                            solving domain. In addition, in order to validate the
                                            feasibility of our approach, a prototype system is
                                            implemented.
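
   A minimal sketch of service composition as state-space search: a state is
   the set of data concepts currently available, a service is applicable when
   its inputs are covered, and applying it adds its outputs. Plain breadth-first
   search finds a shortest composition; the paper's enhanced search adds
   improvements beyond this sketch, and the service names are invented.

    import java.util.*;

    public class ServiceComposition {
        record Service(String name, Set<String> inputs, Set<String> outputs) {}

        static List<String> compose(Set<String> start, Set<String> goal, List<Service> services) {
            Queue<Set<String>> frontier = new ArrayDeque<>(List.of(start));
            Map<Set<String>, List<String>> plan = new HashMap<>();
            plan.put(start, List.of());
            while (!frontier.isEmpty()) {
                Set<String> state = frontier.poll();
                if (state.containsAll(goal)) return plan.get(state);
                for (Service s : services)
                    if (state.containsAll(s.inputs())) {       // service applicable
                        Set<String> next = new TreeSet<>(state);
                        next.addAll(s.outputs());
                        if (!plan.containsKey(next)) {         // unseen state
                            List<String> p = new ArrayList<>(plan.get(state));
                            p.add(s.name());
                            plan.put(next, p);
                            frontier.add(next);
                        }
                    }
            }
            return null;                                       // no composition found
        }

        public static void main(String[] args) {
            List<Service> services = List.of(
                    new Service("GeoCode", Set.of("address"), Set.of("coordinates")),
                    new Service("Weather", Set.of("coordinates"), Set.of("forecast")));
            System.out.println(compose(Set.of("address"), Set.of("forecast"), services));
        }
    }
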

  26. Knowledge-first web                  Although semantic technologies aren't used in current                   .net
       services: an E-Government            software systems on a large scale yet, they offer high
       example                              potential to significantly improve the quality of
                                            electronic services, especially in the E-Government
                                            domain. This paper therefore presents an approach
                                            that not only incorporates semantic technologies but
                                            also allows E-Government services to be created solely based
                                           on semantic models. This multiplies the benefits of the
                                           ontology modeling efforts, minimizes development
                                           and maintenance time and costs, improves user
                                           experience and enforces transparency.

  27. The Applied Research of              This paper firstly introduces the characteristics of the    Cloud       .net
      Cloud Computing Platform             current E-Learning, and then analyzes the concept and
      Architecture In the E-               characteristics of cloud computing, and describes the       computing
       Learning Area                        architecture of the cloud computing platform; by
                                            combining the characteristics of E-Learning and
                                            learning from the current major infrastructure
                                            approaches of cloud computing platforms, this paper
                                            constructs a relatively complete, integrated E-Learning
                                            platform, applies the cloud computing platform to the
                                            study of E-Learning, and focuses on the application in
                                            order to improve the stability, balance, and utilization
                                            of resources; under these conditions, the platform will
                                            meet the demands of current teaching and research
                                            activities and maximize the value of E-Learning.

  28. Cloud Computing System               Cloud computing provides people a way to share large                    .net
       Based on Trusted                     amounts of distributed resources belonging to different
       Computing Platform                   organizations. That is a good way to share many kinds
                                            of distributed resources, but it also makes security
                                            problems more complicated and more important for
                                            users than before. In this paper, we analyze some
                                            security requirements in the cloud computing
                                            environment. Since the security problems lie in both
                                            software and hardware, we provide a method to
                                           build a trusted computing environment for cloud
                                             computing by integrating the trusted computing
                                             platform (TCP) into cloud computing system. We
                                             propose a new prototype system, in which cloud
                                             computing system is combined with Trusted Platform
                                             Support Service (TSS) and TSS is based on Trusted
                                             Platform Module (TPM). In this design, better effect
                                             can be obtained in authentication, role based access
                                             and data protection in cloud computing environment.

  29. IT Auditing to Assure a              In this paper we discuss the evolution of the cloud        .net
       Secure Cloud Computing               computing paradigm and present a framework for
                                            secure cloud computing through IT auditing. Our
                                            approach is to establish a general framework using
                                            checklists by following data flow and its lifecycle. The
                                            checklists are made based on the cloud deployment
                                            models and cloud service models. The contribution of
                                            the paper is to understand the implications of cloud
                                            computing and what is meant by secure cloud
                                            computing via IT auditing, rather than to propose a
                                            new methodology and new technology to secure cloud
                                            computing. Our holistic approach has strategic value
                                            to those who are using or considering using cloud
                                            computing because it addresses concerns such as
                                            security, privacy, and regulations and compliance.

  30. Performance Evaluation of            Advanced computing on cloud computing                       .net
       Cloud Computing Offerings            infrastructures can only become a viable alternative for
                                            the enterprise if these infrastructures can provide
                                            proper levels of nonfunctional properties (NFPs). A
                                            company that focuses on service-oriented
                                            architectures (SOA) needs to know what configuration
                                            would provide the proper levels for individual services
                                            if they are deployed in the cloud. In this paper we
                                            present an approach for performance evaluation of
                                            cloud computing configurations. While cloud
                                            computing providers assure certain service levels, this
                                            is typically done for the platform and not for a
                                            particular service instance. Our approach focuses on
                                            the NFPs of individual services and thereby provides
                                            more relevant and granular information. An
                                            experimental evaluation in Amazon Elastic Compute
                                            Cloud (EC2) verified our approach.

  31. Providing Privacy                      People can only enjoy the full benefits of Cloud           .net
      Preserving in cloud                    computing if we can address the very real privacy and
      computing                              security concerns that come along with storing
                                             sensitive personal information in databases and
                                            software scattered around the Internet. There are
                                            many service providers on the Internet; we can call
                                            each service a cloud, and each cloud service will
                                            exchange data with other clouds. When data is
                                            exchanged between clouds, the problem of privacy
                                            disclosure arises, so the privacy of an individual or
                                            company is inevitably exposed when releasing or
                                            sharing data in a cloud service. Privacy is an important
                                            issue for cloud
                                             computing, both in terms of legal compliance and user
                                           trust, and needs to be considered at every phase of
                                           design. Our paper provides some privacy preserving
                                           technologies used in cloud computing services.

  32. VEBEK: Virtual Energy-               Designing cost-efficient, secure network protocols for      Wireless    .net
      Based Encryption and                 Wireless Sensor Networks (WSNs) is a challenging            Computing
      Keying for Wireless Sensor           problem because sensors are resource-limited
      Networks                             wireless devices. Since the communication cost is the
                                           most dominant factor in a sensor's energy
                                           consumption, we introduce an energy-efficient Virtual
                                           Energy-Based Encryption and Keying (VEBEK) scheme
                                           for WSNs that significantly reduces the number of
                                           transmissions needed for rekeying to avoid stale keys.
                                           In addition to the goal of saving energy, minimal
                                           transmission is imperative for some military
                                           applications of WSNs where an adversary could be
                                           monitoring the wireless spectrum. VEBEK is a secure
                                           communication framework where sensed data is
                                           encoded using a scheme based on a permutation code
                                           generated via the RC4 encryption mechanism. The key
                                           to the RC4 encryption mechanism dynamically
                                           changes as a function of the residual virtual energy of
                                           the sensor. Thus, a one-time dynamic key is employed
                                           for one packet only and different keys are used for the
                                           successive packets of the stream. The intermediate
                                           nodes along the path to the sink are able to verify the
                                           authenticity and integrity of the incoming packets
                                           using a predicted value of the key generated by the
                                           sender's virtual energy, thus requiring no need for
                                           specific rekeying messages. VEBEK is able to efficiently
                                           detect and filter false data injected into the network by
                                           malicious outsiders. The VEBEK framework consists of
                                           two operational modes (VEBEK-I and VEBEK-II), each
                                           of which is optimal for different scenarios. In VEBEK-I,
                                            each node monitors its one-hop neighbors, whereas
                                            VEBEK-II statistically monitors downstream nodes.
                                           We have evaluated VEBEK's feasibility and
                                           performance analytically and through simulations. Our
                                           results show that VEBEK, without incurring
                                           transmission overhead (increasing packet size or
                                           sending control messages for rekeying), is able to
                                           eliminate malicious data from the network in an
                                           energy-efficient manner. We also show that our
                                             framework performs better than other comparable
                                           schemes in the literature with an overall 60-100
                                           percent improvement in energy savings without the
                                           assumption of a reliable medium access control layer.
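
As a rough sketch of VEBEK's keying idea (our illustration, with assumed names and an assumed energy-to-key mapping, not the authors' implementation), the snippet below derives a fresh RC4 key per packet from the sender's residual virtual energy, so a forwarder that tracks the same energy value can predict the key without any rekeying message:

    import hashlib

    def rc4(key, data):
        # Standard RC4: key scheduling, then keystream XORed with the data.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for b in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(b ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    class VirtualEnergyKeying:
        # Hypothetical: sender and verifier track the same virtual energy,
        # so both can derive the same one-time key for each packet.
        def __init__(self, initial_energy, tx_cost):
            self.energy, self.tx_cost = initial_energy, tx_cost

        def next_key(self):
            self.energy -= self.tx_cost   # residual virtual energy after one transmission
            return hashlib.sha256(repr(self.energy).encode()).digest()[:16]

    sender = VirtualEnergyKeying(100.0, 0.5)
    verifier = VirtualEnergyKeying(100.0, 0.5)
    ct = rc4(sender.next_key(), b"sensor reading")
    assert rc4(verifier.next_key(), ct) == b"sensor reading"   # no rekeying message needed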

  33. Secure Data Collection in            Compromised node and denial of service are two key                      .net
      Wireless Sensor Networks             attacks in wireless sensor networks (WSNs). In this
      Using Randomized                     paper, we study data delivery mechanisms that can
      Dispersive Routes                    with high probability circumvent black holes formed
                                           by these attacks. We argue that classic multipath
                                           routing approaches are vulnerable to such attacks,
                                           mainly due to their deterministic nature. So once the
                                           adversary acquires the routing algorithm, it can
                                           compute the same routes known to the source, hence,
                                           making all information sent over these routes
                                           vulnerable to its attacks. In this paper, we develop
                                           mechanisms that generate randomized multipath
                                             routes. Under our designs, the routes taken by the
                                             "shares" of different packets change over time. So even
                                           if the routing algorithm becomes known to the
                                           adversary, the adversary still cannot pinpoint the
                                           routes traversed by each packet. Besides randomness,
                                           the generated routes are also highly dispersive and
                                           energy efficient, making them quite capable of
                                           circumventing black holes. We analytically investigate
                                           the security and energy performance of the proposed
                                           schemes. We also formulate an optimization problem
                                           to minimize the end-to-end energy consumption under
                                           given security constraints. Extensive simulations are
                                           conducted to verify the validity of our mechanisms.
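
The share-dispersal idea can be sketched as follows (a minimal illustration with assumed names; XOR-based (n, n) splitting stands in for the paper's share generation, and a random next hop stands in for its randomized route construction):

    import os, random

    def split_into_shares(packet, n):
        # XOR (n, n) secret sharing: any n-1 shares reveal nothing;
        # all n shares XOR back to the packet.
        shares = [bytearray(os.urandom(len(packet))) for _ in range(n - 1)]
        last = bytearray(packet)
        for s in shares:
            for i in range(len(packet)):
                last[i] ^= s[i]
        return [bytes(s) for s in shares] + [bytes(last)]

    def recover(shares):
        out = bytearray(len(shares[0]))
        for s in shares:
            for i in range(len(out)):
                out[i] ^= s[i]
        return bytes(out)

    def random_next_hop(neighbors):
        # Each share takes an independent random walk toward the sink,
        # so no fixed, predictable route carries the whole packet.
        return random.choice(neighbors)

    shares = split_into_shares(b"reading:42", 4)
    assert recover(shares) == b"reading:42"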

  34. Aging Bloom Filter with              A Bloom filter is a simple but powerful data structure     Data Mining   .net
      Two Active Buffers for               that can test membership in a static set. As Bloom
      Dynamic Sets.                        filters become more popular for network applications,
                                           a membership query for a dynamic set is also required.
                                           Some network applications require high-speed
                                           processing of packets. For this purpose, Bloom filters
                                           should reside in a fast and small memory, SRAM. In
                                           this case, due to the limited memory size, stale data in
                                           the Bloom filter should be deleted to make space for
                                             new data. That is, the Bloom filter needs aging, like
                                           LRU caching. In this paper, we propose a new aging
                                           scheme for Bloom filters. The proposed scheme
                                           utilizes the memory space more efficiently than double
                                           buffering, the current state of the art. We prove
                                           theoretically that the proposed scheme outperforms
                                           double buffering. We also perform experiments on real
                                           Internet traces to verify the effectiveness of the
                                           proposed scheme.
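
For contrast, here is the double-buffering baseline that the proposed scheme is measured against (a simplified sketch; bit arrays are modeled as sets of bit indices, and all names are ours):

    from hashlib import sha256

    class DoubleBufferBloom:
        # Double buffering: insert into the active filter; when it fills,
        # it becomes the old filter and a fresh one takes over, so stale
        # entries age out after roughly one generation.
        def __init__(self, m_bits=1 << 16, k=4, capacity=4096):
            self.m, self.k, self.capacity = m_bits, k, capacity
            self.active, self.old = set(), set()
            self.count = 0

        def _bits(self, item):
            h = sha256(item.encode()).digest()
            return [int.from_bytes(h[4 * i:4 * i + 4], "big") % self.m
                    for i in range(self.k)]

        def add(self, item):
            if self.count >= self.capacity:        # generation change: age out
                self.old, self.active, self.count = self.active, set(), 0
            self.active.update(self._bits(item))
            self.count += 1

        def __contains__(self, item):
            bits = self._bits(item)
            return (all(b in self.active for b in bits)
                    or all(b in self.old for b in bits))

The paper's point is that this layout keeps half the memory tied up in the old filter; its scheme with two active buffers uses the same space more efficiently.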

  35. Bayesian Classifiers                 The Bayesian classifier is a fundamental classification                  .net
      Programmed in SQL                    technique. In this work, we focus on programming
                                           Bayesian classifiers in SQL. We introduce two
                                           classifiers: naive Bayes and a classifier based on class
                                           decomposition using K-means clustering. We consider
                                           two complementary tasks: model computation and
                                           scoring a data set. We study several layouts for tables
                                           and several indexing alternatives. We analyze how to
                                           transform equations into efficient SQL queries and
                                           introduce several query optimizations. We conduct
                                           experiments with real and synthetic data sets to
                                           evaluate classification accuracy, query optimizations,
                                           and scalability. Our Bayesian classifier is more
                                           accurate than naive Bayes and decision trees. Distance
                                           computation is significantly accelerated with
                                           horizontal layout for tables, denormalization, and
                                           pivoting. We also compare naive Bayes
                                           implementations in SQL and C++: SQL is about four
                                           times slower. Our Bayesian classifier in SQL achieves
                                           high classification accuracy, can efficiently analyze
                                           large data sets, and has linear scalability.
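
As a toy illustration of programming a classifier in SQL (the schema and queries are ours, not the paper's table layouts or optimizations), the class priors and, for 0/1 attributes, the conditionals P(x=1 | class) reduce to GROUP BY aggregates:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE train (x1 INTEGER, x2 INTEGER, cls TEXT);
        INSERT INTO train VALUES (1,0,'a'), (1,1,'a'), (0,1,'b'), (0,0,'b');
    """)
    # One scan computes the whole naive Bayes model for binary attributes.
    model = con.execute("""
        SELECT cls,
               COUNT(*) * 1.0 / (SELECT COUNT(*) FROM train) AS prior,
               AVG(x1) AS p_x1,
               AVG(x2) AS p_x2
        FROM train GROUP BY cls
    """).fetchall()

    def score(x1, x2):
        # Pick the class maximizing prior * product of attribute likelihoods.
        lik = lambda p, x: p if x else 1.0 - p
        return max(model, key=lambda r: r[1] * lik(r[2], x1) * lik(r[3], x2))[0]

    print(score(1, 0))   # -> 'a'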
  36. Using a web-based tool to            Top-down process improvement approaches provide a           Java
      define and implement                 high-level model of what the process of a software
      software process                     development organisation should be. Such models are
      improvement initiatives in           based on the consensus of a designated working group
      a small industrial setting           on how software should be developed or maintained.
                                           They are very useful in that they provide general
                                           guidelines on where to start improving, and in which
                                           order, to people who do not know how to do it.
                                           However, the majority of models have only worked in
                                           scenarios within large companies. The authors aim to
                                           help small software development organisations adopt
                                           an iterative approach by providing a process
                                           improvement web-based tool. This study presents
                                           research into a proposal which states that a small
                                           organisation may use this tool to assess and improve
                                           their software process, identifying and implementing a
                                           set of agile project management practices that can be
                                           strengthened using the CMMI-DEV 1.2 model as
                                           reference.


  37. An Online Monitoring                 Web service technology aims to enable the                   Java
      Approach for Web Service             interoperation of heterogeneous systems and the
      Requirements                         reuse of distributed functions on an unprecedented
                                           scale, and it has achieved significant success. There
                                           are still, however, challenges to realize its full
                                           potential. One of these challenges is to ensure that
                                           the behaviour of Web services is consistent with their
                                           requirements.
                                           Monitoring events that are relevant to
                                           Web service requirements is, thus, an important
                                           technique. This paper introduces an online monitoring
                                           approach for Web service requirements. It includes a
                                           pattern-based specification of service constraints that
                                           correspond to service requirements; a monitoring
                                           model that covers five kinds of system events relevant
                                           to client request, service response, application,
                                           resource, and management; and a monitoring
                                           framework in which different probes and agents
                                           collect events and data that are sensitive to
                                           requirements. The framework analyzes the collected
                                           information against the prespecified constraints, so as
                                           to evaluate the behaviour and use of Web
                                           services. The prototype implementation and
                                           experiments with a case study show that our
                                           approach is effective and flexible, and the monitoring
                                           cost is affordable.
S.NO            TITLE -2011                                         ABSTRACT                               DOMAIN       PLATFORM

   1.        Exploiting Dynamic           In recent years, ad hoc parallel data processing has         Parallel
             Resource Allocation          emerged as one of the killer applications for                Distribution
             for Efficient Parallel       Infrastructure-as-a-Service (IaaS) clouds. Major Cloud
             Data Processing in           computing companies have started to integrate
             the Cloud                    frameworks for parallel data processing in their product
                                          portfolio, making it easy for customers to access these
                                          services and to deploy their programs. However, the
                                          processing frameworks which are currently used have
                                          been designed for static, homogeneous cluster setups
                                          and disregard the particular nature of a cloud.
                                          Consequently, the allocated compute resources may be
                                          inadequate for large parts of the submitted job and
                                          unnecessarily increase processing time and cost. In this
                                          paper, we discuss the opportunities and challenges for
                                          efficient parallel data processing in clouds and present
                                          our research project Nephele. Nephele is the first data
                                          processing framework to explicitly exploit the dynamic
                                          resource allocation offered by today's IaaS clouds for
                                          both task scheduling and execution. Particular tasks of a
                                          processing job can be assigned to different types of
                                          virtual machines which are automatically instantiated
                                          and terminated during the job execution. Based on this
                                          new framework, we perform extended evaluations of
                                          MapReduce-inspired processing jobs on an IaaS cloud
                                          system and compare the results to the popular data
                                          processing framework Hadoop.
   2.        Data integrity proofs        Cloud computing has been envisioned as the de-facto            Communicat
             in cloud storage             solution to the rising storage costs of IT Enterprises.        ion System &
                                          With the high costs of data storage devices as well as the     network
                                          rapid rate at which data is being generated, it proves
                                          costly for enterprises or individual users to frequently
                                          update their hardware. Apart from reducing storage
                                          costs, outsourcing data to the cloud also helps reduce
                                          maintenance. Cloud storage moves the user's data to
                                          large, remotely located data centers over which the
                                          user has no control. However, this unique
                                          feature of the cloud poses many new security challenges
                                          which need to be clearly understood and resolved. One of
                                          the important concerns that need to be addressed is to
                                          assure the customer of the integrity, i.e., the correctness, of his
                                          data in the cloud. As the data is physically not accessible
                                          to the user the cloud should provide a way for the user to
                                          check if the integrity of his data is maintained or is
                                          compromised. In this paper we provide a scheme which
                                          gives a proof of data integrity in the cloud which the
                                          customer can employ to check the correctness of his data
                                          in the cloud. This proof can be agreed upon by both the
                                          cloud and the customer and can be incorporated in the
                                          Service level agreement (SLA). This scheme ensures that
                                          the storage at the client side is minimal, which is
                                          beneficial for thin clients.
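
One simple instantiation of such a proof (our sketch of a sentinel/HMAC-style check; the paper's actual scheme and its SLA integration differ) keeps only a few MACs on the client, then challenges the cloud for random blocks:

    import hmac, hashlib, os, random

    BLOCK = 4096

    def client_precompute(data, key, n_challenges):
        # Before upload: store tiny state (block index -> MAC) for random blocks.
        n_blocks = max(1, len(data) // BLOCK)
        picks = random.sample(range(n_blocks), min(n_challenges, n_blocks))
        return {i: hmac.new(key, data[i * BLOCK:(i + 1) * BLOCK],
                            hashlib.sha256).digest() for i in picks}

    def cloud_respond(stored, i):
        # The cloud must return the block itself; without the intact data
        # it cannot produce a block that matches the client's MAC.
        return stored[i * BLOCK:(i + 1) * BLOCK]

    def client_verify(tags, i, block, key):
        return hmac.compare_digest(
            hmac.new(key, block, hashlib.sha256).digest(), tags[i])

    key, data = os.urandom(32), os.urandom(10 * BLOCK)
    tags = client_precompute(data, key, 3)
    i = next(iter(tags))
    assert client_verify(tags, i, cloud_respond(data, i), key)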

   3.        Efficient Computing          In many applications, including location based services,       Knowledge
             of Range Aggregates          queries are not precise. In this paper, we study the           & data
             against Uncertain            problem of efficiently computing range aggregates in a         engineering
             Location Based               multi-dimensional space when the query location is
                                          uncertain. That is, for a set of data points P, an uncertain
             Collections                  location based query Q whose location is described by
                                          a probability density function, we want to calculate the
                                          aggregate information (e.g., count, average, and sum) of
                                         the data points within distance gamma to Q with
                                         probability at least theta. We propose novel, efficient
                                         techniques to solve the problem based on a filtering-and-
                                         verification framework. In particular, two novel filtering
                                         techniques are proposed to effectively and efficiently
                                         remove data points from verification. Finally, we show
                                         that our techniques can be immediately extended to
                                         solve the range query problem. Comprehensive
                                         experiments conducted on both real and synthetic data
                                         demonstrate the efficiency and scalability of our
                                         techniques.
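
The verification step of such a framework can be sketched by Monte Carlo sampling of the query's location pdf (our illustration only; the paper's filtering techniques exist precisely to spare most points this cost):

    import math, random

    def qualifying_points(points, sample_q, gamma, theta, trials=2000):
        # Keep point p iff Pr[dist(Q, p) <= gamma] >= theta under Q's pdf;
        # count/sum/average are then taken over the kept points.
        qs = [sample_q() for _ in range(trials)]
        kept = []
        for (px, py) in points:
            hits = sum(math.hypot(px - qx, py - qy) <= gamma for (qx, qy) in qs)
            if hits / trials >= theta:
                kept.append((px, py))
        return kept

    # Example: the query location is Gaussian around the origin.
    pts = [(0.1, 0.2), (3.0, 3.0)]
    near = qualifying_points(
        pts, lambda: (random.gauss(0, 0.5), random.gauss(0, 0.5)),
        gamma=1.0, theta=0.9)
    print(len(near))   # the nearby point typically qualifies; the far one does not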

  4.        Exploring                    Natural phenomena show that many creatures form                 Knowledge
            Application-Level            large social groups and move in regular patterns.               & Data
            Semantics for Data           However, previous works focus on finding the movement           Engineering
            Compression                  patterns of each single object or all objects. In this paper,
                                         we first propose an efficient distributed mining
                                         algorithm to jointly identify a group of moving objects
                                         and discover their movement patterns in wireless sensor
                                         networks. Afterward, we propose a compression
                                         algorithm, called 2P2D, which exploits the obtained
                                         group movement patterns to reduce the amount of
                                         delivered data. The compression algorithm includes a
                                          sequence merge phase and an entropy reduction phase. In the
                                         sequence merge phase, we propose a Merge algorithm to
                                         merge and compress the location data of a group of
                                         moving objects. In the entropy reduction phase, we
                                         formulate a Hit Item Replacement (HIR) problem and
                                         propose a Replace algorithm that obtains the optimal
                                         solution. Moreover, we devise three replacement rules
                                         and derive the maximum compression ratio. The
                                         experimental results show that the proposed
                                         compression algorithm leverages the group movement
                                         patterns to reduce the amount of delivered data
                                         effectively and efficiently.
  5.        Improving Aggregate          Recommender systems are becoming increasingly                   Knowledge
            Recommendation               important to individual users and businesses for                & Data
            Diversity Using              providing personalized recommendations. However,                Engineering
            Ranking-Based                while the majority of algorithms proposed in
            Techniques                   recommender systems literature have focused on
                                         improving recommendation accuracy (as exemplified by
                                         the recent Netflix Prize competition), other important
                                         aspects of recommendation quality, such as the diversity
                                         of recommendations, have often been overlooked. In this
                                         paper, we introduce and explore a number of item
                                         ranking techniques that can generate recommendations
                                         that have substantially higher aggregate diversity across
                                         all users while maintaining comparable levels of
                                         recommendation accuracy. Comprehensive empirical
                                         evaluation consistently shows the diversity gains of the
                                         proposed techniques using several real-world rating
                                         datasets and different rating prediction algorithms.
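
One of the simplest ranking techniques in this family can be sketched as popularity-based re-ranking (our schematic rendering with hypothetical data; the paper studies several such item ranking functions): among items whose predicted rating clears a threshold, recommend the least popular first, trading a little accuracy for much higher aggregate diversity.

    def diverse_top_n(predicted, popularity, n, rating_floor):
        # predicted: (item, predicted_rating) pairs; popularity: item -> #ratings.
        candidates = [(i, r) for (i, r) in predicted if r >= rating_floor]
        # Least popular first; ties broken by higher predicted rating.
        candidates.sort(key=lambda ir: (popularity.get(ir[0], 0), -ir[1]))
        return [i for (i, _) in candidates[:n]]

    preds = [("A", 4.8), ("B", 4.6), ("C", 4.5), ("D", 3.0)]
    pop = {"A": 90000, "B": 120, "C": 45}
    print(diverse_top_n(preds, pop, n=2, rating_floor=4.0))   # ['C', 'B']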
  6.        Monitoring Service           Business processes are increasingly distributed and             Service
            Systems from a               open, making them prone to failure. Monitoring is,              Computing
            Language-Action              therefore, an important concern not only for the
                                         processes themselves but also for the services that
            Perspective                  comprise these processes. We present a framework for
                                         multilevel monitoring of these service systems. It
                                         formalizes interaction protocols, policies, and
                                         commitments that account for standard and extended
                                         effects following the language-action perspective, and
                                         allows specification of goals and monitors at varied
                                         abstraction levels. We demonstrate how the framework
                                         can be implemented and evaluate it with multiple
                                         scenarios that include specifying and monitoring open-
                                         service policy commitments.

   7.        One Size Does Not Fit        With the emergence of deep Web databases,                  Knowledge
             All: Towards User-           searching in domains such as vehicles, real estate, etc.  & data
            and Query-                   has become a routine task. One of the problems in this       engineering
            Dependent Ranking            context is ranking the results of a user query. Earlier
            For Web Databases            approaches for addressing this problem have used
                                         frequencies of database values, query logs, and user
                                         profiles. A common thread in most of these approaches is
                                         that ranking is done in a user- and/or query-
                                         independent manner. This paper proposes a novel
                                         query- and user-dependent approach for ranking the
                                         results of Web database queries. We present a ranking
                                         model, based on two complementary notions of user and
                                         query similarity, to derive a ranking function for a given
                                         user query. This function is acquired from a sparse
                                          workload comprising several such ranking functions
                                         derived for various user-query pairs. The proposed
                                         model is based on the intuition that similar users display
                                         comparable ranking preferences over the results of
                                         similar queries. We define these similarities formally in
                                         alternative ways and discuss their effectiveness both
                                         analytically and experimentally over two distinct Web
                                         databases.
  8.        Optimal Service              Cloud applications that offer data management services       Knowledge
            Pricing for a Cloud          are emerging. Such clouds support caching of data in         & data
            Cache                        order to provide quality query services. The users can       engineering
                                         query the cloud data, paying the price for the
                                         infrastructure they use. Cloud management necessitates
                                         an economy that manages the service of multiple users in
                                          an efficient but also resource-economic way that allows
                                         for cloud profit. Naturally, the maximization of cloud
                                         profit given some guarantees for user satisfaction
                                         presumes an appropriate price-demand model that
                                         enables optimal pricing of query services. The model
                                         should be plausible in that it reflects the correlation of
                                         cache structures involved in the queries. Optimal pricing
                                         is achieved based on a dynamic pricing scheme that
                                         adapts to time changes. This paper proposes a novel
                                         price-demand model designed for a cloud cache and a
                                         dynamic pricing scheme for queries executed in the
                                         cloud cache. The pricing solution employs a novel
                                         method that estimates the correlations of the cache
                                          services in a time-efficient manner. The experimental
                                         study shows the efficiency of the solution.
  9.        A Personalized               As a model for knowledge description and formalization,      Knowledge
            Ontology Model for           ontologies are widely used to represent user profiles in     & data
            Web Information              personalized web information gathering. However, when        engineering
            Gathering                    representing user profiles, many models have utilized
                                         only knowledge from either a global knowledge base or a
                                          user local information. In this paper, a personalized
                                          ontology model is proposed for knowledge
                                          representation and reasoning over user profiles. This
                                          model learns ontological user profiles from both a world
                                          knowledge base and user local instance repositories. The
                                          ontology model is evaluated by comparing it against
                                          benchmark models in web information gathering. The
                                          results show that this ontology model is successful.
  10.        A Branch-and-Bound           In branch-and-bound (B&B) schemes for solving a                Computers
             Algorithm for Solving        minimization problem, a better lower bound could prune
             the Multiprocessor           many meaningless branches which do not lead to an
             Scheduling Problem           optimum solution. In this paper, we propose several
             with Improved                techniques to refine the lower bound on the makespan in
             Lower Bounding               the multiprocessor scheduling problem (MSP). The key
             Techniques                   idea of our proposed method is to combine an efficient
                                           quadratic-time algorithm for calculating Fernández's
                                           bound, which is known as the best lower bounding
                                           technique proposed in the literature, with two
                                          improvements based on the notions of binary search and
                                          recursion. The proposed method was implemented as a
                                          part of a B&B algorithm for solving MSP, and was
                                          evaluated experimentally. The result of experiments
                                          indicates that the proposed method certainly improves
                                          the performance of the underlying B&B scheme. In
                                          particular, we found that it improves solutions generated
                                          by conventional heuristic schemes for more than 20
                                          percent of randomly generated instances, and for more
                                          than 80 percent of instances, it could provide a
                                          certification of optimality of the resulting solutions, even
                                          when the execution time of the B&B scheme is limited by
                                          one minute.
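
The binary-search refinement can be rendered generically (our sketch; relaxed_feasible stands in for any efficiently checkable relaxation, such as a bound test): the smallest makespan that the relaxation admits is a valid, often tighter, lower bound.

    def refine_lower_bound(lo, hi, relaxed_feasible):
        # Every T below the returned value is provably infeasible
        # for the relaxation, hence infeasible for the real problem.
        while lo < hi:
            mid = (lo + hi) // 2
            if relaxed_feasible(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    # Toy relaxation: total work 37 on 4 machines needs 4*T >= 37.
    print(refine_lower_bound(1, 37, lambda T: 4 * T >= 37))   # -> 10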
  11.        Design and                   Peer-to-peer (P2P) systems generate a major fraction of        Computers
             Evaluation of a Proxy        the current Internet traffic, and they significantly
             Cache for Peer-to-           increase the load on ISP networks and the cost of
             Peer Traffic                 running and connecting customer networks (e.g.,
                                          universities and companies) to the Internet. To mitigate
                                          these negative impacts, many previous works in the
                                          literature have proposed caching of P2P traffic, but very
                                          few (if any) have considered designing a caching system
                                          to actually do it. This paper demonstrates that caching
                                          P2P traffic is more complex than caching other Internet
                                          traffic, and it needs several new algorithms and storage
                                          systems. Then, the paper presents the design and
                                          evaluation of a complete, running, proxy cache for P2P
                                          traffic, called pCache. pCache transparently intercepts
                                          and serves traffic from different P2P systems. A new
                                          storage system is proposed and implemented in pCache.
                                          This storage system is optimized for storing P2P traffic,
                                          and it is shown to outperform other storage systems. In
                                          addition, a new algorithm to infer the information
                                          required to store and serve P2P traffic by the cache is
                                          proposed. Furthermore, extensive experiments to
                                          evaluate all aspects of pCache using actual
                                          implementation and real P2P traffic are presented.
  12.        Robust Feature               Feature selection often aims to select a compact feature       Computation
             Selection for                subset to build a pattern classifier with reduced              al Biology
             Microarray Data              complexity, so as to achieve improved classification           and
             Based on                     performance. From the perspective of pattern analysis,         Bioinformati
                                          producing stable or robust solution is also a desired          cs
             Multicriterion Fusion        property of a feature selection algorithm. However, the
                                          issue of robustness is often overlooked in feature
                                          selection. In this study, we analyze the robustness issue
                                          existing in feature selection for high-dimensional and
                                          small-sized gene-expression data, and propose to
                                           improve the robustness of feature selection algorithms by
                                          using multiple feature selection evaluation criteria.
                                          Based on this idea, a multicriterion fusion-based
                                          recursive feature elimination (MCF-RFE) algorithm is
                                          developed with the goal of improving both classification
                                          performance and stability of feature selection results.
                                          Experimental studies on five gene-expression data sets
                                          show that the MCF-RFE algorithm outperforms the
                                          commonly used benchmark feature selection algorithm
                                          SVM-RFE.
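
The fusion idea can be sketched as rank aggregation inside a recursive feature elimination loop (our simplified rendering; the two criteria below are hypothetical stand-ins for the paper's evaluation criteria):

    import numpy as np

    def mcf_rfe(X, y, criteria, n_keep):
        # Each criterion scores the surviving features; ranks are summed
        # and the feature with the worst fused rank is eliminated.
        active = list(range(X.shape[1]))
        while len(active) > n_keep:
            fused = np.zeros(len(active))
            for crit in criteria:
                scores = np.asarray(crit(X[:, active], y))
                fused += np.argsort(np.argsort(scores))   # rank 0 = least useful
            del active[int(np.argmin(fused))]
        return active

    corr = lambda X, y: [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    var = lambda X, y: X.var(axis=0)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 6))
    y = X[:, 0] + 0.1 * rng.normal(size=50)
    print(mcf_rfe(X, y, [corr, var], n_keep=2))   # feature 0 should survive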
  13.        Image-Based Surface          Emerging technologies for structure matching based on          Computation
             Matching Algorithm           surface descriptions have demonstrated their                   al Biology
             Oriented to                  effectiveness in many research fields. In particular, they     and
             Structural Biology           can be successfully applied to in silico studies of            Bioinformati
                                          structural biology. Protein activities, in fact, are related   cs
                                          to the external characteristics of these macromolecules
                                          and the ability to match surfaces can be important to
                                          infer information about their possible functions and
                                          interactions. In this work, we present a surface-matching
                                          algorithm, based on encoding the outer morphology of
                                          proteins in images of local description, which allows us
                                          to establish point-to-point correlations among
                                          macromolecular surfaces using image-processing
                                           functions. Unlike methods that rely on biological
                                           analysis of atomic structures or on expensive
                                           computational approaches based on energetic studies,
                                           this algorithm can successfully be used for
                                          macromolecular recognition by employing local surface
                                          features. Results demonstrate that the proposed
                                          algorithm can be employed both to identify surface
                                          similarities in context of macromolecular functional
                                          analysis and to screen possible protein interactions to
                                           predict pairing capability.
   14.        Iris matching using          Iris recognition is one of the most widely used biometric     Computer
              multi-dimensional            techniques for personal identification. This identification   Vision, IET
              artificial neural            is achieved in this work by using the concept that iris
              network                      patterns are statistically unique and suitable for
                                          biometric measurements. In this study, a novel method
                                          of recognition of these patterns of an iris is considered
                                          by using a multidimensional artificial neural network.
                                          The proposed technique has the distinct advantage of
                                          using the entire resized iris as an input at once. It is
                                           capable of excellent pattern recognition, as the iris
                                           texture is unique to every person. The system is
                                           trained and tested using two
                                          publicly available databases (CASIA and UBIRIS). The
                                          proposed approach shows significant promise and
                                          potential for improvements, compared with the other
                                          conventional matching techniques with regard to time
                                          and efficiency of results.
  15.        Real-time tracking           Many vision problems require fast and accurate tracking        Computer
             using A* heuristic           of objects in dynamic scenes. In this study, we propose        Vision, IET
             search and template          an A* search algorithm through the space of
                                          transformations for computing fast target 2D motion.
             updating                     Two features are combined in order to compute efficient
                                           motion: (i) the Kullback-Leibler measure as a heuristic to
                                          guide the search process and (ii) incorporation of target
                                          dynamics into the search process for computing the most
                                           promising search alternatives. The match-quality value
                                           computed by the A* search algorithm
                                          together with the more common views of the target
                                          object are used for verifying template updates. A
                                           template will be updated only when the target object
                                           has evolved into a transformed shape dissimilar
                                           to the current one. The study includes experimental
                                          evaluations with video streams demonstrating the
                                          effectiveness and efficiency for real-time vision based
                                          tasks with rigid and deformable objects.
  16.        Integral image               The large amount of image data from the captured three-       Computer
              compression based            dimensional integral image needs to be represented            Vision, IET
             on optical                   with adequate resolution. It is therefore necessary to
             characteristic               develop compression algorithms that take advantage of
                                          the characteristics of the recorded integral image. In this
                                          study, the authors propose a new compression method
                                          that is adapted to integral imaging. According to the
                                          optical characteristics of integral imaging, most of the
                                          information of each elemental image is overlapped with
                                           that of its adjacent elemental images. Thus, the method
                                           achieves compression by taking one sample from the
                                           elemental image sequence for every m elemental images.
                                           Experimental results presented to illustrate the
                                           proposed technique show that it can improve the
                                           compression ratio of integral imaging.
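
In its simplest reading, the method is a subsampling of the elemental image sequence (our sketch; reconstruction by reusing the nearest retained sample is an assumption for illustration):

    def compress(elemental_images, m):
        # Keep every m-th elemental image; optical overlap between adjacent
        # elemental images makes the dropped ones largely redundant.
        return elemental_images[::m]

    def reconstruct(samples, m, total):
        # Naive reconstruction: reuse the nearest retained sample per slot.
        return [samples[min(i // m, len(samples) - 1)] for i in range(total)]

    seq = [f"EI{i}" for i in range(12)]
    kept = compress(seq, 3)            # 4 of 12 images kept: ratio ~ m
    print(reconstruct(kept, 3, 12))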
  17.        A Variational Model          In this paper, we propose a variational formulation for       Image
             for Histogram                histogram transfer of two or more color images. We            Processing
              Transfer of Color            study an energy functional composed of three terms:
             Images                       one tends to approach the cumulative histograms of the
                                          transformed images, the other two tend to maintain the
                                          colors and geometry of the original images. By
                                          minimizing this energy, we obtain an algorithm that
                                          balances equalization and the conservation of features of
                                           the original images. As a result, the images evolve while
                                          approaching an intermediate histogram between them.
                                          This intermediate histogram does not need to be
                                          specified in advance, but it is a natural result of the
                                          model. Finally, we provide experiments showing that the
                                          proposed method compares well with the state of the art.
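
A direct, non-variational stand-in for the model's end state (our sketch, assuming two equal-size grayscale images and ignoring the color and geometry preservation terms) is sorted-value averaging, which sends both images exactly to the intermediate histogram:

    import numpy as np

    def midpoint_histogram_transfer(a, b):
        # The i-th smallest pixel of each image is mapped to the average of
        # the two i-th smallest values, so both outputs share one histogram.
        fa, fb = a.ravel().astype(float), b.ravel().astype(float)
        oa, ob = np.argsort(fa), np.argsort(fb)
        mid = (np.sort(fa) + np.sort(fb)) / 2.0
        ra, rb = np.empty_like(fa), np.empty_like(fb)
        ra[oa] = mid
        rb[ob] = mid
        return ra.reshape(a.shape), rb.reshape(b.shape)

    a = np.array([[0, 10], [20, 30]])
    b = np.array([[100, 110], [120, 130]])
    ta, tb = midpoint_histogram_transfer(a, b)
    print(ta)   # [[50. 60.] [70. 80.]] -- the shared intermediate histogram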
  18.        Nonlocal Mumford-            We propose here a class of restoration algorithms for         Image
             Shah Regularizers            color images, based upon the Mumford-Shah (MS) model          Processing
             for Color Image              and nonlocal image information. The Ambrosio-
             Restoration                  Tortorelli and Shah elliptic approximations are defined
                                           to work in a small local neighborhood, which is
                                          sufficient to denoise smooth regions with sharp
                                          boundaries. However, texture is nonlocal in nature and
                                          requires semilocal/non-local information for efficient
                                           image denoising and restoration. Inspired by recent
                                          works (nonlocal means of Buades, Coll, Morel, and
                                          nonlocal total variation of Gilboa, Osher), we extend the
                                           local Ambrosio-Tortorelli and Shah approximations of
                                           the MS functional to novel nonlocal formulations, for
                                          better restoration of fine structures and texture. We
                                          present several applications of the proposed nonlocal MS
                                          regularizers in image processing such as color image
                                          denoising, color image deblurring in the presence of
                                          Gaussian or impulse noise, color image inpainting, color
                                          image super-resolution, and color filter array
                                          demosaicing. In all the applications, the proposed
                                          nonlocal regularizers produce superior results over the
                                          local ones, especially in image inpainting with large
                                          missing regions. We also prove several characterizations
                                           of minimizers based upon dual-norm formulations.
  19.        A Majorize–Minimize          This paper proposes accelerated subspace optimization        Image
             Strategy for                 methods in the context of image restoration. Subspace        Processing
             Subspace                     optimization methods belong to the class of iterative
             Optimization Applied         descent algorithms for unconstrained optimization. At
             to Image Restoration         each iteration of such methods, a stepsize vector
                                          allowing the best combination of several search
                                          directions is computed through a multidimensional
                                          search. It is usually obtained by an inner iterative
                                          second-order method ruled by a stopping criterion that
                                          guarantees the convergence of the outer algorithm. As an
                                          alternative, we propose an original multidimensional
                                          search strategy based on the majorize-minimize
                                          principle. It leads to a closed-form stepsize formula that
                                          ensures the convergence of the subspace algorithm
                                          whatever the number of inner iterations. The practical
                                          efficiency of the proposed scheme is illustrated in the
                                          context of edge-preserving image restoration.
  20.        A Variational Model          We propose a variant of the Mumford-Shah model for the       Image
             for Segmentation of          segmentation of a pair of overlapping objects with           Processing
             Overlapping Objects          additive intensity value. Unlike standard segmentation
             With Additive                models, it does not only determine distinct objects in the
             Intensity Value.             image, but also recover the possibly multiple
                                          membership of the pixels. To accomplish this, some a
                                          priori knowledge about the smoothness of the object
                                          boundary is integrated into the model. Additivity is
                                          imposed through a soft constraint which allows the user
                                          to control the degree of additivity and is more robust
                                           than a hard constraint. We also show analytically that
                                          the additivity parameter can be chosen to achieve some
                                          stability conditions. To solve the optimization problem
                                          involving geometric quantities efficiently, we apply a
                                          multiphase level set method. Segmentation results on
                                          synthetic and real images validate the good performance
                                          of our model, and demonstrate the model's applicability
                                          to images with multiple channels and multiple objects.
  21.        Image Segmentation           This paper presents a multiphase fuzzy region                Image
             Using Fuzzy Region           competition model that takes into account spatial and        Processing
             Competition and              frequency information for image segmentation. In the
             Spatial/Frequency            proposed energy functional, each region is represented
             Information                  by a fuzzy membership function and a data fidelity term
                                          that measures the conformity of spatial and frequency
                                          data within each region to (generalized) Gaussian
                                          densities whose parameters are determined jointly with
                                          the segmentation process. Compared with the classical
                                          region competition model, our approach gives soft
                                          segmentation results via the fuzzy membership
                                          functions, and moreover, the use of frequency data
                                          provides additional region information that can improve
                                          the overall segmentation result. To efficiently solve the
                                          minimization of the energy functional, we adopt an
                                          alternate minimization procedure and make use of
                                          Chambolle's fast duality projection algorithm. We apply
                                          the proposed method to synthetic and natural textures
                                          as well as real-world natural images. Experimental
                                          results show that our proposed method has very
                                          promising segmentation performance compared with the
                                          current state-of-the-art approaches.
   22.        H.264 video                  More people are studying digital video stream                 Image
              watermarking with            transfer over networks. However, frequent Internet            Processing,
             secret image sharing         use increases the requirement for copyright protection        IET
                                          and security. As a consequence, to prevent video streams
                                          that belong to rightful owners from being intentionally
                                          or unknowingly used by others, information protection is
                                          indispensable. The authors propose a novel method for
                                          video watermarking that is specifically designed for
                                           H.264 video. A low-energy signal is relatively robust
                                           against low-pass filtering attacks. Conversely, a
                                           high-energy signal in the host is relatively robust
                                           against high-frequency noise attacks. In view of these
                                           facts, the proposed embedding algorithm distinguishes
                                           high-energy and low-energy blocks. The blocks in the
                                           host image frame are
                                          divided into two different groups by estimating the block
                                          energy. The existing singular value decomposition
                                          methods were employed to calculate the watermark
                                          information. In order to enhance the security, the
                                          proposed system also employs torus automorphisms to
                                          encrypt the watermark. To achieve better robustness,
                                          the encrypted results use secret image sharing
                                          technology embedded into different I-frames in the video
                                          stream.
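
The torus-automorphism encryption step can be sketched with a generalized Arnold cat map (our rendering; the parameter k and the iteration count act as the secret key, and the watermark is assumed square):

    import numpy as np

    def torus_automorphism(w, k, iterations):
        # (x, y) -> (x + y, k*x + (k+1)*y) mod N: a determinant-1 (hence
        # invertible and periodic) map that scrambles the N x N watermark.
        n = w.shape[0]
        out = w.copy()
        for _ in range(iterations):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n, (k * x + (k + 1) * y) % n] = out[x, y]
            out = nxt
        return out

    wm = np.arange(16).reshape(4, 4)
    scrambled = torus_automorphism(wm, k=1, iterations=3)
    # Because the map is periodic, iterating far enough (or applying the
    # inverse map) restores the original watermark for extraction.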
  23.        Rotation, scaling, and       Traditional watermarking schemes are sensitive to             Image
             translation resilient        geometric distortions, in which synchronisation for           Processing
             watermarking for             recovering embedded information is a challenging task
             images                       because of the disorder caused by rotation, scaling or
                                          translation (RST). The existing RST-resistant
                                          watermarking methods still have limitations with
                                          respect to robustness, capacity or fidelity. In this study,
                                          the authors address several major problems in RST-
                                          invariant watermarking. The first point is how to take
                                          advantage of the high RST resilience of scale-invariant
                                          feature transform (SIFT) features, which show good
                                           performance in terms of RST-resistant pattern
                                          recognition. Since many keypoint-based watermarking
                                          methods do not discuss cropping attacks, the second
                                          issue discussed in this study is how to resist cropping
                                          using a human visual system (HVS), which also helps us
                                           to reduce computational complexity. The third issue is
                                          the investigation of an HVS-based watermarking strategy
                                          for extracting only feature points in the human attentive
                                          area. Lastly, a variable-length watermark
                                          synchronisation algorithm using dynamic programming
                                          is proposed. Experimental results show that the
                                          proposed algorithms are practical and show superior
                                          performance in comparison with many existing works in
                                          terms of watermark capacity, watermark transparency,
                                          and the resistance to RST attacks.
  24.        Improvements on              For classification problems, the generalized eigenvalue       Neural
             Twin Support Vector          proximal support vector machine (GEPSVM) and twin             Networks
             Machines                     support vector machine (TWSVM) are regarded as
                                          milestones in the development of the powerful SVMs, as
                                          they use the nonparallel hyperplane classifiers. In this
                                          brief, we propose an improved version, named twin
                                          bounded support vector machines (TBSVM), based on
                                          TWSVM. The significant advantage of our TBSVM over
                                          TWSVM is that the structural risk minimization principle
                                          is implemented by introducing the regularization term.
                                           This embodies the essence of statistical learning theory,
                                          so this modification can improve the performance of
                                          classification. In addition, the successive overrelaxation
                                          technique is used to solve the optimization problems to
                                          speed up the training procedure. Experimental results
                                          show the effectiveness of our method in both
                                          computation time and classification accuracy, and
                                          therefore confirm the above conclusion further.
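
As a rough sketch of the successive overrelaxation (SOR) technique that TBSVM training relies on, the following Python routine solves a small symmetric positive definite linear system; it illustrates the SOR update only and is not the authors' actual TWSVM optimizer.

    import numpy as np

    def sor_solve(A, b, omega=1.5, tol=1e-8, max_iter=10_000):
        """Solve A x = b (A symmetric positive definite) by successive
        overrelaxation: sweep the coordinates, blending each Gauss-Seidel
        update with the current value via the relaxation factor omega."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x

    # Example: a small SPD system; exact solution is (1/11, 7/11).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(sor_solve(A, b))

SOR converges for any relaxation factor 0 < omega < 2 on such systems, which is what makes it attractive for speeding up the training procedure.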
  25.        Feature Selection            This paper presents a new wrapper-based feature               Neural
             Using Probabilistic          selection method for support vector regression (SVR)          Networks
             Prediction of Support        using its probabilistic predictions. The method computes
             Vector Regression            the importance of a feature by aggregating the difference,
                                          over the feature space, of the conditional density
                                          functions of the SVR prediction with and without the
                                          feature. As the exact computation of this importance
                                          measure is expensive, two approximations are proposed.
                                          The effectiveness of the measure using these
                                          approximations, in comparison to several other existing
                                          feature selection methods for SVR, is evaluated on both
                                           artificial and real-world problems. The results of the
                                          experiments show that the proposed method generally
                                          performs better than, or at least as well as, the existing
                                          methods, with notable advantage when the dataset is
                                          sparse.
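
A hedged Python sketch of the wrapper idea: train an SVR, then score each feature by how far the predictions move when that feature is perturbed. This substitutes a simple permutation-based proxy for the paper's conditional-density difference, so treat it as illustrative only; scikit-learn's SVR is assumed.

    import numpy as np
    from sklearn.svm import SVR

    def svr_feature_importance(X, y, n_shuffles=10, seed=0):
        """Crude wrapper importance: train an SVR, then measure how much
        the predictions move when one feature is permuted (a stand-in for
        the paper's conditional-density difference)."""
        rng = np.random.default_rng(seed)
        model = SVR(kernel="rbf").fit(X, y)
        base = model.predict(X)
        scores = []
        for j in range(X.shape[1]):
            shifts = []
            for _ in range(n_shuffles):
                Xp = X.copy()
                Xp[:, j] = rng.permutation(Xp[:, j])
                shifts.append(np.mean(np.abs(model.predict(Xp) - base)))
            scores.append(np.mean(shifts))
        return np.array(scores)

    # Toy example: only the first feature matters.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
    print(svr_feature_importance(X, y))  # first score should dominate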
  26.        Energy-Efficient             In cooperative networks, transmitting and receiving           Networking
             Protocol for                 nodes recruit neighboring nodes to assist in
             Cooperative                  communication. We model a cooperative transmission
             Networks                     link in wireless networks as a transmitter cluster and a
                                          receiver cluster. We then propose a cooperative
                                          communication protocol for establishment of these
                                          clusters and for cooperative transmission of data. We
                                          derive the upper bound of the capacity of the protocol,
                                          and we analyze the end-to-end robustness of the
                                          protocol to data-packet loss, along with the tradeoff
                                          between energy consumption and error rate. The
                                          analysis results are used to compare the energy savings
                                          and the end-to-end robustness of our protocol with two
                                           non-cooperative schemes, as well as with another
                                          cooperative protocol published in the technical
                                          literature. The comparison results show that, when
                                          nodes are positioned on a grid, there is a reduction in the
                                          probability of packet delivery failure by two orders of
                                          magnitude for the values of parameters considered. Up
                                          to 80% in energy savings can be achieved for a grid
                                          topology, while for random node placement our
                                          cooperative protocol can save up to 40% in energy
                                          consumption relative to the other protocols. The
                                          reduction in error rate and the energy savings translate
                                          into increased lifetime of cooperative sensor networks.
  27.        Parametric Methods           This paper develops parametric methods to detect              Networking
             for Anomaly                  network anomalies using only aggregate traffic statistics,
             Detection in                 in contrast to other works requiring flow separation,
             Aggregate Traffic            even when the anomaly is a small fraction of the total
                                          traffic. By adopting simple statistical models for
                                          anomalous and background traffic in the time domain,
                                          one can estimate model parameters in real time, thus
                                          obviating the need for a long training phase or manual
                                          parameter tuning. The proposed bivariate parametric
                                          detection mechanism (bPDM) uses a sequential
                                          probability ratio test, allowing for control over the false
                                          positive rate while examining the tradeoff between
                                          detection time and the strength of an anomaly.
                                          Additionally, it uses both traffic-rate and packet-size
                                          statistics, yielding a bivariate model that eliminates most
                                          false positives. The method is analyzed using the bit-rate
                                          signal-to-noise ratio (SNR) metric, which is shown to be
                                          an effective metric for anomaly detection. The
                                          performance of the bPDM is evaluated in three ways.
                                          First, synthetically generated traffic provides for a
                                          controlled comparison of detection time as a function of
                                          the anomalous level of traffic. Second, the approach is
                                          shown to be able to detect controlled artificial attacks
                                          over the University of Southern California (USC), Los
                                          Angeles, campus network in varying real traffic mixes.
                                          Third, the proposed algorithm achieves rapid detection
                                          of real denial-of-service attacks as determined by the
                                          replay of previously captured network traces. The
                                          method developed in this paper is able to detect all
                                          attacks in these scenarios in a few seconds or less.
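
The core of the bPDM is a sequential probability ratio test. Below is a minimal Python sketch of Wald's SPRT for a Gaussian mean shift, a univariate simplification of the paper's bivariate model; the means, variance, and error rates are made-up illustration values.

    import math
    import random

    def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
        """Wald's sequential probability ratio test for a Gaussian mean
        shift: accumulate the log-likelihood ratio and stop at the first
        threshold crossing. Returns ('H1'|'H0', samples consumed)."""
        upper = math.log((1 - beta) / alpha)   # accept H1 (anomaly)
        lower = math.log(beta / (1 - alpha))   # accept H0 (background)
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", len(samples)

    # Example: per-interval traffic counts drift from mean 100 to 120.
    random.seed(0)
    trace = [random.gauss(120, 10) for _ in range(1000)]
    print(sprt(trace, mu0=100, mu1=120, sigma=10))

The alpha and beta thresholds give direct control over the false-positive rate, while the number of samples consumed before a decision exposes the detection-time/anomaly-strength tradeoff discussed above.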
   28.        Peering Equilibrium          It is generally acknowledged that interdomain peering         Networking
              Multipath Routing: A         links nowadays represent the main bottleneck of the
              Game Theory                  Internet, particularly because of a lack of coordination between
             Framework for                providers, which use independent and “selfish” routing
             Internet Peering             policies. We are interested in identifying possible “light”
             Settlements                  coordination strategies that would allow carriers to
                                          better control their peering links while preserving their
                                          independence and respective interests. We propose a
                                          robust multipath routing coordination framework for
                                          peering carriers, which relies on the multiple-exit
                                          discriminator (MED) attribute of Border Gateway
                                           Protocol (BGP) as a signaling medium. Our scheme relies
                                           on game-theoretic modeling, with a non-cooperative
                                           potential game considering both routing and congestion
                                          costs. Peering equilibrium multipath (PEMP)
                                          coordination policies can be implemented by selecting
                                          Pareto-superior Nash equilibria at each carrier. We
                                          compare different PEMP policies to BGP Multipath
                                          schemes by emulating a realistic peering scenario. Our
                                          results show that the routing cost can be decreased by
                                          roughly 10% with PEMP. We also show that the stability
                                          of routes can be significantly improved and that
                                          congestion can be practically avoided on the peering
                                          links. Finally, we discuss practical implementation
                                           aspects and extend the model to multiple players,
                                          highlighting the possible incentives for the resulting
                                          extended peering framework.
  29.        Impact of File               Traditionally, it had been assumed that the efficiency        Networking
             Arrivals and                 requirements of TCP dictate that the buffer size at the
             Departures on Buffer         router must be of the order of the bandwidth-delay (C ×
             Sizing in Core               RTT) product. Recently, this assumption was questioned
             Routers                      in a number of papers, and the rule was shown to be
                                          conservative for certain traffic models. In particular, by
                                          appealing to statistical multiplexing, it was shown that
                                          on a router with N long-lived connections, buffers of size
                                           O((C × RTT)/√N) or even O(1) are sufficient. In this
                                          paper, we reexamine the buffer-size requirements of
                                          core routers when flows arrive and depart. Our
                                          conclusion is as follows: If the core-to-access-speed ratio
                                          is large, then O(1) buffers are sufficient at the core
                                          routers; otherwise, larger buffer sizes do improve the
                                          flow-level performance of the users. From a modeling
                                          point of view, our analysis offers two new insights. First,
                                          it may not be appropriate to derive buffer-sizing rules by
                                          studying a network with a fixed number of users. In fact,
                                          depending upon the core-to-access-speed ratio, the
                                          buffer size itself may affect the number of flows in the
                                          system, so these two parameters (buffer size and
                                          number of flows in the system) should not be treated as
                                          independent quantities. Second, in the regime where the
                                          core-to-access-speed ratio is large, we note that the O(1)
                                          buffer sizes are sufficient for good performance and that
                                          no loss of utilization results, as previously believed.
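
A small arithmetic illustration of the two buffer-sizing rules contrasted above, using hypothetical link parameters:

    def buffer_rules(capacity_bps, rtt_s, n_flows, pkt_bytes=1500):
        """Compare the classic bandwidth-delay rule with the small-buffer
        O(C*RTT/sqrt(N)) rule, both expressed in packets."""
        bdp_bits = capacity_bps * rtt_s
        classic = bdp_bits / (8 * pkt_bytes)
        small = classic / n_flows ** 0.5
        return classic, small

    # Example: a 10 Gb/s link, 100 ms RTT, 10,000 long-lived flows.
    classic, small = buffer_rules(10e9, 0.1, 10_000)
    print(f"classic rule: {classic:,.0f} pkts, sqrt(N) rule: {small:,.0f} pkts")

For these (made-up) numbers the classic rule calls for roughly 83,000 packets of buffering, while the statistical-multiplexing rule reduces this to about 833, which is the gap the paper's flow-level analysis revisits.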
  30.        Dynamic                      Traffic monitoring is a critical network operation for the    Networking
             measurement-aware            purpose of traffic accounting, debugging or
             routing in practice          troubleshooting, forensics, and traffic engineering.
                                          Existing techniques for traffic monitoring, however, tend
                                          to be suboptimal due to poor choice of monitor location
                                          or constantly evolving monitoring objectives and traffic
                                          characteristics. One way to counteract these limitations
                                          is to use routing as a degree of freedom to enhance
                                          monitoring efficacy, which we refer to as measurement-
                                          aware routing. Traffic sub-populations can be routed
                                          (rerouted) on the fly to optimally leverage existing
                                          monitoring infrastructures. Implementing dynamic
                                           measurement-aware routing (DMR) in practice is riddled
                                          with challenges. Three major challenges are how to
                                          dynamically assess the importance of traffic flows; how
                                          to aggregate flows (and hence take a common action for
                                          them) in order to conserve routing table entries; and
                                          how to achieve traffic routing/rerouting in a manner that
                                          is least disruptive to normal network performance while
                                          maximizing the measurement utility. This article takes a
                                          closer look at these challenges and discusses how they
                                          manifest for different types of networks. Through an
                                          OpenFlow prototype, we show how DMR can be applied
                                          in enterprise networks. Using global iceberg detection
                                          and capture as a driving application, we demonstrate
                                          how our solutions successfully route suspected iceberg
                                          flows to a DPI box for further processing, while
                                          preserving balanced load distribution in the overall
                                          network.
  31.        Measurement and              Through measurement study, we discover an interesting         Networking
             diagnosis of address         phenomenon, P2P address misconfiguration, in which a
             misconfigured P2P            large number of peers send P2P file downloading
             traffic                      requests to a ??random?? target on the Internet. Through
                                          measuring three large datasets spanning four years and
                                          across five different /8 networks, we find address-
                                          misconfigured P2P traffic on average contributes 38.9
                                          percent of Internet background radiation, increasing by
                                          more than 100 percent every year. To detect and
                                           diagnose such unwanted traffic, we design P2PScope,
                                          a measurement tool. After analyzing about 2 Tbytes of
                                          data and tracking millions of peers, we found that in all
                                          the P2P systems, address misconfiguration is caused by
                                          resource mapping contamination: the sources returned
                                          for a given file ID through P2P indexing are not valid.
                                          Different P2P systems have different reasons for such
                                          contamination. For eMule, we find that the root cause is
                                          mainly a network byte-order problem in the eMule
                                          Source Exchange protocol. For BitTorrent
                                          misconfiguration, one reason is that anti-P2P companies
                                          actively inject bogus peers into the P2P system. Another
                                          reason is that the KTorrent implementation has a byte-
                                          order problem.
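
To illustrate the kind of byte-order bug identified above, the following self-contained Python snippet shows how packing a peer's port in host (little-endian) order instead of network order yields a bogus peer address on the receiving side; the address and port are made-up examples, not traffic from the study.

    import socket
    import struct

    addr, port = "192.0.2.1", 6881
    packed_ip = socket.inet_aton(addr)

    # Correct: network byte order (big-endian) on the wire.
    good = struct.pack("!4sH", packed_ip, port)

    # Buggy peer: packs the port in little-endian host order instead.
    bad = struct.pack("<4sH", packed_ip, port)

    ip, p_good = struct.unpack("!4sH", good)
    _, p_bad = struct.unpack("!4sH", bad)
    print(socket.inet_ntoa(ip), p_good)  # 192.0.2.1 6881
    print(socket.inet_ntoa(ip), p_bad)   # 192.0.2.1 57626 <- bogus port

A peer that swaps the two port bytes this way advertises port 57626 instead of 6881, so download requests are sent to hosts that never ran the protocol, which is exactly how such traffic ends up as background radiation.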
  32.        Packet traffic: a good       The wireless sensor network (WSN) has emerged as a            Networking
             data source for              promising technology. In WSNs, sensor nodes are
             wireless sensor              distributedly deployed to collect interesting information
             network modeling             from the environment. Because of the mission of WSNs,
             and anomaly                  most node-wide as well as network-wide activities are
             detection                    manifested in packet traffic. As a result, packet traffic
                                          becomes a good data source for modeling sensor node as
                                          well as sensor network behaviors. In this article, the
                                          methodology of modeling node and network behavior
                                          profiles using packet traffic is exemplified. In addition,
                                          node as well as network anomalies are shown to be
                                          detectable by monitoring the evolution of node/network
                                          behavior profiles.
  33.        Experiences of               Since the early days of the Internet, network traffic         Networking
             Internet traffic             monitoring has always played a strategic role in
             monitoring with tstat        understanding and characterizing users?? activities. In
                                          this article, we present our experience in engineering
                                          and deploying Tstat, an open source passive monitoring
                                          tool that has been developed in the past 10 years. Started
                                          as a scalable tool to continuously monitor packets that
                                          flow on a link, Tstat has evolved into a complex
                                          application that gives network researchers and
                                          operators the possibility to derive extended and complex
                                          measurements thanks to advanced traffic classifiers.
                                          After discussing Tstat capabilities and internal design,
                                          we present some examples of measurements collected
                                          deploying Tstat at the edge of several ISP networks in
                                          past years. While other works report a continuous
                                          decline of P2P traffic with streaming and file hosting
                                          services rapidly increasing in popularity, the results
                                          presented in this article picture a different scenario.
                                           First, the decline of P2P traffic has stopped, and in the
                                           last months of 2010 there was a counter-tendency of
                                           growing P2P traffic over UDP, so the common belief that
                                           UDP traffic is negligible no longer holds. Furthermore, streaming
                                          and file hosting applications have either stabilized or are
                                          experiencing decreasing traffic shares. We then discuss
                                          the scalability issues software-based tools have to cope
                                          with when deployed in real networks, showing the
                                          importance of properly identifying bottlenecks.
  34.        Network traffic              Modern computer networks are increasingly pervasive,          Networking
             monitoring, analysis         complex, and ever-evolving due to factors like enormous
             and anomaly                  growth in the number of network users, continuous
              detection [Guest             appearance of new network applications, increasing amounts
             Editorial]                   of data transferred, and diversity of user behaviors.
                                          Understanding and measuring such a network is a
                                          difficult yet vital task for network management and
                                          diagnosis. Network traffic monitoring, analysis, and
                                          anomaly detection provide useful tools for
                                          understanding network behavior and determining
                                          network performance and reliability so as to effectively
                                          and promptly troubleshoot and resolve various issues in
                                          practice.
  35.        Scheduling Grid              Grid scheduling is essential to Quality of Service          Network and
             Tasks in Face of             provisioning as well as to efficient management of grid     Service
             Uncertain                    resources. Grid scheduling usually considers the state of   Management
              Communication                the grid resources as well as application demands.
             Demands                      However, such demands are generally unknown for
                                          highly demanding applications, since these often
                                          generate data which will be transferred during their
                                          execution. Without appropriate assessment of these
                                          demands, scheduling decisions can lead to poor
                                          performance. Thus, it is of paramount importance to
                                          consider uncertainties in the formulation of a grid
                                          scheduling problem. This paper introduces the IPDT-
                                          FUZZY scheduler, a scheduler which considers the
                                          demands of grid applications with such uncertainties.
                                          The scheduler uses fuzzy optimization, and both
                                          computational and communication demands are
                                          expressed as fuzzy numbers. Its performance was
                                          evaluated, and it was shown to be attractive when
                                          communication requirements are uncertain. Its efficacy
                                          is compared, via simulation, to that of a deterministic
                                          counterpart scheduler and the results reinforce its
                                          adequacy for dealing with the lack of accuracy in the
                                          estimation of communication demands.
  36.        Improving                    Dynamic application placement for clustered web             Network and
             Application                  applications heavily influences system performance and      Service
             Placement for                quality of user experience. Existing approaches claim       Management
             Cluster-Based Web            that they strive to maximize the throughput, keep
             Applications                 resource utilization balanced across servers, and
                                          minimize the start/stop cost of application instances.
                                          However, they fail to minimize the worst case of server
                                          utilization; the load balancing performance is not
                                           optimal. What's more, some applications need to
                                           communicate with each other; we call these dependent
                                           applications, and the network cost between them should
                                           also be taken into consideration. In this paper, we
                                          investigate how to minimize the resource utilization of
                                          servers in the worst case, aiming at improving load
                                          balancing among clustered servers. Our contribution is
                                           two-fold. First, we propose and define a new optimization
                                           objective: limiting the worst case of each individual
                                          server's utilization, formulated by a min-max problem. A
                                          novel framework based on binary search is proposed to
                                          detect an optimal load balancing solution. Second, we
                                          define system cost as the weighted combination of both
                                          placement change and inter-application communication
                                          cost. By maximizing the number of instances of
                                          dependent applications that reside in the same set of
                                          servers, the basic load-shifting and placement-change
                                          procedures are enhanced to minimize whole system cost.
                                          Extensive experiments have been conducted and
                                          effectively demonstrate that: 1) the proposed framework
                                          achieves a good allocation for clustered web
                                          applications. In other words, requests are evenly
                                          allocated among servers, and throughput is still
                                           maximized; 2) the total system cost remains at a low
                                          level; 3) our algorithm has the capacity of approximating
                                          an optimal solution within polynomial time and is
                                          promising for practical implementation in real
                                          deployments.
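
A minimal Python sketch of the min-max idea: binary-search the smallest worst-case utilization cap, using a greedy first-fit feasibility check as a stand-in for the paper's placement test. The demands and capacities below are made-up values.

    def feasible(demands, capacity, cap):
        """Greedy first-fit check: can all demands be packed so that no
        server exceeds utilization `cap`? (A simple stand-in for the
        paper's placement feasibility test.)"""
        loads = [0.0] * len(capacity)
        for d in sorted(demands, reverse=True):
            for i, c in enumerate(capacity):
                if loads[i] + d <= cap * c:
                    loads[i] += d
                    break
            else:
                return False
        return True

    def min_max_utilization(demands, capacity, eps=1e-3):
        """Binary-search the smallest worst-case utilization cap."""
        lo, hi = 0.0, 1.0
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if feasible(demands, capacity, mid):
                hi = mid
            else:
                lo = mid
        return hi

    # Four application demands on two servers of capacity 50 each.
    print(min_max_utilization([30, 20, 20, 10], [50, 50]))  # ~0.8

Each probe of the binary search only asks a yes/no feasibility question, which is what keeps the overall framework polynomial.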
  37.        Efficient Network            When a link or node fails, flows are detoured around the      Network and
             Modification to              failed portion, so the hop count of flows and the link load   Service
             Improve QoS                  could change dramatically as a result of the failure. As      Management
             Stability at Failure         real-time traffic such as video or voice increases on the
                                          Internet, ISPs are required to provide stable quality as
                                          well as connectivity at failures. For ISPs, how to
                                          effectively improve the stability of these qualities at
                                          failures with the minimum investment cost is an
                                          important issue, and they need to effectively select a
                                          limited number of locations to add link facilities. In this
                                          paper, efficient design algorithms to select the locations
                                          for adding link facilities are proposed and their
                                          effectiveness is evaluated using the actual backbone
                                          networks of 36 commercial ISPs.
  38.        Spectral Models for          In network measurement systems, packet sampling               Network and
             Bitrate Measurement          techniques are usually adopted to reduce the overall          Service
             from Packet Sampled          amount of data to collect and process. Being based on a       Management
             Traffic                      subset of packets, they introduce estimation errors that
                                          have to be properly counteracted by using a fine tuning
                                          of the sampling strategy and sophisticated inversion
                                          methods. This problem has been deeply investigated in
                                          the literature with particular attention to the statistical
                                          properties of packet sampling and to the recovery of the
                                          original network measurements. Herein, we propose a
                                          novel approach to predict the energy of the sampling
                                           error in the real-time estimation of traffic bitrate, based
                                          on spectral analysis in the frequency domain. We start by
                                          demonstrating that the error introduced by packet
                                          sampling can be modeled as an aliasing effect in the
                                          frequency domain. Then, we derive closed-form
                                          expressions for the Signal-to-Noise Ratio (SNR) to
                                          predict the distortion of traffic bitrate estimates over
                                          time. The accuracy of the proposed SNR metric is
                                          validated by means of real packet traces. Furthermore, a
                                          comparison with respect to an analogous SNR expression
                                          derived using classic stochastic tools is proposed,
                                           showing that the frequency-domain approach provides
                                           higher accuracy when traffic rate measurements are
                                           carried out at fine time granularity.
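
As a hedged illustration, the following Python snippet builds a bitrate estimate from p-sampled packets and measures the resulting empirical SNR against the true signal. The packet sizes, sampling rate, and interval length are arbitrary stand-ins; this simulates the effect numerically rather than reproducing the paper's closed-form expressions.

    import numpy as np

    def sampled_bitrate_snr(pkt_bits, p=0.1, seed=0):
        """Empirical SNR of a bitrate estimate built from p-sampled
        packets: keep each packet with probability p, rescale by 1/p,
        and compare per-interval estimates against the true bitrate."""
        rng = np.random.default_rng(seed)
        keep = rng.random(pkt_bits.shape) < p
        estimate = np.where(keep, pkt_bits / p, 0.0)
        # Aggregate packets into fixed intervals of 100 packets each.
        true_rate = pkt_bits.reshape(-1, 100).sum(axis=1)
        est_rate = estimate.reshape(-1, 100).sum(axis=1)
        err = est_rate - true_rate
        return 10 * np.log10(np.sum(true_rate ** 2) / np.sum(err ** 2))

    pkts = np.random.default_rng(1).integers(400, 12_000, size=100_000).astype(float)
    print(f"SNR at p=0.1: {sampled_bitrate_snr(pkts):.1f} dB")

Shrinking the aggregation interval (finer time granularity) lowers the measured SNR, which is the distortion the paper's spectral model predicts analytically.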
  39.        Vulnerability                Systems proposed in academic research have so far             Security &
             Detection Systems:           failed to make a significant impact on real-world             Privacy
             Think Cyborg, Not            vulnerability detection. Most software bugs are still
             Robot                        found by methods with little input from static-analysis
                                          and verification research. These research areas could
                                          have a significant impact on software security, but first
                                          we need a shift in research goals and approaches. We
                                          need systems that incorporate human code auditors'
                                          knowledge and abilities, and we need evaluation
                                          methods that actually test proposed systems' usability in
                                          real situations. Without changes, academic research will
                                          continue to be ignored by the security community, and
                                          opportunities to build better tools for finding bugs and
                                          understanding software will be missed.
  40.        Dynamic QoS                  Service-based systems that are dynamically composed at       Software
             Management and               runtime to provide complex, adaptive functionality are       Engineering
             Optimization in              currently one of the main development paradigms in
             Service-Based                software engineering. However, the Quality of Service
             Systems                      (QoS) delivered by these systems remains an important
                                          concern, and needs to be managed in an equally adaptive
                                          and predictable way. To address this need, we introduce
                                          a novel, tool-supported framework for the development
                                          of adaptive service-based systems called QoSMOS (QoS
                                          Management and Optimization of Service-based
                                          systems). QoSMOS can be used to develop service-based
                                          systems that achieve their QoS requirements through
                                          dynamically adapting to changes in the system state,
                                          environment, and workload. QoSMOS service-based
                                          systems translate high-level QoS requirements specified
                                          by their administrators into probabilistic temporal logic
                                          formulae, which are then formally and automatically
                                          analyzed to identify and enforce optimal system
                                          configurations. The QoSMOS self-adaptation mechanism
                                          can handle reliability and performance-related QoS
                                          requirements, and can be integrated into newly
                                          developed solutions or legacy systems. The effectiveness
                                          and scalability of the approach are validated using
                                          simulations and a set of experiments based on an
                                          implementation of an adaptive service-based system for
                                          remote medical assistance.
  41.        Seeking Quality of           Ranking and optimization of web service compositions         Knowledge
             Web Service                  represent challenging areas of research with significant     and Data
             Composition in a             implications for the realization of the “Web of Services”    Engineering
             Semantic Dimension           vision. “Semantic web services” use formal semantic
                                          descriptions of web service functionality and interface to
                                          enable automated reasoning over web service
                                          compositions. To judge the quality of the overall
                                          composition, for example, we can start by calculating the
                                          semantic similarities between outputs and inputs of
                                          connected constituent services, and aggregate these
                                          values into a measure of semantic quality for the
                                          composition. This paper takes a specific interest in
                                          combining semantic and nonfunctional criteria such as
                                          quality of service (QoS) to evaluate quality in web
                                          services composition. It proposes a novel and extensible
                                          model balancing the new dimension of semantic quality
                                          (as a functional quality metric) with a QoS metric, and
                                          using them together as ranking and optimization criteria.
                                          It also demonstrates the utility of Genetic Algorithms to
                                          allow optimization within the context of a large number
                                          of services foreseen by the “Web of Services” vision. We
                                          test the performance of the overall approach using a set
                                          of simulation experiments, and discuss its advantages
                                          and weaknesses.
   42.        Mining Cluster-Based         Research on Location-Based Services (LBS) has been           Knowledge
             Temporal Mobile              emerging in recent years due to a wide range of potential    and Data
             Sequential Patterns          applications. One of the active topics is the mining and     Engineering
             in Location-Based            prediction of mobile movements and associated
             Service                      transactions. Most of existing studies focus on
             Environments                 discovering mobile patterns from the whole logs.
                                           However, this kind of pattern may not be precise
                                           enough for prediction, since mobile behaviors that differ
                                           among users and across temporal periods are not
                                          considered. In this paper, we propose a novel algorithm,
                                          namely, Cluster-based Temporal Mobile Sequential
                                          Pattern Mine (CTMSP-Mine), to discover the Cluster-
                                          based Temporal Mobile Sequential Patterns (CTMSPs).
                                          Moreover, a prediction strategy is proposed to predict
                                          the subsequent mobile behaviors. In CTMSP-Mine, user
                                          clusters are constructed by a novel algorithm named
                                          Cluster-Object-based Smart Cluster Affinity Search
                                          Technique (CO-Smart-CAST) and similarities between
                                          users are evaluated by the proposed measure, Location-
                                          Based Service Alignment (LBS-Alignment). Meanwhile, a
                                          time segmentation approach is presented to find
                                          segmenting time intervals where similar mobile
                                           characteristics exist. To the best of our knowledge, this is the
                                          first work on mining and prediction of mobile behaviors
                                          with considerations of user relations and temporal
                                          property simultaneously. Through experimental
                                          evaluation under various simulated conditions, the
                                          proposed methods are shown to deliver excellent
                                          performance.
  43.        Locally Consistent           Previous studies have demonstrated that document             Knowledge
             Concept                      clustering performance can be improved significantly in      and Data
             Factorization for            lower dimensional linear subspaces. Recently, matrix         Engineering
             Document Clustering          factorization-based techniques, such as Nonnegative
                                          Matrix Factorization (NMF) and Concept Factorization
                                          (CF), have yielded impressive results. However, both of
                                           them effectively see only the global Euclidean geometry,
                                          whereas the local manifold geometry is not fully
                                          considered. In this paper, we propose a new approach to
                                          extract the document concepts which are consistent with
                                          the manifold geometry such that each concept
                                          corresponds to a connected component. Central to our
                                          approach is a graph model which captures the local
                                          geometry of the document submanifold. Thus, we call it
                                          Locally Consistent Concept Factorization (LCCF). By
                                          using the graph Laplacian to smooth the document-to-
                                          concept mapping, LCCF can extract concepts with respect
                                          to the intrinsic manifold structure and thus documents
                                          associated with the same concept can be well clustered.
                                          The experimental results on TDT2 and Reuters-21578
                                          have shown that the proposed approach provides a
                                          better representation and achieves better clustering
                                          results in terms of accuracy and mutual information.
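
A minimal Python sketch of the graph model at the heart of LCCF: build a k-nearest-neighbor affinity graph over document vectors and form the unnormalized Laplacian L = D - W used for smoothing the document-to-concept mapping. The cosine affinity and the value of k are common choices assumed for illustration.

    import numpy as np

    def knn_graph_laplacian(X, k=5):
        """Build a symmetric k-NN affinity graph over the row vectors of
        X (cosine similarity) and return its unnormalized Laplacian
        L = D - W, the regularizer used to keep the mapping smooth along
        the document submanifold."""
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        S = Xn @ Xn.T
        np.fill_diagonal(S, -np.inf)          # exclude self-similarity
        W = np.zeros_like(S)
        idx = np.argsort(-S, axis=1)[:, :k]   # top-k neighbors per row
        for i, nbrs in enumerate(idx):
            W[i, nbrs] = S[i, nbrs]
        W = np.maximum(W, W.T)                # symmetrize
        L = np.diag(W.sum(axis=1)) - W
        return L

    X = np.abs(np.random.default_rng(0).normal(size=(20, 50)))  # 20 "documents"
    L = knn_graph_laplacian(X)
    print(L.shape, np.allclose(L, L.T))  # (20, 20) True

Penalizing a concept matrix V with the trace term Tr(V.T @ L @ V) forces nearby documents on the graph to share similar concept representations, which is the local-consistency property the abstract describes.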
  44.        Knowledge                    Service mashup is the act of integrating the resulting       Knowledge
             Discovery in Services        data of two complementary software services into a           and Data
             (KDS): Aggregating           common picture. Such an approach is promising with           Engineering
             Software Services to         respect to the discovery of new types of knowledge.
             Discover Enterprise          However, before service mashup routines can be
             Mashups                      executed, it is necessary to predict which services (of an
                                          open repository) are viable candidates. Similar to
                                          Knowledge Discovery in Databases (KDD), we introduce
                                          the Knowledge Discovery in Services (KDS) process that
                                          identifies mashup candidates. In this work, the KDS
                                          process is specialized to address a repository of open
                                          services that do not contain semantic annotations. In
                                          these situations, specialized techniques are required to
                                          determine equivalences among open services with
                                          reasonable precision. This paper introduces a bottom-up
                                          process for KDS that adapts to the environment of
                                          services for which it operates. Detailed experiments are
                                          discussed that evaluate KDS techniques on an open
                                          repository of services from the Internet and on a
                                          repository of services created in a controlled
                                          environment.
  45.        Design and                   The intrusion response component of an overall                Knowledge
             Implementation of            intrusion detection system is responsible for issuing a       and Data
             an Intrusion                 suitable response to an anomalous request. We propose         Engineering
             Response System for          the notion of database response policies to support our
             Relational Databases         intrusion response system tailored for a DBMS. Our
                                          interactive response policy language makes it very easy
                                          for the database administrators to specify appropriate
                                          response actions for different circumstances depending
                                          upon the nature of the anomalous request. The two main
                                          issues that we address in context of such response
                                          policies are that of policy matching, and policy
                                          administration. For the policy matching problem, we
                                          propose two algorithms that efficiently search the policy
                                          database for policies that match an anomalous request.
                                          We also extend the PostgreSQL DBMS with our policy
                                          matching mechanism, and report experimental results.
                                          The experimental evaluation shows that our techniques
                                          are very efficient. The other issue that we address is that
                                          of administration of response policies to prevent
                                          malicious modifications to policy objects from legitimate
                                          users. We propose a novel Joint Threshold
                                          Administration Model (JTAM) that is based on the
                                          principle of separation of duty. The key idea in JTAM is
                                          that a policy object is jointly administered by at least k
                                           database administrators (DBAs); that is, any modification
                                          made to a policy object will be invalid unless it has been
                                          authorized by at least k DBAs. We present design details
                                          of JTAM which is based on a cryptographic threshold
                                          signature scheme, and show how JTAM prevents
                                          malicious modifications to policy objects from
                                          authorized users. We also implement JTAM in the
                                          PostgreSQL DBMS, and report experimental results on
                                          the efficiency of our techniques.
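
JTAM's k-of-n administration can be illustrated with any threshold primitive. The sketch below uses Shamir secret sharing rather than the paper's cryptographic threshold signature scheme, purely to show the "at least k DBAs" principle; the prime field and all parameters are toy values.

    import random

    P = 2_147_483_647  # a Mersenne prime, used as a toy finite field

    def make_shares(secret, k, n, seed=0):
        """Shamir k-of-n sharing: hide `secret` as the constant term of a
        random degree-(k-1) polynomial and hand out n points on it."""
        rnd = random.Random(seed)
        coeffs = [secret] + [rnd.randrange(P) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0: any k shares reconstruct the
        secret; fewer than k reveal essentially nothing."""
        total = 0
        for xj, yj in shares:
            num = den = 1
            for xm, _ in shares:
                if xm != xj:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    shares = make_shares(secret=123456, k=3, n=5)
    print(recover(shares[:3]))              # 123456: any 3 of 5 DBAs suffice
    print(recover(shares[:2]) == 123456)    # almost surely False: 2 are not enough

The separation-of-duty property falls out directly: a policy change authorized by fewer than k administrators cannot produce a valid reconstruction, so a single malicious insider cannot tamper with policy objects alone.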
   46.        Automatic Discovery          An individual is typically referred to by numerous name       Knowledge
             of Personal Name             aliases on the web. Accurate identification of aliases of a   and Data
             Aliases from the Web         given person name is useful in various web related tasks      Engineering
                                          such as information retrieval, sentiment analysis,
                                          personal name disambiguation, and relation extraction.
                                          We propose a method to extract aliases of a given
                                          personal name from the web. Given a personal name, the
                                          proposed method first extracts a set of candidate aliases.
                                          Second, we rank the extracted candidates according to
                                          the likelihood of a candidate being a correct alias of the
                                          given name. We propose a novel, automatically extracted
                                          lexical pattern-based approach to efficiently extract a
                                          large set of candidate aliases from snippets retrieved
                                          from a web search engine. We define numerous ranking
                                          scores to evaluate candidate aliases using three
                                          approaches: lexical pattern frequency, word co-
                                          occurrences in an anchor text graph, and page counts on
                                          the web. To construct a robust alias detection system, we
                                          integrate the different ranking scores into a single
                                          ranking function using ranking support vector machines.
                                          We evaluate the proposed method on three data sets: an
                                          English personal names data set, an English place names
                                          data set, and a Japanese personal names data set. The
                                          proposed method outperforms numerous baselines and
                                          previously proposed name alias extraction methods,
                                          achieving a statistically significant mean reciprocal rank
                                          (MRR) of 0.67. Experiments carried out using location
                                          names and Japanese personal names suggest the
                                          possibility of extending the proposed method to extract
                                          aliases for different types of named entities, and for
                                          different languages. Moreover, the aliases extracted
                                          using the proposed method are successfully utilized in an
                                          information retrieval task and improve recall by 20
                                          percent in a relation-detection task.
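
A hedged Python sketch of combining several ranking scores with a ranking SVM via the standard pairwise transform (difference vectors labeled by preferred order). scikit-learn's LinearSVC is assumed as the linear SVM, and the toy features and relevance labels are made up for illustration.

    import numpy as np
    from sklearn.svm import LinearSVC

    def rank_svm_fit(features, relevance):
        """Pairwise ranking SVM: turn (candidate, relevance) data into
        difference vectors labeled by which candidate should rank higher,
        then fit a linear SVM. The learned weights combine the individual
        ranking scores into a single ranking function."""
        diffs, labels = [], []
        for i in range(len(features)):
            for j in range(len(features)):
                if relevance[i] > relevance[j]:
                    diffs.append(features[i] - features[j])
                    labels.append(1)
                    diffs.append(features[j] - features[i])
                    labels.append(-1)
        clf = LinearSVC(C=1.0).fit(np.array(diffs), np.array(labels))
        return clf.coef_.ravel()

    # Toy data: 3 score columns (pattern freq, co-occurrence, page counts).
    rng = np.random.default_rng(0)
    X = rng.random((30, 3))
    rel = (2 * X[:, 0] + X[:, 1] > 1.2).astype(int)  # hidden preference
    w = rank_svm_fit(X, rel)
    print("combined score weights:", w)

At query time, each candidate alias is scored by the dot product of its score vector with the learned weights, and candidates are ranked by that single value.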
  47.        Classification and           Most existing data stream classification techniques             Knowledge
             Novel Class Detection        ignore one important aspect of stream data: arrival of a        and Data
             in Concept-Drifting          novel class. We address this issue and propose a data           Engineering
             Data Streams under           stream classification technique that integrates a novel
             Time Constraints             class detection mechanism into traditional classifiers,
                                          enabling automatic detection of novel classes before the
                                           true labels of the novel class instances arrive. The novel
                                           class detection problem becomes more challenging in the
                                          presence of concept-drift, when the underlying data
                                          distributions evolve in streams. In order to determine
                                          whether an instance belongs to a novel class, the
                                          classification model sometimes needs to wait for more
                                          test instances to discover similarities among those
                                          instances. A maximum allowable wait time Tc is imposed
                                          as a time constraint to classify a test instance.
                                          Furthermore, most existing stream classification
                                          approaches assume that the true label of a data point can
                                          be accessed immediately after the data point is classified.
                                          In reality, a time delay Tl is involved in obtaining the true
                                          label of a data point since manual labeling is time
                                          consuming. We show how to make fast and correct
                                          classification decisions under these constraints and
                                          apply them to real benchmark data. Comparison with
                                          state-of-the-art stream classification techniques proves
                                          the superiority of our approach.
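
A loose Python sketch of the time-constrained decision described above (not the paper's algorithm: the centroid model and the RADIUS and Q_MIN parameters are simplified placeholders):

```python
from collections import deque
import numpy as np

TC = 50        # maximum allowable wait time (Tc) per buffered instance
Q_MIN = 3      # outliers needed to declare a novel class (assumed)
RADIUS = 1.0   # decision boundary around known-class centroids (assumed)

centroids = {"A": np.array([0.0, 0.0]), "B": np.array([5.0, 5.0])}
buffer = deque()   # (timestamp, instance) pairs awaiting a decision

def classify(x, t):
    # 1) Try the existing model first.
    label, dist = min(((c, np.linalg.norm(x - m)) for c, m in centroids.items()),
                      key=lambda p: p[1])
    if dist <= RADIUS:
        return label
    # 2) Outlier: buffer it and look for a cohesive group of outliers.
    buffer.append((t, x))
    # Instances older than Tc leave the buffer; the full method would then
    # assign them to the closest existing class rather than drop them.
    while buffer and t - buffer[0][0] > TC:
        buffer.popleft()
    group = [y for _, y in buffer if np.linalg.norm(y - x) <= RADIUS]
    if len(group) >= Q_MIN:
        return "novel"
    return None    # decision deferred: wait (at most Tc) for more evidence

for t, x in enumerate([np.array([9.0, 9.0]), np.array([9.2, 9.1]),
                       np.array([8.9, 9.3]), np.array([0.1, 0.2])]):
    print(t, classify(x, t))
```
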
  48.        A Machine Learning           The Machine Learning (ML) field has gained                     Knowledge
             Approach for                 momentum in almost every domain of research and has            and Data
             Identifying Disease-         recently become a reliable tool in the medical                 Engineering
             Treatment Relations          domain. Automatic learning is
             in Short Texts               used in tasks such as medical decision support, medical
                                          imaging, protein-protein interaction, extraction of
                                          medical knowledge, and for overall patient management
                                          care. ML is envisioned as a tool by which computer-
                                        based systems can be integrated into the healthcare field
                                        to provide better, more efficient medical care. This
                                        paper describes a ML-based methodology for building an
                                        application that is capable of identifying and
                                        disseminating healthcare information. It extracts
                                        sentences from published medical papers that mention
                                        diseases and treatments, and identifies semantic
                                        relations that exist between diseases and treatments.
                                        Our evaluation results for these tasks show that the
                                        proposed methodology obtains reliable outcomes that
                                        could be integrated in an application to be used in the
                                        medical care domain. The potential value of this paper
                                        lies in the ML settings that we propose and in the fact
                                        that we outperform previous results on the same data
                                        set.
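
A hedged sketch of the two-step setup this abstract describes, with scikit-learn models standing in for the paper's exact ML settings; the sentences, labels and relation types are invented for illustration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = ["Drug X significantly reduced symptoms of disease Y.",
             "The weather was pleasant during the trial.",
             "Patients on drug Z developed severe rashes."]
informative = [1, 0, 1]                 # step 1: mentions diseases/treatments?
relations = ["cure", "side_effect"]     # step 2 labels for informative sentences

# Step 1: decide whether a sentence carries disease/treatment information.
step1 = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
step1.fit(sentences, informative)

# Step 2: label the semantic relation in the informative sentences only.
step2 = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
step2.fit([s for s, k in zip(sentences, informative) if k], relations)

new = "Drug X cured most patients with disease Y."
if step1.predict([new])[0]:
    print(step2.predict([new])[0])
```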
                                                 MATLAB 2011
  1.    Face Recognition by                  Information jointly contained in the image space, scale and         Image
        Exploring Information                orientation domains can provide rich clues that are not visible     Processing
        Jointly in Space, Scale and          in any one of these domains alone. The position, spatial
        Orientation                          frequency and orientation selectivity properties are believed to
                                             play an important role in visual perception. This paper proposes
                                             a novel face representation and recognition approach by
                                             exploring information jointly in image space, scale and
                                             orientation domains. Specifically, the face image is first
                                             decomposed into different scale and orientation responses by
                                             convolution with multiscale and multi-orientation Gabor filters.
                                             Second, local binary pattern analysis is used to describe the
                                             neighboring relationship not only in image space, but also in
                                             different scale and orientation responses. This way, information
                                             from different domains is explored to give a good face
                                             representation for recognition. Discriminant classification is then
                                             performed based upon weighted histogram intersection or
                                             conditional mutual information with linear discriminant analysis
                                             techniques. Extensive experimental results on FERET, AR, and
                                             FRGC ver 2.0 databases show the significant advantages of the
                                             proposed method over the existing ones.
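
The representation described above is straightforward to prototype. A minimal Python sketch (assumed parameters: 3 scales, 4 orientations, uniform 8-neighbour LBP; not the authors' code):

```python
import numpy as np
from skimage import data
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

image = data.camera().astype(float)

descriptor = []
for frequency in (0.1, 0.2, 0.4):                 # scales
    for theta in np.arange(0, np.pi, np.pi / 4):  # orientations
        # Gabor responses in a given scale/orientation channel.
        real, imag = gabor(image, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)
        # LBP describes the neighbouring relationship within the channel.
        lbp = local_binary_pattern(magnitude, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        descriptor.extend(hist)

print(len(descriptor))  # 3 scales x 4 orientations x 10-bin histogram = 120
```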

  2.    Detection of Architectural           We present methods for the detection of sites of architectural        Image
        Distortion in Prior                  distortion in prior mammograms of interval-cancer cases. We           Processing
        Mammograms                           hypothesize that screening mammograms obtained prior to the
                                             detection of cancer could contain subtle signs of early stages of
                                             breast cancer, in particular, architectural distortion. The
                                             methods are based upon Gabor filters, phase portrait analysis, a
                                             novel method for the analysis of the angular spread of power,
                                             fractal analysis, Laws' texture energy measures derived from
                                             geometrically transformed regions of interest (ROIs), and
                                             Haralick's texture features. With Gabor filters and phase portrait
                                             analysis, 4224 ROIs were automatically obtained from 106 prior
                                             mammograms of 56 interval-cancer cases, including 301 true-
                                             positive ROIs related to architectural distortion, and from 52
                                             mammograms of 13 normal cases. For each ROI, the fractal
                                             dimension, the entropy of the angular spread of power, 10 Laws'
                                             measures, and Haralick's 14 features were computed. The areas
                                             under the receiver operating characteristic curves obtained
                                             using the features selected by stepwise logistic regression and
                                             the leave-one-ROI-out method are 0.76 with the Bayesian
                                             classifier, 0.75 with Fisher linear discriminant analysis, and 0.78
                                             with a single-layer feed-forward neural network. Free-response
                                             receiver operating characteristics indicated sensitivities of 0.80
                                             and 0.90 at 5.8 and 8.1 false positives per image, respectively,
                                             with the Bayesian classifier and the leave-one-image-out
                                             method.
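
One ingredient named above, Laws' texture energy measures, is easy to illustrate; the Gabor/phase-portrait, fractal and Haralick steps are omitted, and the ROI here is random stand-in data (the paper's 10 Laws' measures combine these 16 raw energies, a detail skipped here):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1, 4, 6, 4, 1], float)    # level
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot
R5 = np.array([1, -4, 6, -4, 1], float)  # ripple

roi = np.random.default_rng(0).random((64, 64))
roi -= uniform_filter(roi, size=15)      # remove local mean (illumination)

energies = []
for a in (L5, E5, S5, R5):
    for b in (L5, E5, S5, R5):
        kernel = np.outer(a, b)                      # separable 2-D Laws mask
        filtered = convolve(roi, kernel, mode="nearest")
        energies.append(np.abs(filtered).mean())     # texture energy measure

print(len(energies))   # 16 raw Laws energy features for this ROI
```
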
  3.    Enhanced Assessment of the           With the widespread use of digital cameras, freehand wound            Image
        Wound-Healing Process by             imaging has become common practice in clinical settings. There        Processing
        Accurate Multiview Tissue            is however still a demand for a practical tool for accurate wound
        Classification                       healing assessment, combining dimensional measurements and
                                             tissue classification in a single user-friendly system. We achieved
                                             the first part of this objective by computing a 3-D model for
                                             wound measurements using uncalibrated vision techniques. We
                                             focus here on tissue classification from color and texture region
                                             descriptors computed after unsupervised segmentation. Due to
                                             perspective distortions, uncontrolled lighting conditions and
                                             viewpoints, wound assessments vary significantly between
                                             patient examinations. The main contribution of this paper is to
                                             overcome this drawback with a multiview strategy for tissue
                                             classification, relying on a 3-D model onto which tissue labels are
                                             mapped and classification results merged. The experimental
                                             classification tests demonstrate that enhanced repeatability and
                                             robustness are obtained and that metric assessment is achieved
                                             through real area and volume measurements and wound outline
                                             extraction. This innovative tool is intended for use not only in
                                             therapeutic follow-up in hospitals but also for telemedicine
                                             purposes and clinical research, where repeatability and accuracy
                                             of wound assessment are critical.
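
The multiview merging idea reduces, at its simplest, to fusing per-view tissue labels on the 3-D model. A toy Python sketch (facet ids, views and tissue labels are invented):

```python
from collections import Counter

# Each 3-D surface facet receives one tissue label per view; the labels
# are fused by majority vote to make the assessment view-independent.
votes = {  # facet id -> tissue labels predicted from each view
    0: ["granulation", "granulation", "slough"],
    1: ["necrosis", "slough", "slough"],
}
fused = {f: Counter(labels).most_common(1)[0][0] for f, labels in votes.items()}
print(fused)  # {0: 'granulation', 1: 'slough'}
```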

  4.         A New Supervised                This paper presents a new supervised method for blood vessel          Image
             Method for Blood Vessel         detection in digital retinal images. This method uses a neural        Processing
             Segmentation in Retinal         network (NN) scheme for pixel classification and computes a 7-D
             Images by Using Gray-
                                             vector composed of gray-level and moment invariants-based
             Level and Moment
             Invariants-Based Features       features for pixel representation. The method was evaluated on
                                             the publicly available DRIVE and STARE databases, widely used
                                             for this purpose, since they contain retinal images where the
                                             vascular structure has been precisely marked by experts.
                                             Method performance on both sets of test images is better than
                                             that of other existing solutions in the literature. The method proves
                                             especially accurate for vessel detection in STARE images. Its
                                             application to this database (even when the NN was trained on
                                             the DRIVE database) outperforms all analyzed segmentation
                                             approaches. Its effectiveness and robustness with different
                                             image conditions, together with its simplicity and fast
                                             implementation, make this blood vessel segmentation proposal
                                             suitable for retinal image computer analyses such as automated
                                             screening for early diabetic retinopathy detection.
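
A simplified sketch of the per-pixel scheme described above: local gray-level statistics plus moment invariants feed a small neural network. The window size, the exact feature set (the paper uses a specific 7-D vector) and the stand-in labels are assumptions:

```python
import numpy as np
from skimage.measure import moments_central, moments_hu, moments_normalized
from sklearn.neural_network import MLPClassifier

def pixel_features(window):
    feats = [window[window.shape[0] // 2, window.shape[1] // 2],
             window.mean(), window.std()]          # gray-level features
    hu = moments_hu(moments_normalized(moments_central(window)))
    feats.extend(hu[:2])                           # moment-invariant features
    return feats

rng = np.random.default_rng(0)
image = rng.random((32, 32))
labels = (image > 0.5).astype(int)                 # stand-in vessel ground truth

half = 4
X, y = [], []
for r in range(half, 32 - half):
    for c in range(half, 32 - half):
        X.append(pixel_features(image[r - half:r + half + 1,
                                      c - half:c + half + 1]))
        y.append(labels[r, c])

# NN scheme for pixel classification (architecture is an assumption).
clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```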

  5.    Graph Run-Length Matrices            The histopathological examination of tissue specimens is              Image
        for Histopathological Image          essential for cancer diagnosis and grading. However, this             Processing

        Segmentation                         examination is subject to a considerable amount of observer
                                             variability as it mainly relies on visual interpretation of
                                             pathologists. To alleviate this problem, it is very important to
                                             develop computational quantitative tools, for which image
                                             segmentation constitutes the core step. In this paper, we
                                             introduce an effective and robust algorithm for the
                                             segmentation of histopathological tissue images. This algorithm
                                             incorporates the background knowledge of the tissue
                                             organization into segmentation. For this purpose, it quantifies
                                             spatial relations of cytological tissue components by
                                             constructing a graph and uses this graph to define new texture
                                             features for image segmentation. This new texture definition
                                             makes use of the idea of gray-level run-length matrices.
                                             However, it considers the runs of cytological components on a
                                             graph to form a matrix, instead of considering the runs of pixel
                                             intensities. Working with colon tissue images, our experiments
                                             demonstrate that the texture features extracted from “graph
                                             run-length matrices” lead to high segmentation accuracies, also
                                             providing a reasonable number of segmented regions.
                                             Compared with four other segmentation algorithms, the results
                                             show that the proposed algorithm is more effective in
                                             histopathological image segmentation.
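
The "runs of components on a graph" idea can be shown in a few lines. A toy Python sketch (the tiny graph, its component labels and the path enumeration are illustrative only; texture features such as short- and long-run emphasis would then be derived from the matrix):

```python
import numpy as np

labels = ["nucleus", "stroma", "lumen"]
node_label = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 2}   # graph node -> component
paths = [[0, 1, 2, 3, 4, 5],                        # label runs: 2, 3, 1
         [2, 3, 4],                                 # label run: 3
         [5, 4, 0]]                                 # runs: 1, 1, 1

MAX_RUN = 4
grlm = np.zeros((len(labels), MAX_RUN), int)        # component x run length

# Count runs of identical component labels along each graph path, in place
# of the runs of pixel intensities used by classical run-length matrices.
for path in paths:
    run_label, run_len = node_label[path[0]], 1
    for node in path[1:]:
        if node_label[node] == run_label:
            run_len += 1
        else:
            grlm[run_label, min(run_len, MAX_RUN) - 1] += 1
            run_label, run_len = node_label[node], 1
    grlm[run_label, min(run_len, MAX_RUN) - 1] += 1

print(grlm)
```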

  6.    X-ray Categorization and             In this study we present an efficient image categorization and        Image
        Retrieval on the Organ and           retrieval system applied to medical image databases, in               Processing

        Pathology Level, Using               particular large radiograph archives. The methodology is based
                                             on local patch representation of the image content, using a “bag
        Patch-Based Visual Words
                                             of visual words” approach. We explore the effects of various
                                             parameters on system performance, and show best results using
                                             dense sampling of simple features with spatial content, and a
                                             nonlinear kernel-based support vector machine (SVM) classifier.
                                             In a recent international competition the system was ranked first
                                             in discriminating orientation and body regions in X-ray images. In
                                             addition to organ-level discrimination, we show an application to
                                             pathology-level categorization of chest X-ray data, the most
                                             popular examination in radiology. The system discriminates
                                             between healthy and pathological cases, and is also shown to
                                             successfully identify specific pathologies in a set of chest
                                             radiographs taken from a routine hospital examination. This is a
                                             first step towards similarity-based categorization, which has
                                             major clinical implications for computer-assisted diagnostics.
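
A compact sketch of the bag-of-visual-words pipeline described above, with raw patches in place of the paper's features and random stand-in images; patch size, vocabulary size and the RBF kernel are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))          # stand-in radiographs
y = rng.integers(0, 2, size=20)            # stand-in organ labels

def dense_patches(img, size=8, stride=8):
    # Dense sampling of local patches over the whole image.
    return np.array([img[r:r + size, c:c + size].ravel()
                     for r in range(0, img.shape[0] - size + 1, stride)
                     for c in range(0, img.shape[1] - size + 1, stride)])

# Learn a visual vocabulary by clustering patches from all training images.
vocab = KMeans(n_clusters=32, n_init=5, random_state=0)
vocab.fit(np.vstack([dense_patches(im) for im in images]))

def bow_histogram(img):
    # Quantize patches against the vocabulary and histogram the words.
    words = vocab.predict(dense_patches(img))
    return np.bincount(words, minlength=32) / len(words)

X = np.array([bow_histogram(im) for im in images])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # nonlinear kernel SVM
print(clf.score(X, y))
```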

  7.    Standard Deviation for               This letter proposes a new technique for restoring images             Image
        Obtaining the Optimal                distorted by random-valued impulse noise. The detection               Processing
        Direction in the Removal of          process is based on finding the optimum direction, by calculating
        Impulse Noise                        the standard deviation in different directions in the filtering
                                             window. The tested pixel is deemed original if it is similar to the
                                             pixels in the optimum direction. Extensive simulations show that
                                             the proposed technique delivers superior performance
                                             compared to other existing methods, especially at high noise
                                             rates.
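
The detection rule maps directly to code. A minimal Python sketch for a 3x3 window (the similarity threshold T and the use of the mean of the optimum direction are assumptions):

```python
import numpy as np

def is_original(window, T=20.0):
    c = window.shape[0] // 2
    centre = window[c, c]
    # The four directional lines through the window centre.
    lines = [window[c, :], window[:, c],
             np.diag(window), np.diag(np.fliplr(window))]
    # Drop the centre (tested) pixel itself from every directional line.
    lines = [np.delete(line, c) for line in lines]
    best = min(lines, key=np.std)      # optimum (smallest-deviation) direction
    return abs(centre - best.mean()) <= T

w = np.array([[10, 12, 250],
              [11, 255, 13],
              [12, 14, 11]], float)
print(is_original(w))  # False: centre (255) is far from the optimum direction
```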

  8.    Removal of High Density Salt         A modified decision based unsymmetrical trimmed median filter         Image
        and Pepper Noise Through             algorithm for the restoration of gray scale, and color images that    Processing
        Modified Decision Based              are highly corrupted by salt and pepper noise is proposed in this
        Unsymmetric Trimmed                  paper. The proposed algorithm replaces the noisy pixel by
        Median Filter                        the trimmed median value when the selected window contains
                                             pixel values other than 0 and 255; when all the pixel values in
                                             the window are 0 or 255, the noisy pixel is replaced by the mean
                                             of all the elements in the window. The proposed
                                             algorithm shows better results than the Standard Median Filter
                                             (MF), Decision Based Algorithm (DBA), Modified Decision Based
                                             Algorithm (MDBA), and Progressive Switched Median Filter
                                             (PSMF). The proposed algorithm is tested on different
                                             grayscale and color images and gives better Peak Signal-to-
                                             Noise Ratio (PSNR) and Image Enhancement Factor (IEF) values.
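
The replacement rule quoted above is simple to implement. A direct Python sketch (border handling and the noise-detection step are simplified; pixels equal to 0 or 255 are treated as noise):

```python
import numpy as np

def mdbutmf(img):
    out = img.copy().astype(float)
    padded = np.pad(img, 1, mode="edge").astype(float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] not in (0, 255):
                continue                    # not salt-and-pepper noise
            win = padded[r:r + 3, c:c + 3].ravel()
            # Unsymmetric trimming: discard the extreme values 0 and 255.
            trimmed = win[(win != 0) & (win != 255)]
            # Median of the trimmed window, or the window mean if every
            # value in the window is 0 or 255.
            out[r, c] = np.median(trimmed) if trimmed.size else win.mean()
    return out

noisy = np.array([[100,   0, 102],
                  [255,   0, 101],
                  [103,  99, 255]], dtype=np.uint8)
print(mdbutmf(noisy))
```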

  9.    IMAGE Resolution                     In this correspondence, the authors propose an image resolution        Image
        Enhancement by Using                 enhancement technique based on interpolation of the high               Processing
        Discrete and Stationary              frequency subband images obtained by discrete wavelet
        Wavelet Decomposition                transform (DWT) and the input image. The edges are enhanced
                                             by introducing an intermediate stage by using stationary wavelet
                                             transform (SWT). DWT is applied in order to decompose an input
                                             image into different subbands. Then the high frequency
                                             subbands as well as the input image are interpolated. The
                                             estimated high frequency subbands are then modified using the
                                             high frequency subbands obtained through SWT. All these
                                             subbands are then combined to generate a new high resolution image
                                             using the inverse DWT (IDWT). The quantitative and visual results
                                             show the superiority of the proposed technique over
                                             conventional and state-of-the-art image resolution enhancement
                                             techniques.
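
A hedged sketch of the pipeline using PyWavelets (the Haar wavelet, bicubic zoom and the use of the input image as the approximation subband follow the description above, but the filters and factors are simplified assumptions):

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
low_res = rng.random((64, 64))

# DWT splits the input into approximation and high-frequency subbands.
cA, (cH, cV, cD) = pywt.dwt2(low_res, "db1")

# SWT (undecimated) yields full-size high-frequency subbands that are used
# to correct the estimated (interpolated) DWT subbands.
(_, (sH, sV, sD)), = pywt.swt2(low_res, "db1", level=1)

# Interpolate the DWT subbands to full size and add the SWT correction.
cH2, cV2, cD2 = (zoom(b, 2, order=3) + s
                 for b, s in ((cH, sH), (cV, sV), (cD, sD)))

# The input image itself serves as the approximation subband, so the
# inverse DWT doubles the resolution.
high_res = pywt.idwt2((low_res, (cH2, cV2, cD2)), "db1")
print(high_res.shape)   # (128, 128)
```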

  10. Automatic Optic Disc                   Under the framework of computer-aided eye disease diagnosis,           Image
      Detection From Retinal                 this paper presents an automatic optic disc (OD) detection             Processing
      Images by a Line Operator              technique. The proposed technique makes use of the unique
                                             circular brightness structure associated with the OD, i.e., the OD
                                             usually has a circular shape and is brighter than the surrounding
                                             pixels, whose intensity gradually becomes darker with
                                             distance from the OD center. A line operator is designed to
                                             capture such circular brightness structure, which evaluates the
                                             image brightness variation along multiple line segments of
                                             specific orientations that pass through each retinal image pixel.
                                             The orientation of the line segment with the
                                             minimum/maximum variation has a specific pattern that can be
                                             used to locate the OD accurately. The proposed technique has
                                             been tested over four public datasets that include 130, 89, 40,
                                             and 81 images of healthy and pathological retinas, respectively.
                                             Experiments show that the designed line operator is tolerant to
                                             different types of retinal lesions and imaging artifacts, and an
                                             average OD detection accuracy of 97.4% is obtained.
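
A toy sketch of the line-operator idea: average the brightness along line segments of several orientations through every pixel, then map the spread between orientations. The segment length of 21 pixels and the 20 orientations are assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def line_operator(image, length=21, n_orient=20):
    rows, cols = np.indices(image.shape)
    offsets = np.arange(length) - length // 2
    responses = []
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        dr, dc = np.sin(theta) * offsets, np.cos(theta) * offsets
        # Average brightness along the oriented segment through each pixel.
        line = [map_coordinates(image, [rows + a, cols + b],
                                order=1, mode="nearest")
                for a, b in zip(dr, dc)]
        responses.append(np.mean(line, axis=0))
    responses = np.array(responses)
    # Spread between orientations: large near the bright circular OD region.
    return responses.max(axis=0) - responses.min(axis=0)

img = np.random.default_rng(0).random((40, 40))
print(line_operator(img).shape)   # per-pixel orientation-variation map
```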

  11. Wavelet-Based Image Texture            In this letter, we propose an efficient one-nearest-neighbor           Image
      Classification Using Local             classifier of texture via the contrast of local energy histograms of   Processing
      Energy Histograms                      all the wavelet subbands between an input texture patch and
                                             each sample texture patch in a given training set. In particular,
                                            the contrast is realized with a discrepancy measure which is just
                                            a sum of symmetrized Kullback-Leibler divergences between the
                                            input and sample local energy histograms on all the wavelet
                                            subbands. Various experiments demonstrate that the
                                            proposed method achieves satisfactory texture classification
                                            accuracy in comparison with several current state-of-the-art
                                            texture classification approaches.
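
The classifier described above fits in a few lines. A small Python sketch (the db2 wavelet, two decomposition levels, 16 histogram bins and the fixed energy range are assumptions, and the textures are random stand-ins):

```python
import numpy as np
import pywt

def energy_histograms(patch, bins=16):
    hists = []
    for detail in pywt.wavedec2(patch, "db2", level=2)[1:]:
        for sub in detail:                    # cH, cV, cD at each level
            energy = sub ** 2                 # local energy per subband
            # Fixed range (assumed) so histograms are comparable across patches.
            h, _ = np.histogram(energy, bins=bins, range=(0.0, 1.0))
            h = h.astype(float) + 1e-6        # avoid log(0) in the KL term
            hists.append(h / h.sum())
    return hists

def skl_distance(hs1, hs2):
    kl = lambda p, q: np.sum(p * np.log(p / q))
    # Sum of symmetrized Kullback-Leibler divergences over all subbands.
    return sum(kl(a, b) + kl(b, a) for a, b in zip(hs1, hs2))

rng = np.random.default_rng(0)
train = [(rng.random((32, 32)), "uniform"),
         (rng.random((32, 32)) ** 4, "spiky")]   # stand-in texture classes
query = rng.random((32, 32)) ** 4

qh = energy_histograms(query)
label = min(train, key=lambda s: skl_distance(qh, energy_histograms(s[0])))[1]
print(label)   # one-nearest-neighbour decision
```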

  12. A Ringing-Artifact                    This paper proposes a new ringing-artifact reduction                Image
       Reduction Method for                 method for image resizing in a block discrete cosine                Processing

       Block-DCT-Based Image                transform (DCT) domain. The proposed method reduces
       Resizing                             ringing artifacts without further blurring, whereas previous
                                            approaches must find a compromise between blurring and
                                            ringing artifacts. The proposed method consists of DCT-
                                            domain filtering and image-domain post-processing, which
                                            reduces ripples on smooth regions as well as overshoot
                                            near strong edges. By generating a mask map of the
                                            overshoot regions, we combine a ripple-reduced image
                                            and an overshoot-reduced image according to the mask
                                            map in the image domain to obtain a ringing-artifact
                                            reduced image. The experimental results show that the
                                            proposed method is computationally faster and produces
                                            visually finer images than previous ringing-artifact
                                            reduction approaches.
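
The final combination step can be illustrated with stand-in images. A toy Python sketch (the two "reduced" images are produced by trivial placeholder filters, and the gradient-percentile mask is an assumption; the paper builds its mask from overshoot regions in the DCT pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, binary_dilation

rng = np.random.default_rng(0)
resized = np.kron(rng.random((16, 16)), np.ones((4, 4)))  # stand-in resized image

ripple_reduced = gaussian_filter(resized, sigma=1.0)      # smooths ripples
overshoot_reduced = np.clip(resized, 0.05, 0.95)          # tames overshoot

# Mask: dilate strong-gradient (edge) regions, where overshoot concentrates.
grad = np.hypot(sobel(resized, axis=0), sobel(resized, axis=1))
mask = binary_dilation(grad > np.percentile(grad, 90), iterations=2)

# Blend the two images according to the mask in the image domain.
combined = np.where(mask, overshoot_reduced, ripple_reduced)
print(combined.shape)
```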

More Related Content

PDF
Effective Streaming of Clustered Sensor Data in Harsh Environment
PDF
EFFICIENT REBROADCASTING USING TRUSTWORTHINESS OF NODE WITH NEIGHBOUR KNOWLED...
PDF
(Paper) A Method for Overlay Network Latency Estimation from Previous Observa...
PDF
Delay jitter control for real time communication
PDF
Abstract
PDF
M.Phil Computer Science Mobile Computing Projects
PDF
IRJET- Energy Optimization in Wireless Sensor Networks using Trust-Aware Rout...
Effective Streaming of Clustered Sensor Data in Harsh Environment
EFFICIENT REBROADCASTING USING TRUSTWORTHINESS OF NODE WITH NEIGHBOUR KNOWLED...
(Paper) A Method for Overlay Network Latency Estimation from Previous Observa...
Delay jitter control for real time communication
Abstract
M.Phil Computer Science Mobile Computing Projects
IRJET- Energy Optimization in Wireless Sensor Networks using Trust-Aware Rout...

What's hot (18)

PDF
A Cross-Layer Based Multipath Routing Protocol To Improve QoS In Mobile Adhoc...
PDF
M.Phil Computer Science Networking Projects
PDF
IMPROVING PACKET DELIVERY RATIO WITH ENHANCED CONFIDENTIALITY IN MANET
DOCX
Literature review report
PDF
Hexagonal based Clustering for Reducing Rebroadcasts in Mobile Ad Hoc Networks
PDF
Project report on An Energy Efficient Routing Protocol in Wireless Sensor Net...
PDF
Implementation of energy efficient coverage aware routing protocol for wirele...
PDF
A2546035115
PDF
M phil-computer-science-networking-projects
PDF
Ijarcet vol-2-issue-7-2311-2318
PDF
Review on buffer management schemes for packet queues in wired & wireless net...
PDF
REAL-TIME ROUTING PROTOCOLS FOR WIRELESS SENSOR NETWORKS: A SURVEY
DOCX
DOTNET 2013 IEEE MOBILECOMPUTING PROJECT Optimal multicast capacity and delay...
PDF
A QUALITY OF SERVICE ARCHITECTURE FOR RESOURCE PROVISIONING AND RATE CONTROL ...
PDF
FINAL PROJECT REPORT
PDF
S09 S02 P04
PDF
Dynamic Routing for Data Integrity and Delay Differentiated Services in Wirel...
PDF
A Novel Message Driven Local Repair Algorithm for MANET
A Cross-Layer Based Multipath Routing Protocol To Improve QoS In Mobile Adhoc...
M.Phil Computer Science Networking Projects
IMPROVING PACKET DELIVERY RATIO WITH ENHANCED CONFIDENTIALITY IN MANET
Literature review report
Hexagonal based Clustering for Reducing Rebroadcasts in Mobile Ad Hoc Networks
Project report on An Energy Efficient Routing Protocol in Wireless Sensor Net...
Implementation of energy efficient coverage aware routing protocol for wirele...
A2546035115
M phil-computer-science-networking-projects
Ijarcet vol-2-issue-7-2311-2318
Review on buffer management schemes for packet queues in wired & wireless net...
REAL-TIME ROUTING PROTOCOLS FOR WIRELESS SENSOR NETWORKS: A SURVEY
DOTNET 2013 IEEE MOBILECOMPUTING PROJECT Optimal multicast capacity and delay...
A QUALITY OF SERVICE ARCHITECTURE FOR RESOURCE PROVISIONING AND RATE CONTROL ...
FINAL PROJECT REPORT
S09 S02 P04
Dynamic Routing for Data Integrity and Delay Differentiated Services in Wirel...
A Novel Message Driven Local Repair Algorithm for MANET
Ad

Similar to 2011 ieee projects (20)

PDF
A Fault-tolerant Switch for Next Generation Computer Networks
PDF
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
PDF
Project titles abstract_2012
PDF
Jz2417141717
PDF
AN EFFICIENT BANDWIDTH OPTIMIZATION AND MINIMIZING ENERGY CONSUMPTION UTILIZI...
PDF
Gk2411581160
PDF
A Survey On Secure Cooperative Bait Detection Approach For...
PDF
Fw3111471149
PDF
PDF
PDF
63151777 core-design
PDF
Brief vss
DOCX
Ns2 2015 2016 ieee project list-(v)_with abstract(S3 Infotech:9884848198)
PDF
.Net project titles 2011, Real time projects in .net, Java Final year project...
PDF
IEEE final year projects in chennai,MATLAB projects in chennai,Engineering P...
PDF
Mobile computing projects in chennai,Ns2 projects in chennai,.Net projects in...
PDF
Java J2EE project titles, .NET project titles with abstract,Java J2EE projec...
PDF
IEEE Network security projects in chennai,IEEE 2011 titles abstract,NS2 proje...
PDF
Secure computing projects,Mobile computing projects in chennai,IEEE Network s...
A Fault-tolerant Switch for Next Generation Computer Networks
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
Project titles abstract_2012
Jz2417141717
AN EFFICIENT BANDWIDTH OPTIMIZATION AND MINIMIZING ENERGY CONSUMPTION UTILIZI...
Gk2411581160
A Survey On Secure Cooperative Bait Detection Approach For...
Fw3111471149
63151777 core-design
Brief vss
Ns2 2015 2016 ieee project list-(v)_with abstract(S3 Infotech:9884848198)
.Net project titles 2011, Real time projects in .net, Java Final year project...
IEEE final year projects in chennai,MATLAB projects in chennai,Engineering P...
Mobile computing projects in chennai,Ns2 projects in chennai,.Net projects in...
Java J2EE project titles, .NET project titles with abstract,Java J2EE projec...
IEEE Network security projects in chennai,IEEE 2011 titles abstract,NS2 proje...
Secure computing projects,Mobile computing projects in chennai,IEEE Network s...
Ad

Recently uploaded (20)

PPTX
Cloud computing and distributed systems.
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Network Security Unit 5.pdf for BCA BBA.
PDF
Encapsulation theory and applications.pdf
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PDF
Electronic commerce courselecture one. Pdf
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
Machine learning based COVID-19 study performance prediction
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PPTX
Big Data Technologies - Introduction.pptx
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
Cloud computing and distributed systems.
20250228 LYD VKU AI Blended-Learning.pptx
Understanding_Digital_Forensics_Presentation.pptx
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
The AUB Centre for AI in Media Proposal.docx
Network Security Unit 5.pdf for BCA BBA.
Encapsulation theory and applications.pdf
Chapter 3 Spatial Domain Image Processing.pdf
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
Electronic commerce courselecture one. Pdf
Spectral efficient network and resource selection model in 5G networks
Review of recent advances in non-invasive hemoglobin estimation
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Programs and apps: productivity, graphics, security and other tools
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Machine learning based COVID-19 study performance prediction
Unlocking AI with Model Context Protocol (MCP)
Reach Out and Touch Someone: Haptics and Empathic Computing
Big Data Technologies - Introduction.pptx
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx

2011 ieee projects

  • 1. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 S.NO TITLE -2010 ABSTRACT DOMAIN PLATFORM 1. A Machine Learning TCP throughput prediction is an important capability for Networking .net Approach to TCP networks where multiple paths exist between data Throughput senders and receivers. In this paper, we describe a new Prediction lightweight method for TCP throughput prediction. Our predictor uses Support Vector Regression (SVR); prediction is based on both prior file transfer history and measurements of simple path properties. We evaluate our predictor in a laboratory setting where ground truth can be measured with perfect accuracy. We report the performance of our predictor for oracular and practical measurements of path properties over a wide range of traffic conditions and transfer sizes. For bulk transfers in heavy traffic using oracular measurements, TCP throughput is predicted within 10% of the actual value 87% of the time, representing nearly a threefold improvement in accuracy over prior history-based methods. For practical measurements of path properties, predictions can be made within 10% of the actual value nearly 50% of the time, approximately a 60% improvement over history-based methods, and with much lower measurement traffic overhead. We implement our predictor in a tool called PathPerf, test it in the wide area, and show that PathPerf predicts TCP throughput accurately over diverse wide area paths. 2. Feedback-Based A framework for designing feedback-based scheduling .net Scheduling for Load- algorithms is proposed for elegantly solving the Balanced Two-Stage notorious packet missequencing problem of a load- Switches balanced switch. Unlike existing approaches, we show that the efforts made in load balancing and keeping packets in order can complement each other. Specifically, at each middle-stage port between the two switch fabrics of a load-balanced switch, only a single-packet buffer for each virtual output queueing (VOQ) is required. Although packets belonging to the same flow pass through different middle-stage VOQs, the delays they experience at different middle-stage ports will be identical. This is made possible by properly selecting and coordinating the two sequences of switch configurations to form a joint sequence with both staggered symmetry property and in-order packet delivery property. Based on the staggered symmetry property, an efficient feedback mechanism is designed to allow the right middle-stage port occupancy vector to be delivered to the right input port at the right time. As a result, the performance of load balancing as well as the switch throughput is significantly improved. We further extend this feedback mechanism to support the multicabinet implementation of a load-balanced switch, where the propagation delay between switch linecards and switch fabrics is nonnegligible. As compared to the existing load-balanced switch architectures and scheduling algorithms, our solutions impose a modest requirement on switch hardware, but consistently yield better delay-throughput performance. Last but not least, some extensions and refinements are made to address the scalability, implementation, and fairness issues of our solutions.
  • 2. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 3. Trust management in In this paper, we propose a human-based model which .net mobile ad hoc networks builds a trust relationship between nodes in an ad hoc using a scalable maturity network. The trust is based on previous individual based model experiences and on the recommendations of others. We present the Recommendation Exchange Protocol (REP) which allows nodes to exchange recommendations about their neighbors. Our proposal does not require disseminating the trust information over the entire network. Instead, nodes only need to keep and exchange trust information about nodes within the radio range. Without the need for a global trust knowledge, our proposal scales well for large networks while still reducing the number of exchanged messages and therefore the energy consumption. In addition, we mitigate the effect of colluding attacks composed of liars in the network. A key concept we introduce is the relationship maturity, which allows nodes to improve the efficiency of the proposed model for mobile scenarios. We show the correctness of our model in a single-hop network through simulations. We also extend the analysis to mobile multihop networks, showing the benefits of the maturity relationship concept. We evaluate the impact of malicious nodes that send false recommendations to degrade the efficiency of the trust model. At last, we analyze the performance of the REP protocol and show its scalability. We show that our implementation of REP can significantly reduce the number messages. 4. Online social networks OSNs applications, it is a location-based social network Network .net services, security and privacy of OSNs, and human mobility models based on social network OSNs online service site focuses of social networks or social relations among people, e.g., who share interests and activities. A social network service essentially consists of a representation of each user (often a profile), his/her social links, and a variety of additional services. Most social network services are web based and provide means for users to interact over the internet, such as e-mail and instant messaging. Although online community services are sometimes considered as a social network online community services are group- centered. Social networking sites allow users to share ideas, activities, events, and interests within their individual networks. 5. SYNCHRONIZATION OF File synchronization in computing is the process of LOCAL DESKTOP TO making sure that files in two or more locations are INTERNET USING FILE updated through certain rules. In one-way file TRANSFER PROTOCOL synchronization, also called mirroring, updated files are copied from a 'source' location to one or more 'target' locations, but no files are copied back to the source location. In two-way file synchronization, updated files are copied in both directions, usually with the purpose of keeping the two locations identical to each other. In this article, the term synchronization refers exclusively to two-way file synchronization.
  • 3. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 6. Intrusion Detection for Providing security in a distributed system requires more Grid and Cloud than user authentication with passwords or digital Computing certificates and confidentiality in data transmission. The Grid and Cloud Computing Intrusion Detection System integrates knowledge and behavior analysis to detect intrusions. 7. Adaptive Physical Transmit power and carrier sense threshold are key Carrier Sense in MAC/PHY parameters in carrier sense multiple access Topology-Controlled (CSMA) wireless networks. Transmit power control has Wireless Networks been extensively studied in the context of topology control. However, the effect of carrier sense threshold on topology control has not been properly investigated in spite of its crucial role. Our key motivation is that the performance of a topology-controlled network may become worse than that of a network without any topology control unless carrier sense threshold is properly chosen. In order to remedy this deficiency of conventional topology control, we present a framework on how to incorporate physical carrier sense into topology control. We identify that joint control of transmit power and carrier sense threshold can be efficiently divided into topology control and carrier sense adaptation. We devise a distributed carrier sense update algorithm (DCUA), by which each node drives its carrier sense threshold toward a desirable operating point in a fully distributed manner. We derive a sufficient condition for the convergence of DCUA. To demonstrate the utility of integrating physical carrier sense into topology control, we equip a localized topology control algorithm, LMST, with the capability of DCUA. Simulation studies show that LMST-DCUA significantly outperforms LMST and the standard 8. On the Quality of Service of We model the probabilistic behavior of a system Dependable .net Crash-Recovery Failure comprising a failure detector and a monitored crash- and Security Detectors recovery target. We extend failure detectors to take account of failure recovery in the target system. This involves extending QoS measures to include the recovery detection speed and proportion of failures detected. We also extend estimating the parameters of the failure detector to achieve a required QoS to configuring the crash-recovery failure detector. We investigate the impact of the dependability of the monitored process on the QoS of our failure detector. Our analysis indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of our failure detector. Our analysis is supported by simulations that validate our theoretical results.
  • 4. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 9. Layered Approach using Intrusion detection faces challenges an intrusion conditional random field detection system must constantly detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic. These two issues of Accuracy and Efficiency using Conditional Random Fields and Layered Approach. We show that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD ’99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as the decision trees and the naive Bayes. The improvement in attack detection accuracy is very high, particularly, for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical Tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle noisy data without compromising performance. 10. Privacy-Preserving Sharing Privacy-preserving sharing of sensitive information Security and .net of Sensitive Information (PPSSI) is motivated by the increasing need for entities privacy (organizations or individuals) that don't fully trust each other to share sensitive information. Many types of entities need to collect, analyze, and disseminate data rapidly and accurately, without exposing sensitive information to unauthorized or untrusted parties. Although statistical methods have been used to protect data for decades, they aren't foolproof and generally involve a trusted third party. Recently, the security research community has studied—and, in a few cases, deployed—techniques using secure, multiparty function evaluation, encrypted keywords, and private information retrieval. However, few practical tools and technologies provide data privacy, especially when entities have certain common goals and require (or are mandated) some sharing of sensitive information. To this end, PPSSI technology aims to enable sharing information, without exposing more than the minimum necessary to complete a common task. 11. PEACE Security and privacy issues are of most concern in pushing the success of WMNs(Wireless Mesh Networks) for their wide deployment and for supporting service-oriented applications. Despite the necessity, limited security research has been conducted toward privacy preservation in WMNs. This motivates us to develop PEACE, a novel Privacy- Enhanced yet Accountable security framework, tailored for WMNs
  • 5. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 12. The Phish-Market Protocol: One way banks mitigate phishing's effects is to remove .net Secure Sharing Between fraudulent websites or suspend abusive domain Competitors names. The removal process, called a "take-down," is often subcontracted to specialist firms, who refuse to share feeds of phishing website URLs with each other. Consequently, many phishing websites aren't removed. The take-down companies are reticent to exchange feeds, fearing that competitors with less comprehensive lists might free-ride off their efforts. Here, the authors propose the Phish-Market protocol, which enables companies to be compensated for information they provide to their competitors, encouraging them to share. The protocol is designed so that the contributing firm is compensated only for those websites affecting its competitor's clients and only those previously unknown to the receiving firm. The receiving firm, on the other hand, is guaranteed privacy for its client list. The protocol solves a more general problem of sharing between competitors; applications to data brokers in marketing, finance, energy exploration, and beyond could also benefit. 13. Internet Filtering Issues Various governments have been considering .net and Challenges mechanisms to filter out illegal or offensive Internet material. The accompanying debate raises a number of questions from a technical perspective. This article explores some of these questions, such as, what filtering techniques exist,are they effective in filtering out the specific content, how easy is circumventing them ,where should they be placed in the Internet architecture. 14. Can Public-Cloud Security Because cloud-computing environments' security .net Meet Its Unique vulnerabilities differ from those of traditional data Challenges? centers, perimeter-security approaches will no longer work. Security must move from the perimeter to the virtual machines. 15. Encrypting Keys Securely Encryption keys are sometimes encrypted themselves; .net doing that properly requires special care. Although it might look like an oversight at first, the broadly accepted formal security definitions for cryptosystems don't allow encryption of key-dependent messages. Furthermore, key-management systems frequently use key encryption or wrapping, which might create dependencies among keys that lead to problems with simple access-control checks. Security professionals should be aware of this risk and take appropriate measures. Novel cryptosystems offer protection for key-dependent messages and should be considered for practical use. Through enhanced access control in key- management systems, you can prevent security- interface attacks. 16. Auto-Context and Its The notion of using context information for solving Pattern .net Application to High-Level high-level vision and medical image segmentation Analysis and Vision Tasks and 3D Brain problems has been increasingly realized in the field. Machine Image Segmentation However, how to learn an effective and efficient Intelligence context model, together with an image appearance
  • 6. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 model, remains mostly unknown. The current literature using Markov Random Fields (MRFs) and Conditional Random Fields (CRFs) often involves specific algorithm design in which the modeling and computing stages are studied in isolation. In this paper, we propose a learning algorithm, auto-context. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps created by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates until convergence. Auto-context integrates low-level and context information by fusing a large number of low- level appearance features with context and implicit shape information. The resulting discriminative algorithm is general and easy to implement. Under nearly the same parameter settings in training, we apply the algorithm to three challenging vision applications: foreground/background segregation, human body configuration estimation, and scene region labeling. Moreover, context also plays a very important role in medical/brain images where the anatomical structures are mostly constrained to relatively fixed positions. With only some slight changes resulting from using 3D instead of 2D features, the auto-context algorithm applied to brain MRI image segmentation is shown to outperform state-of-the-art algorithms specifically designed for this domain. Furthermore, the scope of the proposed algorithm goes beyond image analysis and it has the potential to be used for a wide variety of problems for structured prediction problems. 17. CSMA protocol Mitigating This system is developed to show the descriptive java Performance Degradation management of dreadful conditions in Congested in Congested Sensor Sensor Networks. The dreadful conditions in sensor Networks networks or any other wired networks will happen when bandwidth differs from receiving and sending points. The channel capacity of the network may not be sufficient enough to handle the speed of packets sent. In this system, we are presenting a view, how the data can be sent through the congested channel and also the safe delivery of the packets to the destination. This System is developed using java swing technology with jdk1.6. All the nodes are developed as swing API‘s.Multiple API‘s form a sink to the destination. The packets will be sent from Source to destination, via sink. In the sink, a node will be made congested and using channel capacity, the path of data will be calculated. Based on the result of the calculation, the congestion in the sink will be dissolved and data is set free to the destination.This system is an application to maintain the free flow of data in congested sensor networks using Differentiated Routing Protocol and Priority Queues, which maintain priority in data-types.
  • 7. #241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House) Chennai 600024 http://www.ingenioustech.in/ , enquiry@ingenioustech.in, 08428302179 / 044-42046028 18. Feature Analysis and The definition of parameters is a crucial step in the Multimedia .net Evaluation for Automatic development of a system for identifying emotions in Emotion Identification in speech. Although there is no agreement on which are Speech the best features for this task, it is generally accepted that prosody carries most of the emotional information. Most works in the field use some kind of prosodic features, often in combination with spectral and voice quality parametrizations. Nevertheless, no systematic study has been done comparing these features. This paper presents the analysis of the characteristics of features derived from prosody, spectral envelope, and voice quality as well as their capability to discriminate emotions. In addition, early fusion and late fusion techniques for combining different information sources are evaluated. The results of this analysis are validated with experimental automatic emotion identification tests. Results suggest that spectral envelope features outperform the prosodic ones. Even when different parametrizations are combined, the late fusion of long-term spectral statistics with short-term spectral envelope parameters provides an accuracy comparable to that obtained when all parametrizations are combined. 19. Automatic Detection of Off- Identifying off-task behaviors in intelligent tutoring Learning .net Task Behaviors in systems is a practical and challenging research topic. Technologie Intelligent Tutoring This paper proposes a machine learning model that s Systems with Machine can automatically detect students' off-task behaviors. Learning Techniques The proposed model only utilizes the data available from the log files that record students' actions within the system. The model utilizes a set of time features, performance features, and mouse movement features, and is compared to 1) a model that only utilizes time features and 2) a model that uses time and performance features. Different students have different types of behaviors; therefore, personalized version of the proposed model is constructed and compared to the corresponding nonpersonalized version. In order to address data sparseness problem, a robust Ridge Regression algorithm is utilized to estimate model parameters. An extensive set of experiment results demonstrates the power of using multiple types of evidence, the personalized model, and the robust Ridge Regression algorithm. 20. Web-Application Security: Here's a sobering thought for all managers responsible IT .net From Reactive to Proactive for Web applications: Without proactive consideration for an application's security, attackers can bypass nearly all lower-layer security controls simply by using the application in a way its developers didn't envision. Learn how to address vulnerabilities proactively and early on to avoid the devastating consequences of a successful attack. 21. Trust and Reputation Trust and reputation management research is highly INTERNET .net Management interdisciplinary, involving researchers from COMPUTING networking and communication, data management and information systems, e-commerce and service computing, artificial intelligence, and game theory, as
21. Trust and Reputation Management (Domain: Internet Computing; Platform: .NET)
Trust and reputation management research is highly interdisciplinary, involving researchers from networking and communication, data management and information systems, e-commerce and service computing, artificial intelligence, and game theory, as well as the social sciences and evolutionary biology. Trust and reputation management has played, and will continue to play, an important role in Internet and social computing systems and applications. This special issue addresses key issues in the field, such as representation, recommendation aggregation, and attack-resilient reputation systems.

22. Multi-Body Structure-and-Motion Segmentation by Branch-and-Bound Model Selection (Domain: Image Processing; Platform: .NET)
An efficient and robust framework is proposed for two-view multiple structure-and-motion segmentation of an unknown number of rigid objects. The segmentation problem has three unknowns: the object memberships, the corresponding fundamental matrices, and the number of objects. To handle this otherwise recursive problem, hypotheses for fundamental matrices are generated through local sampling (the underlying two-view constraint is recalled after entry 23). Once the hypotheses are available, a combinatorial selection problem is formulated to optimize a model selection cost that takes into account the hypothesis likelihoods and the model complexity. An explicit model for outliers is also added for robust segmentation. The model selection cost is minimized through the branch-and-bound technique of combinatorial optimization. The proposed branch-and-bound approach efficiently searches the solution space and guarantees optimality over the current set of hypotheses. The efficiency and the guarantee of optimality of the method are due to its ability to reject solutions without explicitly evaluating them. The proposed approach was validated with synthetic data, and segmentation results are presented for real images.

23. Active Image Reranking (Platform: .NET)
Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly desirable for effectively improving search performance. The essential problem in active reranking is how to target the user's intention. To achieve this goal, this paper presents a structural-information-based sample selection strategy to reduce the user's labeling effort. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labeled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural-information-based active sample selection strategy and the local-global discriminative dimension reduction algorithm.
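For reference alongside entry 22: each rigid-motion hypothesis there is a fundamental matrix, and a correspondence x and x' (homogeneous image coordinates) is consistent with a hypothesis when it satisfies the standard two-view epipolar constraint (a textbook relation, recalled here rather than taken from the paper):

```latex
x'^{\top} F x = 0, \qquad \operatorname{rank}(F) = 2.
```

Each object gets its own F, which is why the segmentation can be cast as assigning correspondences to the hypothesis whose constraint they violate least.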
24. Content-Based Image Retrieval Using PSO (Platform: .NET)
An innovative approach based on an evolutionary stochastic algorithm, namely the Particle Swarm Optimizer (PSO), is proposed in this paper as a solution to the problem of intelligent retrieval of images from large databases. The problem is recast as an optimization problem in which a suitable cost function is minimized through a customized PSO. Accordingly, relevance feedback is used to exploit information from the user, with the aim of both guiding the particles inside the search space and dynamically assigning different weights to the features (a minimal sketch of the standard PSO update appears after entry 27).

25. Automatic Composition of Semantic Web Services: An Enhanced State Space Search Approach (Platform: .NET)
This paper presents a novel approach to semantic web service composition based on the traditional state space search approach. We regard automatic web service composition as an AI problem-solving task and propose an enhanced state space search approach for the web service composition domain. This approach can be used not only for automatic service composition but also for the general problem-solving domain. In addition, a prototype system is implemented to validate the feasibility of our approach.

26. Knowledge-First Web Services: An E-Government Example (Platform: .NET)
Although semantic technologies are not yet used in current software systems on a large scale, they offer high potential to significantly improve the quality of electronic services, especially in the E-Government domain. This paper therefore presents an approach that not only incorporates semantic technologies but makes it possible to create E-Government services solely based on semantic models. This multiplies the benefits of the ontology modeling effort, minimizes development and maintenance time and cost, improves user experience, and enforces transparency.

27. The Applied Research of Cloud Computing Platform Architecture in the E-Learning Area (Domain: Cloud Computing; Platform: .NET)
This paper first introduces the characteristics of current E-Learning, then analyzes the concept and characteristics of cloud computing and describes the architecture of a cloud computing platform. By combining the characteristics of E-Learning with the current mainstream approaches to cloud platform infrastructure, the paper structures a relatively complete, integrated E-Learning platform and applies the cloud computing platform to the study of E-Learning, focusing on improving the stability, balance, and utilization of resources. Under these conditions, the platform can meet the demands of current teaching and research activities and maximize the value of E-Learning.
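To make the PSO machinery of entry 24 concrete, here is a minimal generic PSO loop with the textbook velocity and position update. The cost function (a simple sphere function), swarm size, and coefficients are illustrative assumptions; the paper's actual cost is built from relevance feedback.

```java
import java.util.Random;

public class PsoSketch {
    // Toy cost to minimize; the CBIR paper would use a relevance-feedback cost instead.
    static double cost(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;  // sphere function
        return s;
    }

    public static void main(String[] args) {
        final int dim = 4, particles = 20, iters = 100;
        final double w = 0.7, c1 = 1.5, c2 = 1.5;  // inertia and attraction weights
        Random rnd = new Random(42);

        double[][] x = new double[particles][dim];  // positions
        double[][] v = new double[particles][dim];  // velocities
        double[][] pBest = new double[particles][dim];
        double[] pBestCost = new double[particles];
        double[] gBest = new double[dim];
        double gBestCost = Double.MAX_VALUE;

        for (int i = 0; i < particles; i++) {
            for (int d = 0; d < dim; d++) x[i][d] = rnd.nextDouble() * 10 - 5;
            pBest[i] = x[i].clone();
            pBestCost[i] = cost(x[i]);
            if (pBestCost[i] < gBestCost) { gBestCost = pBestCost[i]; gBest = x[i].clone(); }
        }

        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < particles; i++) {
                for (int d = 0; d < dim; d++) {
                    // Classic PSO update: inertia plus pulls toward personal and global bests.
                    v[i][d] = w * v[i][d]
                            + c1 * rnd.nextDouble() * (pBest[i][d] - x[i][d])
                            + c2 * rnd.nextDouble() * (gBest[d] - x[i][d]);
                    x[i][d] += v[i][d];
                }
                double c = cost(x[i]);
                if (c < pBestCost[i]) { pBestCost[i] = c; pBest[i] = x[i].clone(); }
                if (c < gBestCost)    { gBestCost = c;    gBest = x[i].clone(); }
            }
        }
        System.out.println("best cost found: " + gBestCost);
    }
}
```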
28. Cloud Computing System Based on Trusted Computing Platform (Platform: .NET)
Cloud computing provides a way for people to share large amounts of distributed resources belonging to different organizations. While this is a good way to share many kinds of distributed resources, it also makes security problems more complicated and more important for users than before. In this paper, we analyze some security requirements of the cloud computing environment. Since the security problems lie in both software and hardware, we provide a method to build a trusted computing environment for cloud computing by integrating the trusted computing platform (TCP) into the cloud computing system. We propose a new prototype system in which the cloud computing system is combined with the Trusted Platform Support Service (TSS), which is in turn based on the Trusted Platform Module (TPM). With this design, better results can be obtained for authentication, role-based access, and data protection in the cloud computing environment.

29. IT Auditing to Assure a Secure Cloud Computing (Platform: .NET)
In this paper we discuss the evolution of the cloud computing paradigm and present a framework for secure cloud computing through IT auditing. Our approach is to establish a general framework using checklists that follow the data flow and its lifecycle. The checklists are based on the cloud deployment models and cloud service models. The contribution of the paper is to understand the implications of cloud computing and what secure cloud computing means when approached via IT auditing, rather than to propose a new methodology or new technology for securing cloud computing. Our holistic approach has strategic value for those who are using or considering cloud computing, because it addresses concerns such as security, privacy, regulations, and compliance.

30. Performance Evaluation of Cloud Computing Offerings (Platform: .NET)
Advanced computing on cloud computing infrastructures can become a viable alternative for the enterprise only if these infrastructures can provide proper levels of nonfunctional properties (NFPs). A company that focuses on service-oriented architectures (SOA) needs to know what configuration would provide the proper levels for individual services if they are deployed in the cloud. In this paper we present an approach for the performance evaluation of cloud computing configurations. While cloud computing providers assure certain service levels, this is typically done for the platform and not for a particular service instance. Our approach focuses on the NFPs of individual services and thereby provides more relevant and granular information. An experimental evaluation in Amazon Elastic Compute Cloud (EC2) verified our approach.
31. Providing Privacy Preserving in Cloud Computing (Platform: .NET)
People can only enjoy the full benefits of cloud computing if we can address the very real privacy and security concerns that come with storing sensitive personal information in databases and software scattered around the Internet. There are many service providers on the Internet; each service can be regarded as a cloud, and each cloud service exchanges data with other clouds. When data is exchanged between clouds, the problem of privacy disclosure arises, so the privacy of an individual or a company is inevitably at risk when releasing or sharing data through a cloud service. Privacy is an important issue for cloud computing, both in terms of legal compliance and user trust, and needs to be considered at every phase of design. This paper surveys some privacy-preserving technologies used in cloud computing services.

32. VEBEK: Virtual Energy-Based Encryption and Keying for Wireless Sensor Networks (Domain: Wireless Computing; Platform: .NET)
Designing cost-efficient, secure network protocols for wireless sensor networks (WSNs) is a challenging problem because sensors are resource-limited wireless devices. Since communication is the most dominant factor in a sensor's energy consumption, we introduce an energy-efficient Virtual Energy-Based Encryption and Keying (VEBEK) scheme for WSNs that significantly reduces the number of transmissions needed for rekeying to avoid stale keys. In addition to the goal of saving energy, minimal transmission is imperative for some military applications of WSNs where an adversary could be monitoring the wireless spectrum. VEBEK is a secure communication framework in which sensed data is encoded using a scheme based on a permutation code generated via the RC4 encryption mechanism. The key to the RC4 encryption mechanism dynamically changes as a function of the residual virtual energy of the sensor; thus, a one-time dynamic key is employed for one packet only, and different keys are used for the successive packets of the stream (a toy illustration of this keying idea follows this entry). The intermediate nodes along the path to the sink are able to verify the authenticity and integrity of incoming packets using a predicted value of the key generated by the sender's virtual energy, thus requiring no specific rekeying messages. VEBEK is able to efficiently detect and filter false data injected into the network by malicious outsiders. The VEBEK framework consists of two operational modes (VEBEK-I and VEBEK-II), each of which is optimal for different scenarios. In VEBEK-I, each node monitors its one-hop neighbors, whereas VEBEK-II statistically monitors downstream nodes. We have evaluated VEBEK's feasibility and performance analytically and through simulations. Our results show that VEBEK, without incurring transmission overhead (increasing packet size or sending control messages for rekeying), is able to eliminate malicious data from the network in an energy-efficient manner. We also show that our framework performs better than other comparable schemes in the literature, with an overall 60-100 percent improvement in energy savings, without assuming a reliable medium access control layer.
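A toy illustration of the keying idea in entry 32: the RC4 key is derived from the sender's residual "virtual energy", so the keystream changes from packet to packet as energy is consumed. The energy-to-key mapping and all constants here are invented for illustration, and the real VEBEK encodes data with a permutation code rather than plain RC4 encryption.

```java
public class VirtualEnergyRc4 {

    // Standard RC4 keystream generator (KSA + PRGA).
    static byte[] rc4(byte[] key, int n) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;
        for (int i = 0, j = 0; i < 256; i++) {       // key scheduling
            j = (j + s[i] + (key[i % key.length] & 0xff)) & 0xff;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        byte[] out = new byte[n];
        for (int k = 0, i = 0, j = 0; k < n; k++) {  // keystream generation
            i = (i + 1) & 0xff;
            j = (j + s[i]) & 0xff;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[k] = (byte) s[(s[i] + s[j]) & 0xff];
        }
        return out;
    }

    // Hypothetical mapping from residual virtual energy to an RC4 key.
    static byte[] keyFromEnergy(long virtualEnergy) {
        byte[] key = new byte[8];
        for (int i = 0; i < 8; i++) key[i] = (byte) (virtualEnergy >>> (8 * i));
        return key;
    }

    public static void main(String[] args) {
        long energy = 1000000;            // initial virtual energy (arbitrary units)
        final long costPerPacket = 1234;  // energy drained by one transmission

        for (int pkt = 0; pkt < 3; pkt++) {
            byte[] ks = rc4(keyFromEnergy(energy), 4);
            System.out.printf("packet %d keystream: %02x%02x%02x%02x%n",
                    pkt, ks[0] & 0xff, ks[1] & 0xff, ks[2] & 0xff, ks[3] & 0xff);
            energy -= costPerPacket;      // so the key changes for the next packet
        }
    }
}
```

Because the receiver tracks the sender's energy expenditure, it can predict the same key per packet, which is what removes the need for explicit rekeying messages.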
33. Secure Data Collection in Wireless Sensor Networks Using Randomized Dispersive Routes (Platform: .NET)
Compromised nodes and denial of service are two key attacks in wireless sensor networks (WSNs). In this paper, we study data delivery mechanisms that can, with high probability, circumvent black holes formed by these attacks. We argue that classic multipath routing approaches are vulnerable to such attacks, mainly due to their deterministic nature: once the adversary acquires the routing algorithm, it can compute the same routes known to the source, making all information sent over these routes vulnerable to its attacks. In this paper, we develop mechanisms that generate randomized multipath routes. Under our designs, the routes taken by the "shares" of different packets change over time, so even if the routing algorithm becomes known to the adversary, the adversary still cannot pinpoint the routes traversed by each packet. Besides randomness, the generated routes are also highly dispersive and energy efficient, making them quite capable of circumventing black holes. We analytically investigate the security and energy performance of the proposed schemes. We also formulate an optimization problem to minimize the end-to-end energy consumption under given security constraints. Extensive simulations are conducted to verify the validity of our mechanisms.

34. Aging Bloom Filter with Two Active Buffers for Dynamic Sets (Domain: Data Mining; Platform: .NET)
A Bloom filter is a simple but powerful data structure that can check membership in a static set. As Bloom filters become more popular for network applications, membership queries over dynamic sets are also required. Some network applications require high-speed processing of packets; for this purpose, Bloom filters should reside in a fast and small memory, SRAM. In this case, due to the limited memory size, stale data in the Bloom filter should be deleted to make space for new data: the Bloom filter needs aging, much like an LRU cache. In this paper, we propose a new aging scheme for Bloom filters. The proposed scheme utilizes the memory space more efficiently than double buffering, the current state of the art (a minimal double-buffering sketch follows entry 35). We prove theoretically that the proposed scheme outperforms double buffering, and we also perform experiments on real Internet traces to verify its effectiveness.

35. Bayesian Classifiers Programmed in SQL (Platform: .NET)
The Bayesian classifier is a fundamental classification technique. In this work, we focus on programming Bayesian classifiers in SQL. We introduce two classifiers: naive Bayes and a classifier based on class decomposition using K-means clustering. We consider two complementary tasks: model computation and scoring a data set. We study several table layouts and several indexing alternatives. We analyze how to transform equations into efficient SQL queries and introduce several query optimizations. We conduct experiments with real and synthetic data sets to evaluate classification accuracy, query optimizations, and scalability. Our Bayesian classifier is more accurate than naive Bayes and decision trees. Distance computation is significantly accelerated with a horizontal table layout, denormalization, and pivoting. We also compare naive Bayes implementations in SQL and C++: SQL is about four times slower. Our Bayesian classifier in SQL achieves high classification accuracy, can efficiently analyze large data sets, and has linear scalability.
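For entry 34, this is a minimal sketch of the double-buffering baseline that the paper improves upon: two Bloom filters, one active at a time; when the active filter has absorbed its quota of inserts, the other is cleared and takes over, so stale items eventually age out. Sizes, capacity, and the hash mixing are illustrative assumptions.

```java
import java.util.BitSet;

public class DoubleBufferedBloom {
    private final int bits = 1 << 16;      // size of each buffer (illustrative)
    private final int capacity = 8000;     // inserts allowed before switching
    private final BitSet[] buf = { new BitSet(1 << 16), new BitSet(1 << 16) };
    private int active = 0, inserted = 0;

    private int[] hashes(String key) {
        int h1 = key.hashCode();
        int h2 = h1 * 0x9e3779b1 + 1;      // second hash via integer mixing (illustrative)
        return new int[] {
            Math.floorMod(h1, bits),
            Math.floorMod(h1 + h2, bits),
            Math.floorMod(h1 + 2 * h2, bits)
        };
    }

    public void insert(String key) {
        if (inserted == capacity) {        // active buffer is "full":
            active = 1 - active;           // switch to the other buffer...
            buf[active].clear();           // ...after discarding its stale contents
            inserted = 0;
        }
        for (int i : hashes(key)) buf[active].set(i);
        inserted++;
    }

    public boolean mightContain(String key) {
        // Recent items live in the active buffer; older ones may survive in the other.
        for (int b = 0; b < 2; b++) {
            boolean all = true;
            for (int i : hashes(key)) if (!buf[b].get(i)) { all = false; break; }
            if (all) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        DoubleBufferedBloom f = new DoubleBufferedBloom();
        f.insert("10.0.0.1:80");
        System.out.println(f.mightContain("10.0.0.1:80")); // true
        System.out.println(f.mightContain("10.0.0.2:80")); // false (with high probability)
    }
}
```

The memory inefficiency the paper targets is visible here: at any moment one of the two buffers is either stale or half-empty, which is what the proposed two-active-buffer scheme avoids.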
36. Using a Web-Based Tool to Define and Implement Software Process Improvement Initiatives in a Small Industrial Setting (Platform: Java)
Top-down process improvement approaches provide a high-level model of what the process of a software development organisation should be. Such models are based on the consensus of a designated working group on how software should be developed or maintained. They are very useful in that they give people who do not know where to start general guidelines on where to begin improving, and in which order. However, the majority of these models have only worked in scenarios within large companies. The authors aim to help small software development organisations adopt an iterative approach by providing a web-based process improvement tool. This study presents research into a proposal which states that a small organisation may use this tool to assess and improve its software process, identifying and implementing a set of agile project management practices that can be strengthened using the CMMI-DEV 1.2 model as a reference.

37. An Online Monitoring Approach for Web Service Requirements (Platform: Java)
Web service technology aims to enable the interoperation of heterogeneous systems and the reuse of distributed functions on an unprecedented scale, and it has achieved significant success. There are still, however, challenges to realizing its full potential. One of these challenges is ensuring that the behaviour of Web services is consistent with their requirements. Monitoring events that are relevant to Web service requirements is therefore an important technique. This paper introduces an online monitoring approach for Web service requirements. It includes a pattern-based specification of service constraints that correspond to service requirements; a monitoring model that covers five kinds of system events relevant to client requests, service responses, applications, resources, and management; and a monitoring framework in which different probes and agents collect events and data that are sensitive to requirements. The framework analyzes the collected information against the prespecified constraints in order to evaluate the behaviour and use of Web services (a minimal constraint-check sketch follows this entry). The prototype implementation and experiments with a case study show that our approach is effective and flexible, and that the monitoring cost is affordable.
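A minimal sketch of the kind of pattern-based constraint checking entry 37 describes: collected request/response events are evaluated against a prespecified latency bound, and violations are reported. The event fields and the 2000 ms bound are invented for illustration; the paper's constraint language and event model are considerably richer.

```java
import java.util.ArrayList;
import java.util.List;

public class ResponseTimeMonitor {

    static class Event {
        final String operation;
        final long requestMs, responseMs;
        Event(String op, long req, long resp) {
            operation = op; requestMs = req; responseMs = resp;
        }
    }

    // Constraint: every response must arrive within maxLatencyMs of its request.
    static List<String> check(List<Event> events, long maxLatencyMs) {
        List<String> violations = new ArrayList<String>();
        for (Event e : events) {
            long latency = e.responseMs - e.requestMs;
            if (latency > maxLatencyMs) {
                violations.add(e.operation + " violated bound: " + latency + " ms");
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        List<Event> collected = new ArrayList<Event>();
        collected.add(new Event("getQuote", 0, 150));     // within bound
        collected.add(new Event("placeOrder", 0, 2500));  // too slow
        for (String v : check(collected, 2000)) System.out.println(v);
    }
}
```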
S.NO / TITLE - 2011 / ABSTRACT / DOMAIN / PLATFORM

1. Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud (Domain: Parallel Distribution & Cloud Computing)
In recent years, ad hoc parallel data processing has emerged as one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major cloud computing companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for large parts of the submitted job and may unnecessarily increase processing time and cost. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop.

2. Data Integrity Proofs in Cloud Storage (Domain: Communication Systems & Networks)
Cloud computing has been envisioned as the de facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices and the rapid rate at which data is being generated, it proves costly for enterprises and individual users to frequently update their hardware. Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance. Cloud storage moves the user's data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. One of the important concerns is assuring the customer of the integrity, i.e., the correctness, of his data in the cloud. As the data is not physically accessible to the user, the cloud should provide a way for the user to check whether the integrity of his data is maintained or has been compromised. In this paper we provide a scheme which gives a proof of data integrity in the cloud that the customer can employ to check the correctness of his data; this proof can be agreed upon by both the cloud and the customer and can be incorporated in the service level agreement (SLA). The scheme ensures that the storage overhead at the client side is minimal, which is beneficial for thin clients (a minimal challenge-response sketch in this spirit follows this entry).
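A minimal challenge-response sketch in the spirit of entry 2, using a keyed MAC: before uploading, the thin client keeps only a secret key and the MAC of a randomly chosen block; later it challenges the cloud for that block and verifies it. The block choice and key handling are illustrative assumptions, not the paper's actual scheme.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class IntegrityProofSketch {

    static byte[] hmac(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] clientKey = "thin-client-secret".getBytes(StandardCharsets.UTF_8);

        // Before outsourcing: client keeps only the MAC of a randomly chosen block.
        byte[] block7 = "contents of file block 7".getBytes(StandardCharsets.UTF_8);
        byte[] storedTag = hmac(clientKey, block7);

        // ... the file is uploaded; the client discards the data, keeping key + tag ...

        // Challenge: ask the cloud for block 7 and verify it against the stored tag.
        byte[] returnedByCloud = block7;  // an honest cloud returns the block intact
        boolean intact = Arrays.equals(storedTag, hmac(clientKey, returnedByCloud));
        System.out.println("block 7 integrity verified: " + intact);
    }
}
```

The client-side state is one key and one tag per challenged block, which matches the entry's point that storage overhead at the client stays minimal.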
3. Efficient Computing of Range Aggregates against Uncertain Location-Based Collections (Domain: Knowledge & Data Engineering)
In many applications, including location-based services, queries are not precise. In this paper, we study the problem of efficiently computing range aggregates in a multidimensional space when the query location is uncertain. That is, for a set of data points P and an uncertain location-based query Q whose location is described by a probabilistic density function, we want to calculate the aggregate information (e.g., count, average, and sum) of the data points within distance gamma of Q with probability at least theta. We propose novel, efficient techniques to solve the problem based on a filtering-and-verification framework. In particular, two novel filtering techniques are proposed to effectively and efficiently remove data points from verification. Finally, we show that our techniques can be immediately extended to solve the range query problem. Comprehensive experiments conducted on both real and synthetic data demonstrate the efficiency and scalability of our techniques.

4. Exploring Application-Level Semantics for Data Compression (Domain: Knowledge & Data Engineering)
Natural phenomena show that many creatures form large social groups and move in regular patterns. However, previous works focus on finding the movement patterns of each single object or of all objects. In this paper, we first propose an efficient distributed mining algorithm to jointly identify a group of moving objects and discover their movement patterns in wireless sensor networks. Afterward, we propose a compression algorithm, called 2P2D, which exploits the obtained group movement patterns to reduce the amount of delivered data. The compression algorithm includes a sequence merge phase and an entropy reduction phase. In the sequence merge phase, we propose a Merge algorithm to merge and compress the location data of a group of moving objects. In the entropy reduction phase, we formulate a Hit Item Replacement (HIR) problem and propose a Replace algorithm that obtains the optimal solution. Moreover, we devise three replacement rules and derive the maximum compression ratio. The experimental results show that the proposed compression algorithm leverages the group movement patterns to reduce the amount of delivered data effectively and efficiently.

5. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques (Domain: Knowledge & Data Engineering)
Recommender systems are becoming increasingly important to individual users and businesses for providing personalized recommendations. However, while the majority of algorithms proposed in the recommender systems literature have focused on improving recommendation accuracy (as exemplified by the recent Netflix Prize competition), other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations with substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy (a minimal re-ranking sketch follows this entry). Comprehensive empirical evaluation consistently shows the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms.
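One simple member of the family of ranking techniques entry 5 studies is sketched below; this particular popularity-based variant is an illustration under our own assumptions, not necessarily the paper's best performer. Among items predicted above a relevance threshold, the least popular are recommended first, trading a little accuracy for aggregate diversity.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class DiversityReranker {

    static class Item {
        final String id;
        final double predictedRating;
        final int popularity;  // how many users already rated this item
        Item(String id, double r, int pop) { this.id = id; predictedRating = r; popularity = pop; }
    }

    // Among items predicted above the threshold, rank by ascending popularity
    // so long-tail items surface; below-threshold items are never recommended.
    static List<Item> rerank(List<Item> candidates, double threshold, int topN) {
        List<Item> eligible = new ArrayList<Item>();
        for (Item it : candidates) if (it.predictedRating >= threshold) eligible.add(it);
        Collections.sort(eligible, new Comparator<Item>() {
            public int compare(Item a, Item b) { return a.popularity - b.popularity; }
        });
        return eligible.subList(0, Math.min(topN, eligible.size()));
    }

    public static void main(String[] args) {
        List<Item> c = new ArrayList<Item>();
        c.add(new Item("blockbuster", 4.9, 120000));
        c.add(new Item("indie-gem",   4.6,    300));
        c.add(new Item("weak-match",  3.1,     50));
        for (Item it : rerank(c, 4.5, 2)) System.out.println(it.id);
        // prints indie-gem before blockbuster; weak-match is filtered out
    }
}
```

The threshold is the accuracy/diversity dial: raising it recovers standard accuracy-oriented ranking, while lowering it admits more long-tail items.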
6. Monitoring Service Systems from a Language-Action Perspective (Domain: Service Computing)
Business processes are increasingly distributed and open, making them prone to failure. Monitoring is, therefore, an important concern not only for the processes themselves but also for the services that comprise these processes. We present a framework for multilevel monitoring of these service systems. It formalizes interaction protocols, policies, and commitments that account for standard and extended effects following the language-action perspective, and it allows the specification of goals and monitors at varied abstraction levels. We demonstrate how the framework can be implemented and evaluate it with multiple scenarios that include specifying and monitoring open-service policy commitments.

7. One Size Does Not Fit All: Towards User- and Query-Dependent Ranking for Web Databases (Domain: Knowledge & Data Engineering)
With the emergence of deep Web databases, searching in domains such as vehicles, real estate, etc. has become a routine task. One of the problems in this context is ranking the results of a user query. Earlier approaches for addressing this problem have used frequencies of database values, query logs, and user profiles. A common thread in most of these approaches is that ranking is done in a user- and/or query-independent manner. This paper proposes a novel query- and user-dependent approach for ranking the results of Web database queries. We present a ranking model, based on two complementary notions of user and query similarity, to derive a ranking function for a given user query. This function is acquired from a sparse workload comprising several such ranking functions derived for various user-query pairs. The proposed model is based on the intuition that similar users display comparable ranking preferences over the results of similar queries. We define these similarities formally in alternative ways and discuss their effectiveness both analytically and experimentally over two distinct Web databases.

8. Optimal Service Pricing for a Cloud Cache (Domain: Knowledge & Data Engineering)
Cloud applications that offer data management services are emerging. Such clouds support caching of data in order to provide quality query services. Users can query the cloud data, paying the price for the infrastructure they use. Cloud management necessitates an economy that manages the service of multiple users in an efficient but also resource-economic way that allows for cloud profit. Naturally, the maximization of cloud profit given some guarantees for user satisfaction presumes an appropriate price-demand model that enables optimal pricing of query services. The model should be plausible in that it reflects the correlation of cache structures involved in the queries. Optimal pricing is achieved based on a dynamic pricing scheme that adapts to time changes. This paper proposes a novel price-demand model designed for a cloud cache and a dynamic pricing scheme for queries executed in the cloud cache. The pricing solution employs a novel method that estimates the correlations of the cache services in a time-efficient manner. The experimental study shows the efficiency of the solution.
9. A Personalized Ontology Model for Web Information Gathering (Domain: Knowledge & Data Engineering)
As a model for knowledge description and formalization, ontologies are widely used to represent user profiles in personalized web information gathering. However, when representing user profiles, many models have utilized only knowledge from either a global knowledge base or user local information. In this paper, a personalized ontology model is proposed for knowledge representation and reasoning over user profiles. This model learns ontological user profiles from both a world knowledge base and user local instance repositories. The ontology model is evaluated by comparing it against benchmark models in web information gathering, and the results show that it is successful.

10. A Branch-and-Bound Algorithm for Solving the Multiprocessor Scheduling Problem with Improved Lower Bounding Techniques (Domain: Computers)
In branch-and-bound (B&B) schemes for solving a minimization problem, a better lower bound can prune many meaningless branches which do not lead to an optimum solution. In this paper, we propose several techniques to refine the lower bound on the makespan in the multiprocessor scheduling problem (MSP). The key idea of our proposed method is to combine an efficient quadratic-time algorithm for calculating the Fernández bound, known as the best lower bounding technique in the literature, with two improvements based on the notions of binary search and recursion (a generic B&B skeleton with a simple makespan bound follows entry 11). The proposed method was implemented as part of a B&B algorithm for solving MSP and was evaluated experimentally. The results indicate that the proposed method certainly improves the performance of the underlying B&B scheme. In particular, we found that it improves the solutions generated by conventional heuristic schemes for more than 20 percent of randomly generated instances, and for more than 80 percent of instances it provides a certificate of optimality of the resulting solutions, even when the execution time of the B&B scheme is limited to one minute.

11. Design and Evaluation of a Proxy Cache for Peer-to-Peer Traffic (Domain: Computers)
Peer-to-peer (P2P) systems generate a major fraction of current Internet traffic, and they significantly increase the load on ISP networks and the cost of running and connecting customer networks (e.g., universities and companies) to the Internet. To mitigate these negative impacts, many previous works in the literature have proposed caching of P2P traffic, but very few (if any) have considered designing a caching system to actually do it. This paper demonstrates that caching P2P traffic is more complex than caching other Internet traffic, and that it needs several new algorithms and storage systems. The paper then presents the design and evaluation of a complete, running proxy cache for P2P traffic, called pCache. pCache transparently intercepts and serves traffic from different P2P systems. A new storage system is proposed and implemented in pCache; it is optimized for storing P2P traffic and is shown to outperform other storage systems. In addition, a new algorithm is proposed to infer the information required by the cache to store and serve P2P traffic. Furthermore, extensive experiments evaluating all aspects of pCache using an actual implementation and real P2P traffic are presented.
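A generic branch-and-bound skeleton for the makespan problem of entry 10, assuming independent tasks for brevity. It uses only the simple lower bound max(current max load, perfectly balanced remaining work); the Fernández bound and the paper's binary-search and recursion refinements would replace that bound function with something much sharper.

```java
public class MakespanBnB {

    static int[] tasks = { 7, 5, 4, 3, 3, 2 };  // processing times (illustrative)
    static long[] load = new long[3];           // three identical processors
    static long best = Long.MAX_VALUE;
    static long[] remaining;                    // remaining[i] = sum of tasks[i..]

    static long max(long[] a) {
        long m = 0;
        for (long v : a) m = Math.max(m, v);
        return m;
    }

    // Lower bound on the makespan of any completion of this partial schedule:
    // no schedule can beat the current max load, nor perfect balancing of all work.
    static long lowerBound(int next) {
        long assigned = 0;
        for (long v : load) assigned += v;
        long balanced = (assigned + remaining[next] + load.length - 1) / load.length;
        return Math.max(max(load), balanced);
    }

    static void branch(int next) {
        if (next == tasks.length) { best = Math.min(best, max(load)); return; }
        if (lowerBound(next) >= best) return;    // prune: cannot improve the best
        for (int p = 0; p < load.length; p++) {  // branch: try each processor
            load[p] += tasks[next];
            branch(next + 1);
            load[p] -= tasks[next];
        }
    }

    public static void main(String[] args) {
        remaining = new long[tasks.length + 1];
        for (int i = tasks.length - 1; i >= 0; i--)
            remaining[i] = remaining[i + 1] + tasks[i];
        branch(0);
        System.out.println("optimal makespan: " + best);  // 9 for this instance
    }
}
```

The pruning test is where the paper's contribution plugs in: the tighter the bound returned by lowerBound, the more of the search tree is cut off, which is exactly why refining it pays.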
12. Robust Feature Selection for Microarray Data Based on Multicriterion Fusion (Domain: Computational Biology and Bioinformatics)
Feature selection often aims to select a compact feature subset to build a pattern classifier with reduced complexity, so as to achieve improved classification performance. From the perspective of pattern analysis, producing a stable or robust solution is also a desired property of a feature selection algorithm. However, the issue of robustness is often overlooked in feature selection. In this study, we analyze the robustness issue existing in feature selection for high-dimensional and small-sized gene-expression data, and we propose to improve the robustness of a feature selection algorithm by using multiple feature selection evaluation criteria. Based on this idea, a multicriterion fusion-based recursive feature elimination (MCF-RFE) algorithm is developed, with the goal of improving both the classification performance and the stability of feature selection results. Experimental studies on five gene-expression data sets show that the MCF-RFE algorithm outperforms the commonly used benchmark feature selection algorithm SVM-RFE.

13. Image-Based Surface Matching Algorithm Oriented to Structural Biology (Domain: Computational Biology and Bioinformatics)
Emerging technologies for structure matching based on surface descriptions have demonstrated their effectiveness in many research fields. In particular, they can be successfully applied to in silico studies of structural biology. Protein activities, in fact, are related to the external characteristics of these macromolecules, and the ability to match surfaces can be important for inferring information about their possible functions and interactions. In this work, we present a surface-matching algorithm, based on encoding the outer morphology of proteins in images of local description, which allows us to establish point-to-point correlations among macromolecular surfaces using image-processing functions. Setting aside methods that rely on biological analysis of atomic structures, as well as expensive computational approaches based on energetic studies, this algorithm can successfully be used for macromolecular recognition by employing local surface features. Results demonstrate that the proposed algorithm can be employed both to identify surface similarities in the context of macromolecular functional analysis and to screen possible protein interactions to predict pairing capability.

14. Iris Matching Using a Multi-Dimensional Artificial Neural Network (Domain: Computer Vision, IET)
Iris recognition is one of the most widely used biometric techniques for personal identification. In this work, identification builds on the fact that iris patterns are statistically unique and suitable for biometric measurements. A novel method for recognizing these iris patterns is considered, using a multidimensional artificial neural network. The proposed technique has the distinct advantage of using the entire resized iris as an input at once, and it is capable of excellent pattern recognition because the iris texture is unique for every person. The system is trained and tested using two publicly available databases (CASIA and UBIRIS). The proposed approach shows significant promise and potential for improvement, compared with other conventional matching techniques, with regard to time and efficiency of results.
15. Real-Time Tracking Using A* Heuristic Search and Template Updating (Domain: Computer Vision, IET)
Many vision problems require fast and accurate tracking of objects in dynamic scenes. In this study, we propose an A* search through the space of transformations for computing fast 2D target motion. Two features are combined in order to compute motion efficiently: (i) the Kullback-Leibler measure as a heuristic to guide the search process (recalled after entry 17), and (ii) the incorporation of target dynamics into the search process for computing the most promising search alternatives. The quality-of-match value computed by the A* search, together with the more common views of the target object, is used to verify template updates: a template is updated only when the target object has evolved to a transformed shape dissimilar to the actual shape. The study includes experimental evaluations with video streams demonstrating the effectiveness and efficiency of the approach for real-time vision-based tasks with rigid and deformable objects.

16. Integral Image Compression Based on Optical Characteristics (Domain: Computer Vision, IET)
The large amount of image data from a captured three-dimensional integral image needs to be presented at adequate resolution. It is therefore necessary to develop compression algorithms that take advantage of the characteristics of the recorded integral image. In this study, the authors propose a new compression method that is adapted to integral imaging. According to the optical characteristics of integral imaging, most of the information in each elemental image overlaps with that of its adjacent elemental images. The method therefore achieves compression by taking one sample from the elemental image sequence for every m elemental images. Experimental results presented to illustrate the proposed technique show that it can improve the compression ratio of integral imaging.

17. A Variational Model for Histogram Transfer of Color Images (Domain: Image Processing)
In this paper, we propose a variational formulation for histogram transfer between two or more color images. We study an energy functional composed of three terms: one tends to approach the cumulative histograms of the transformed images, while the other two tend to maintain the colors and geometry of the original images. By minimizing this energy, we obtain an algorithm that balances equalization and the conservation of features of the original images. As a result, the images evolve while approaching an intermediate histogram between them. This intermediate histogram does not need to be specified in advance; it is a natural result of the model. Finally, we provide experiments showing that the proposed method compares well with the state of the art.
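The heuristic in entry 15 scores candidate transformations with the Kullback-Leibler measure. For two discrete distributions P and Q (here, template versus candidate-region statistics), it is the standard divergence:

```latex
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{i} p_i \log \frac{p_i}{q_i} \;\ge\; 0,
```

which vanishes only when P = Q, so smaller values mark the more promising matches for A* to expand first.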
18. Nonlocal Mumford-Shah Regularizers for Color Image Restoration (Domain: Image Processing)
We propose a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined to work in a small local neighborhood, which is sufficient for denoising smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/nonlocal information for efficient image denoising and restoration. Inspired by recent works (the nonlocal means of Buades, Coll, and Morel, and the nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations of the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture (the classical MS functional is recalled after entry 20). We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual norm formulations.

19. A Majorize-Minimize Strategy for Subspace Optimization Applied to Image Restoration (Domain: Image Processing)
This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search; it is usually obtained by an inner iterative second-order method ruled by a stopping criterion that guarantees the convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize-minimize principle. It leads to a closed-form stepsize formula that ensures the convergence of the subspace algorithm whatever the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration.

20. A Variational Model for Segmentation of Overlapping Objects with Additive Intensity Value (Domain: Image Processing)
We propose a variant of the Mumford-Shah model for the segmentation of a pair of overlapping objects with additive intensity values. Unlike standard segmentation models, it not only determines distinct objects in the image but also recovers the possibly multiple memberships of the pixels. To accomplish this, some a priori knowledge about the smoothness of the object boundary is integrated into the model. Additivity is imposed through a soft constraint, which allows the user to control the degree of additivity and is more robust than a hard constraint. We also show analytically that the additivity parameter can be chosen to achieve certain stability conditions. To solve the optimization problem involving geometric quantities efficiently, we apply a multiphase level set method. Segmentation results on synthetic and real images validate the good performance of our model and demonstrate its applicability to images with multiple channels and multiple objects.
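For reference alongside entries 18-20, this is the classical Mumford-Shah energy for an observed image f on a domain Omega, with u the piecewise-smooth approximation and K the discontinuity set (the textbook functional, prior to the nonlocal extensions of entry 18):

```latex
E(u, K) \;=\; \int_{\Omega} (u - f)^2 \, dx
\;+\; \mu \int_{\Omega \setminus K} |\nabla u|^2 \, dx
\;+\; \nu \, \mathcal{H}^1(K),
```

where the last term measures the length of the edge set and mu, nu balance data fidelity, smoothness, and edge length.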
21. Image Segmentation Using Fuzzy Region Competition and Spatial/Frequency Information (Domain: Image Processing)
This paper presents a multiphase fuzzy region competition model that takes into account spatial and frequency information for image segmentation. In the proposed energy functional, each region is represented by a fuzzy membership function and a data fidelity term that measures the conformity of the spatial and frequency data within each region to (generalized) Gaussian densities whose parameters are determined jointly with the segmentation process. Compared with the classical region competition model, our approach gives soft segmentation results via the fuzzy membership functions; moreover, the use of frequency data provides additional region information that can improve the overall segmentation result. To efficiently solve the minimization of the energy functional, we adopt an alternate minimization procedure and make use of Chambolle's fast duality projection algorithm. We apply the proposed method to synthetic and natural textures as well as to real-world natural images. Experimental results show that our proposed method has very promising segmentation performance compared with the current state-of-the-art approaches.

22. H.264 Video Watermarking with Secret Image Sharing (Domain: Image Processing, IET)
The transfer of digital video streams over networks is receiving growing attention, and frequent Internet use increases the requirements for copyright protection and security. Consequently, to prevent video streams that belong to rightful owners from being intentionally or unknowingly used by others, information protection is indispensable. The authors propose a novel method for video watermarking that is specifically designed for H.264 video. Experimentally, a low-energy signal is relatively resistant to low-pass filter attacks, while a high-energy signal in the host is relatively resistant to high-frequency noise attacks. In view of these facts, the proposed embedding algorithm distinguishes high-energy and low-energy blocks: the blocks in the host image frame are divided into two groups by estimating the block energy. Existing singular value decomposition methods are employed to calculate the watermark information. To enhance security, the proposed system also employs torus automorphisms to encrypt the watermark. To achieve better robustness, the encrypted results are embedded into different I-frames in the video stream using secret image sharing technology.

23. Rotation, Scaling, and Translation Resilient Watermarking for Images (Domain: Image Processing)
Traditional watermarking schemes are sensitive to geometric distortions, in which synchronisation for recovering embedded information is a challenging task because of the disorder caused by rotation, scaling, or translation (RST). Existing RST-resistant watermarking methods still have limitations with respect to robustness, capacity, or fidelity. In this study, the authors address several major problems in RST-invariant watermarking. The first point is how to take advantage of the high RST resilience of scale-invariant feature transform (SIFT) features, which show good performance in RST-resistant pattern recognition. Since many keypoint-based watermarking methods do not discuss cropping attacks, the second issue discussed in this study is how to resist cropping using a human visual system (HVS) model, which also helps eliminate computational complexity. The third issue is the investigation of an HVS-based watermarking strategy that extracts feature points only in the human attentive area. Lastly, a variable-length watermark synchronisation algorithm using dynamic programming is proposed. Experimental results show that the proposed algorithms are practical and perform better than many existing works in terms of watermark capacity, watermark transparency, and resistance to RST attacks.
24. Improvements on Twin Support Vector Machines (Domain: Neural Networks)
For classification problems, the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM) are regarded as milestones in the development of powerful SVMs, as they use nonparallel hyperplane classifiers. In this brief, we propose an improved version, named twin bounded support vector machines (TBSVM), based on TWSVM. The significant advantage of our TBSVM over TWSVM is that the structural risk minimization principle is implemented by introducing a regularization term. This embodies the marrow of statistical learning theory, so the modification can improve classification performance. In addition, the successive overrelaxation technique is used to solve the optimization problems and thereby speed up the training procedure. Experimental results show the effectiveness of our method in both computation time and classification accuracy, further confirming the above conclusion.

25. Feature Selection Using Probabilistic Prediction of Support Vector Regression (Domain: Neural Networks)
This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison with several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

26. Energy-Efficient Protocol for Cooperative Networks (Domain: Networking)
In cooperative networks, transmitting and receiving nodes recruit neighboring nodes to assist in communication. We model a cooperative transmission link in wireless networks as a transmitter cluster and a receiver cluster. We then propose a cooperative communication protocol for the establishment of these clusters and for the cooperative transmission of data. We derive the upper bound of the capacity of the protocol, and we analyze the end-to-end robustness of the protocol to data-packet loss, along with the tradeoff between energy consumption and error rate. These analytical results are used to compare the energy savings and end-to-end robustness of our protocol with those of two non-cooperative schemes, as well as with another cooperative protocol published in the technical literature. The comparison shows that, when nodes are positioned on a grid, the probability of packet delivery failure is reduced by two orders of magnitude for the parameter values considered. Up to 80% energy savings can be achieved for a grid topology, while for random node placement our cooperative protocol can save up to 40% in energy consumption relative to the other protocols. The reduction in error rate and the energy savings translate into an increased lifetime of cooperative sensor networks.
27. Parametric Methods for Anomaly Detection in Aggregate Traffic (Domain: Networking)
This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works that require flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time domain, one can estimate the model parameters in real time, thus obviating the need for a long training phase or manual parameter tuning. The proposed bivariate parametric detection mechanism (bPDM) uses a sequential probability ratio test (recalled after entry 28), allowing control over the false positive rate while examining the tradeoff between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bit-rate signal-to-noise ratio (SNR) metric, which is shown to be an effective metric for anomaly detection. The performance of the bPDM is evaluated in three ways. First, synthetically generated traffic provides a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to be able to detect controlled artificial attacks over the University of Southern California (USC), Los Angeles, campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks, as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.

28. Peering Equilibrium Multipath Routing: A Game Theory Framework for Internet Peering Settlements (Domain: Networking)
It is generally admitted that interdomain peering links nowadays represent the main bottleneck of the Internet, particularly because of the lack of coordination between providers, which use independent and "selfish" routing policies. We are interested in identifying possible "light" coordination strategies that would allow carriers to better control their peering links while preserving their independence and respective interests. We propose a robust multipath routing coordination framework for peering carriers, which relies on the multiple-exit discriminator (MED) attribute of the Border Gateway Protocol (BGP) as the signaling medium. Our scheme relies on game-theoretic modeling, with a non-cooperative potential game considering both routing and congestion costs. Peering equilibrium multipath (PEMP) coordination policies can be implemented by selecting Pareto-superior Nash equilibria at each carrier. We compare different PEMP policies to BGP multipath schemes by emulating a realistic peering scenario. Our results show that the routing cost can be decreased by roughly 10% with PEMP. We also show that the stability of routes can be significantly improved and that congestion can be practically avoided on the peering links. Finally, we discuss practical implementation aspects and extend the model to multiple players, highlighting the possible incentives for the resulting extended peering framework.
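Entry 27's bPDM is built on a sequential probability ratio test. In Wald's standard form (recalled here as background; the paper's bivariate model plugs in its own densities), with f0 and f1 the background and anomaly models, the cumulative log-likelihood ratio is monitored against two thresholds:

```latex
\Lambda_n = \sum_{i=1}^{n} \log \frac{f_1(x_i)}{f_0(x_i)}, \qquad
\Lambda_n \ge \log A \;\Rightarrow\; \text{declare anomaly}, \qquad
\Lambda_n \le \log B \;\Rightarrow\; \text{declare normal},
```

with A approximately (1 - beta)/alpha and B approximately beta/(1 - alpha) for a target false-positive rate alpha and miss rate beta. Sampling continues while Lambda_n stays between the thresholds, which is exactly what trades detection time against anomaly strength.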
29. Impact of File Arrivals and Departures on Buffer Sizing in Core Routers (Domain: Networking)
Traditionally, it had been assumed that the efficiency requirements of TCP dictate that the buffer size at the router must be of the order of the bandwidth-delay product, C x RTT. Recently, this assumption was questioned in a number of papers, and the rule was shown to be conservative for certain traffic models. In particular, by appealing to statistical multiplexing, it was shown that on a router with N long-lived connections, buffers of size O((C x RTT)/sqrt(N)), or even O(1), are sufficient. In this paper, we reexamine the buffer-size requirements of core routers when flows arrive and depart. Our conclusion is as follows: if the core-to-access-speed ratio is large, then O(1) buffers are sufficient at the core routers; otherwise, larger buffer sizes do improve the flow-level performance of the users. From a modeling point of view, our analysis offers two new insights. First, it may not be appropriate to derive buffer-sizing rules by studying a network with a fixed number of users; in fact, depending on the core-to-access-speed ratio, the buffer size itself may affect the number of flows in the system, so these two parameters (buffer size and number of flows) should not be treated as independent quantities. Second, in the regime where the core-to-access-speed ratio is large, we note that O(1) buffer sizes are sufficient for good performance and that no loss of utilization results, as previously believed.

30. Dynamic Measurement-Aware Routing in Practice (Domain: Networking)
Traffic monitoring is a critical network operation for the purposes of traffic accounting, debugging or troubleshooting, forensics, and traffic engineering. Existing techniques for traffic monitoring, however, tend to be suboptimal due to a poor choice of monitor location or to constantly evolving monitoring objectives and traffic characteristics. One way to counteract these limitations is to use routing as a degree of freedom to enhance monitoring efficacy, which we refer to as measurement-aware routing: traffic sub-populations can be routed (or rerouted) on the fly to optimally leverage the existing monitoring infrastructure. Implementing dynamic measurement-aware routing (DMR) in practice is riddled with challenges. Three major ones are how to dynamically assess the importance of traffic flows; how to aggregate flows (and hence take a common action for them) in order to conserve routing table entries; and how to achieve traffic routing/rerouting in a manner that is least disruptive to normal network performance while maximizing the measurement utility. This article takes a closer look at these challenges and discusses how they manifest for different types of networks. Through an OpenFlow prototype, we show how DMR can be applied in enterprise networks. Using global iceberg detection and capture as a driving application, we demonstrate how our solutions successfully route suspected iceberg flows to a DPI box for further processing, while preserving a balanced load distribution in the overall network.
30. Dynamic measurement-aware routing in practice (Networking)
Traffic monitoring is a critical network operation for the purposes of traffic accounting, debugging or troubleshooting, forensics, and traffic engineering. Existing techniques for traffic monitoring, however, tend to be suboptimal due to poor choice of monitor location or constantly evolving monitoring objectives and traffic characteristics. One way to counteract these limitations is to use routing as a degree of freedom to enhance monitoring efficacy, which we refer to as measurement-aware routing. Traffic sub-populations can be routed (or rerouted) on the fly to optimally leverage existing monitoring infrastructures. Implementing dynamic measurement-aware routing (DMR) in practice is riddled with challenges. Three major challenges are how to dynamically assess the importance of traffic flows; how to aggregate flows (and hence take a common action for them) in order to conserve routing table entries; and how to achieve traffic routing/rerouting in a manner that is least disruptive to normal network performance while maximizing the measurement utility. This article takes a closer look at these challenges and discusses how they manifest for different types of networks. Through an OpenFlow prototype, we show how DMR can be applied in enterprise networks. Using global iceberg detection and capture as a driving application, we demonstrate how our solutions successfully route suspected iceberg flows to a DPI box for further processing, while preserving balanced load distribution in the overall network.

31. Measurement and diagnosis of address misconfigured P2P traffic (Networking)
Through a measurement study, we discover an interesting phenomenon, P2P address misconfiguration, in which a large number of peers send P2P file downloading requests to a "random" target on the Internet. Through measuring three large datasets spanning four years and across five different /8 networks, we find that address-misconfigured P2P traffic on average contributes 38.9 percent of Internet background radiation, increasing by more than 100 percent every year. To detect and diagnose such unwanted traffic, we design P2PScope, a measurement tool. After analyzing about 2 Tbytes of data and tracking millions of peers, we found that, in all the P2P systems, address misconfiguration is caused by resource mapping contamination: the sources returned for a given file ID through P2P indexing are not valid. Different P2P systems have different reasons for such contamination. For eMule, we find that the root cause is mainly a network byte-order problem in the eMule Source Exchange protocol. For BitTorrent misconfiguration, one reason is that anti-P2P companies actively inject bogus peers into the P2P system. Another reason is that the KTorrent implementation has a byte-order problem.
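The byte-order root cause named in entry 31 is easy to demonstrate. The sketch below shows how an IPv4 address serialized in little-endian host order but decoded in network (big-endian) order points at an unrelated host; the example address is arbitrary, not from the paper's traces.

    # Minimal illustration of the byte-order pitfall behind the eMule finding
    # in entry 31. The peer address below is an arbitrary example.
    import ipaddress
    import struct

    peer = ipaddress.IPv4Address("130.207.7.36")

    # Correct: network byte order (big-endian) on the wire.
    wire_ok = struct.pack("!I", int(peer))
    # Bug: little-endian host order written to the wire instead.
    wire_bad = struct.pack("<I", int(peer))

    # The receiver always decodes big-endian, as the protocol specifies.
    print(ipaddress.IPv4Address(struct.unpack("!I", wire_ok)[0]))   # 130.207.7.36
    print(ipaddress.IPv4Address(struct.unpack("!I", wire_bad)[0]))  # 36.7.207.130

The corrupted address is a perfectly valid IPv4 address belonging to someone else, which is exactly how such bugs turn file-sharing requests into background radiation.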
32. Packet traffic: a good data source for wireless sensor network modeling and anomaly detection (Networking)
The wireless sensor network (WSN) has emerged as a promising technology. In WSNs, sensor nodes are deployed in a distributed fashion to collect information of interest from the environment. Because of the mission of WSNs, most node-wide as well as network-wide activities are manifested in packet traffic. As a result, packet traffic becomes a good data source for modeling sensor node as well as sensor network behaviors. In this article, the methodology of modeling node and network behavior profiles using packet traffic is exemplified. In addition, node as well as network anomalies are shown to be detectable by monitoring the evolution of node/network behavior profiles.

33. Experiences of Internet traffic monitoring with Tstat (Networking)
Since the early days of the Internet, network traffic monitoring has always played a strategic role in understanding and characterizing users' activities. In this article, we present our experience in engineering and deploying Tstat, an open source passive monitoring tool that has been developed over the past 10 years. Started as a scalable tool to continuously monitor packets that flow on a link, Tstat has evolved into a complex application that gives network researchers and operators the possibility to derive extended and complex measurements thanks to advanced traffic classifiers. After discussing Tstat capabilities and internal design, we present some examples of measurements collected by deploying Tstat at the edge of several ISP networks in past years. While other works report a continuous decline of P2P traffic, with streaming and file hosting services rapidly increasing in popularity, the results presented in this article picture a different scenario. First, the P2P decline has stopped, and in the last months of 2010 there was a counter-tendency toward increasing P2P traffic over UDP, so the common belief that UDP traffic is negligible is no longer true. Furthermore, streaming and file hosting applications have either stabilized or are experiencing decreasing traffic shares. We then discuss the scalability issues software-based tools have to cope with when deployed in real networks, showing the importance of properly identifying bottlenecks.

34. Network traffic monitoring, analysis and anomaly detection [Guest Editorial] (Networking)
Modern computer networks are increasingly pervasive, complex, and ever-evolving due to factors like the enormous growth in the number of network users, the continuous appearance of new network applications, the increasing amount of data transferred, and the diversity of user behaviors. Understanding and measuring such a network is a difficult yet vital task for network management and diagnosis. Network traffic monitoring, analysis, and anomaly detection provide useful tools for understanding network behavior and determining network performance and reliability, so as to effectively and promptly troubleshoot and resolve various issues in practice.

35. Scheduling Grid Tasks in Face of Uncertain Communication Demands (Network and Service Management)
Grid scheduling is essential to Quality of Service provisioning as well as to efficient management of grid resources. Grid scheduling usually considers the state of the grid resources as well as application demands. However, such demands are generally unknown for highly demanding applications, since these often generate data which will be transferred during their execution. Without appropriate assessment of these demands, scheduling decisions can lead to poor performance. Thus, it is of paramount importance to consider uncertainties in the formulation of a grid scheduling problem. This paper introduces the IPDT-FUZZY scheduler, a scheduler which considers the demands of grid applications under such uncertainties. The scheduler uses fuzzy optimization, and both computational and communication demands are expressed as fuzzy numbers. Its performance was evaluated, and it was shown to be attractive when communication requirements are uncertain. Its efficacy is compared, via simulation, to that of a deterministic counterpart scheduler, and the results reinforce its adequacy for dealing with the lack of accuracy in the estimation of communication demands.
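Entry 35's central idea of expressing uncertain demands as fuzzy numbers can be sketched with triangular fuzzy numbers. The demand values and the centroid defuzzification below are illustrative assumptions, not the IPDT-FUZZY scheduler's actual formulation.

    # Hedged sketch: triangular fuzzy numbers (low, peak, high) for uncertain
    # compute and communication demands, combined and defuzzified by centroid.
    def add_tfn(a, b):
        """Add two triangular fuzzy numbers given as (low, peak, high)."""
        return tuple(x + y for x, y in zip(a, b))

    def centroid(tfn):
        """Defuzzify a triangular fuzzy number to a single crisp value."""
        return sum(tfn) / 3.0

    cpu_demand = (10.0, 12.0, 18.0)    # task runtime, seconds (uncertain)
    comm_demand = (2.0, 5.0, 11.0)     # data transfer time, seconds (uncertain)
    total = add_tfn(cpu_demand, comm_demand)
    print(total, centroid(total))      # (12.0, 17.0, 29.0) and about 19.33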
36. Improving Application Placement for Cluster-Based Web Applications (Network and Service Management)
Dynamic application placement for clustered web applications heavily influences system performance and quality of user experience. Existing approaches claim that they strive to maximize throughput, keep resource utilization balanced across servers, and minimize the start/stop cost of application instances. However, they fail to minimize the worst case of server utilization, so the load balancing performance is not optimal. Moreover, some applications need to communicate with each other, which we call dependent applications; their network cost should also be taken into consideration. In this paper, we investigate how to minimize the resource utilization of servers in the worst case, aiming at improving load balancing among clustered servers. Our contribution is two-fold. First, we propose and define a new optimization objective: limiting the worst case of each individual server's utilization, formulated as a min-max problem. A novel framework based on binary search is proposed to detect an optimal load balancing solution. Second, we define system cost as the weighted combination of both placement change and inter-application communication cost. By maximizing the number of instances of dependent applications that reside in the same set of servers, the basic load-shifting and placement-change procedures are enhanced to minimize whole-system cost. Extensive experiments have been conducted and effectively demonstrate that: 1) the proposed framework achieves a good allocation for clustered web applications, that is, requests are evenly allocated among servers while throughput is still maximized; 2) the total system cost remains at a low level; and 3) our algorithm can approximate an optimal solution within polynomial time and is promising for practical implementation in real deployments.
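The binary-search idea in entry 36 can be sketched as a search over a utilization cap with a feasibility test. The greedy first-fit check and the toy loads below are simplifying assumptions; the paper's framework handles richer placement constraints.

    # Hedged sketch of min-max load balancing via binary search (entry 36).
    def feasible(loads, n_servers, cap):
        """Can every application load be packed so no server exceeds cap?"""
        used = [0.0] * n_servers
        for load in sorted(loads, reverse=True):   # first-fit decreasing
            for i in range(n_servers):
                if used[i] + load <= cap:
                    used[i] += load
                    break
            else:
                return False
        return True

    def min_max_utilization(loads, n_servers, eps=1e-3):
        lo, hi = max(loads), sum(loads)            # bounds on the optimal cap
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if feasible(loads, n_servers, mid):
                hi = mid
            else:
                lo = mid
        return hi

    print(min_max_utilization([0.5, 0.4, 0.4, 0.3, 0.2, 0.2], n_servers=3))
    # -> about 0.7, e.g. servers {0.5, 0.2}, {0.4, 0.3}, {0.4, 0.2}

The first-fit feasibility check is a heuristic stand-in; any exact or approximate packing test can be slotted into the same binary-search skeleton.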
37. Efficient Network Modification to Improve QoS Stability at Failure (Network and Service Management)
When a link or node fails, flows are detoured around the failed portion, so the hop count of flows and the link load could change dramatically as a result of the failure. As real-time traffic such as video or voice increases on the Internet, ISPs are required to provide stable quality as well as connectivity at failures. For ISPs, how to effectively improve the stability of these qualities at failures with the minimum investment cost is an important issue, and they need to effectively select a limited number of locations at which to add link facilities. In this paper, efficient design algorithms to select the locations for adding link facilities are proposed, and their effectiveness is evaluated using the actual backbone networks of 36 commercial ISPs.

38. Spectral Models for Bitrate Measurement from Packet Sampled Traffic (Network and Service Management)
In network measurement systems, packet sampling techniques are usually adopted to reduce the overall amount of data to collect and process. Being based on a subset of packets, they introduce estimation errors that have to be properly counteracted by fine tuning of the sampling strategy and sophisticated inversion methods. This problem has been deeply investigated in the literature, with particular attention to the statistical properties of packet sampling and to the recovery of the original network measurements. Herein, we propose a novel approach to predict the energy of the sampling error in the real-time estimation of traffic bitrate, based on spectral analysis in the frequency domain. We start by demonstrating that the error introduced by packet sampling can be modeled as an aliasing effect in the frequency domain. Then, we derive closed-form expressions for the signal-to-noise ratio (SNR) to predict the distortion of traffic bitrate estimates over time. The accuracy of the proposed SNR metric is validated by means of real packet traces. Furthermore, a comparison with an analogous SNR expression derived using classic stochastic tools is presented, showing that the frequency-domain approach yields higher accuracy when traffic-rate measurements are carried out at fine time granularity.
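Entry 38's starting point, estimating a bitrate from sampled packets and quantifying the resulting distortion as an SNR, can be reproduced numerically. The synthetic trace and the 1-in-10 uniform packet sampling below are assumptions for illustration; the paper's closed-form spectral expressions are not derived here.

    # Hedged numeric sketch for entry 38: empirical SNR of a bitrate estimate
    # built from uniformly sampled packets of a synthetic trace.
    import numpy as np

    rng = np.random.default_rng(0)
    n_packets, rate = 200_000, 1 / 10          # sample 1 packet in 10

    # Synthetic trace: packet sizes in bytes, timestamps over 100 seconds.
    sizes = rng.choice([64, 576, 1500], size=n_packets, p=[0.5, 0.2, 0.3])
    times = np.sort(rng.uniform(0, 100, size=n_packets))

    def bitrate(times, sizes, bin_s=0.1):
        """Per-bin bitrate estimate in bits per second."""
        bins = np.arange(0, 100 + bin_s, bin_s)
        bytes_per_bin, _ = np.histogram(times, bins=bins, weights=sizes)
        return bytes_per_bin * 8 / bin_s

    true_rate = bitrate(times, sizes)
    keep = rng.random(n_packets) < rate        # uniform packet sampling
    # Renormalize by the sampling rate to get an unbiased rate estimate.
    est_rate = bitrate(times[keep], sizes[keep]) / rate

    err = est_rate - true_rate
    snr_db = 10 * np.log10(np.sum(true_rate**2) / np.sum(err**2))
    print(f"empirical SNR of the sampled bitrate estimate: {snr_db:.1f} dB")

Shrinking the bin width or the sampling rate visibly lowers this empirical SNR, which is the fine-granularity regime the entry highlights.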
39. Vulnerability Detection Systems: Think Cyborg, Not Robot (Security & Privacy)
Systems proposed in academic research have so far failed to make a significant impact on real-world vulnerability detection. Most software bugs are still found by methods with little input from static-analysis and verification research. These research areas could have a significant impact on software security, but first we need a shift in research goals and approaches. We need systems that incorporate human code auditors' knowledge and abilities, and we need evaluation methods that actually test proposed systems' usability in real situations. Without changes, academic research will continue to be ignored by the security community, and opportunities to build better tools for finding bugs and understanding software will be missed.

40. Dynamic QoS Management and Optimization in Service-Based Systems (Software Engineering)
Service-based systems that are dynamically composed at runtime to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimization of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment, and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analyzed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.

41. Seeking Quality of Web Service Composition in a Semantic Dimension (Knowledge and Data Engineering)
Ranking and optimization of web service compositions represent challenging areas of research with significant implications for the realization of the "Web of Services" vision. "Semantic web services" use formal semantic descriptions of web service functionality and interfaces to enable automated reasoning over web service compositions. To judge the quality of the overall composition, for example, we can start by calculating the semantic similarities between outputs and inputs of connected constituent services, and aggregate these values into a measure of semantic quality for the composition. This paper takes a specific interest in combining semantic and nonfunctional criteria such as quality of service (QoS) to evaluate quality in web service composition. It proposes a novel and extensible model balancing the new dimension of semantic quality (as a functional quality metric) with a QoS metric, and using them together as ranking and optimization criteria. It also demonstrates the utility of genetic algorithms to allow optimization within the context of the large number of services foreseen by the "Web of Services" vision. We test the performance of the overall approach using a set of simulation experiments, and discuss its advantages and weaknesses.
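The aggregation step described in entry 41 can be sketched as a single scoring function that blends a composition's semantic quality with a QoS score. The similarity values, the weights, and the weakest-link aggregation are illustrative choices, not the paper's actual model.

    # Hedged sketch of entry 41's ranking idea: blend semantic quality and QoS.
    def composition_score(link_similarities, qos, w_sem=0.6, w_qos=0.4):
        """link_similarities: semantic match (0..1) per output->input link.
        qos: normalized quality-of-service score (0..1) for the composition."""
        # Use the weakest link as the semantic quality of the whole composition.
        semantic_quality = min(link_similarities)
        return w_sem * semantic_quality + w_qos * qos

    candidates = {
        "plan_a": ([0.9, 0.8, 0.95], 0.70),
        "plan_b": ([0.99, 0.4, 0.9], 0.95),   # one poor semantic link
    }
    ranked = sorted(candidates,
                    key=lambda k: composition_score(*candidates[k]),
                    reverse=True)
    print(ranked)  # plan_a first: 0.6*0.8+0.4*0.7=0.76 vs 0.6*0.4+0.4*0.95=0.62

In a real optimizer this score would be the fitness function that a genetic algorithm maximizes over candidate compositions.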
42. Mining Cluster-Based Temporal Mobile Sequential Patterns in Location-Based Service Environments (Knowledge and Data Engineering)
Research on Location-Based Services (LBS) has been emerging in recent years due to a wide range of potential applications. One of the active topics is the mining and prediction of mobile movements and associated transactions. Most existing studies focus on discovering mobile patterns from whole logs. However, this kind of pattern may not be precise enough for prediction, since the differentiated mobile behaviors among users and across temporal periods are not considered. In this paper, we propose a novel algorithm, namely Cluster-based Temporal Mobile Sequential Pattern Mine (CTMSP-Mine), to discover Cluster-based Temporal Mobile Sequential Patterns (CTMSPs). Moreover, a prediction strategy is proposed to predict subsequent mobile behaviors. In CTMSP-Mine, user clusters are constructed by a novel algorithm named Cluster-Object-based Smart Cluster Affinity Search Technique (CO-Smart-CAST), and similarities between users are evaluated by the proposed measure, Location-Based Service Alignment (LBS-Alignment). Meanwhile, a time segmentation approach is presented to find segmenting time intervals where similar mobile characteristics exist. To the best of our knowledge, this is the first work on mining and prediction of mobile behaviors that considers user relations and temporal properties simultaneously. Through experimental evaluation under various simulated conditions, the proposed methods are shown to deliver excellent performance.

43. Locally Consistent Concept Factorization for Document Clustering (Knowledge and Data Engineering)
Previous studies have demonstrated that document clustering performance can be improved significantly in lower-dimensional linear subspaces. Recently, matrix factorization-based techniques, such as Nonnegative Matrix Factorization (NMF) and Concept Factorization (CF), have yielded impressive results. However, both of them effectively see only the global Euclidean geometry, whereas the local manifold geometry is not fully considered. In this paper, we propose a new approach to extract document concepts that are consistent with the manifold geometry, such that each concept corresponds to a connected component. Central to our approach is a graph model which captures the local geometry of the document submanifold. Thus, we call it Locally Consistent Concept Factorization (LCCF). By using the graph Laplacian to smooth the document-to-concept mapping, LCCF can extract concepts with respect to the intrinsic manifold structure, and thus documents associated with the same concept can be well clustered. The experimental results on TDT2 and Reuters-21578 show that the proposed approach provides a better representation and achieves better clustering results in terms of accuracy and mutual information.
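The graph model at the core of entry 43 can be sketched concretely: a k-NN affinity graph over documents and its Laplacian, whose quadratic form penalizes document-to-concept mappings that vary across neighboring documents. The random data and k=2 are illustrative; LCCF itself is not implemented here.

    # Hedged sketch of a k-NN graph Laplacian smoothness penalty (entry 43).
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((8, 20))                    # 8 documents, 20 terms (toy data)

    # Cosine-similarity k-NN affinity matrix W.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, 0.0)
    k = 2
    W = np.zeros_like(S)
    for i in range(len(S)):
        for j in np.argsort(S[i])[-k:]:        # keep the k nearest neighbors
            W[i, j] = W[j, i] = S[i, j]

    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
    V = rng.random((8, 3))                     # toy document-to-concept mapping
    smoothness = np.trace(V.T @ L @ V)         # small => consistent with graph
    print(f"Laplacian smoothness penalty: {smoothness:.3f}")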
44. Knowledge Discovery in Services (KDS): Aggregating Software Services to Discover Enterprise Mashups (Knowledge and Data Engineering)
Service mashup is the act of integrating the resulting data of two complementary software services into a common picture. Such an approach is promising with respect to the discovery of new types of knowledge. However, before service mashup routines can be executed, it is necessary to predict which services (of an open repository) are viable candidates. Similar to Knowledge Discovery in Databases (KDD), we introduce the Knowledge Discovery in Services (KDS) process that identifies mashup candidates. In this work, the KDS process is specialized to address a repository of open services that do not contain semantic annotations. In these situations, specialized techniques are required to determine equivalences among open services with reasonable precision. This paper introduces a bottom-up process for KDS that adapts to the environment of services in which it operates. Detailed experiments are discussed that evaluate KDS techniques on an open repository of services from the Internet and on a repository of services created in a controlled environment.

45. Design and Implementation of an Intrusion Response System for Relational Databases (Knowledge and Data Engineering)
The intrusion response component of an overall intrusion detection system is responsible for issuing a suitable response to an anomalous request. We propose the notion of database response policies to support our intrusion response system tailored for a DBMS. Our interactive response policy language makes it very easy for database administrators to specify appropriate response actions for different circumstances, depending upon the nature of the anomalous request. The two main issues that we address in the context of such response policies are policy matching and policy administration. For the policy matching problem, we propose two algorithms that efficiently search the policy database for policies matching an anomalous request. We also extend the PostgreSQL DBMS with our policy matching mechanism, and report experimental results. The experimental evaluation shows that our techniques are very efficient. The other issue that we address is the administration of response policies to prevent malicious modifications to policy objects by legitimate users. We propose a novel Joint Threshold Administration Model (JTAM) that is based on the principle of separation of duty. The key idea in JTAM is that a policy object is jointly administered by at least k database administrators (DBAs); that is, any modification made to a policy object will be invalid unless it has been authorized by at least k DBAs. We present the design details of JTAM, which is based on a cryptographic threshold signature scheme, and show how JTAM prevents malicious modifications to policy objects by authorized users. We also implement JTAM in the PostgreSQL DBMS, and report experimental results on the efficiency of our techniques.
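The k-of-n rule in entry 45 reduces to a simple control-flow pattern. The sketch below loudly simplifies: the real JTAM uses a cryptographic threshold signature, whereas a plain set of approver IDs stands in for it here purely to show the separation-of-duty check.

    # Loudly simplified k-of-n authorization check in the spirit of JTAM
    # (entry 45); the cryptographic threshold signature is not modeled.
    def policy_change_valid(approvals, k):
        """approvals: IDs of DBAs who authorized this policy modification."""
        return len(set(approvals)) >= k

    print(policy_change_valid({"dba1", "dba2"}, k=3))          # False: blocked
    print(policy_change_valid({"dba1", "dba2", "dba3"}, k=3))  # True: applied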
46. Automatic Discovery of Personal Name Aliases from the Web (Knowledge and Data Engineering)
An individual is typically referred to by numerous name aliases on the web. Accurate identification of the aliases of a given person name is useful in various web-related tasks such as information retrieval, sentiment analysis, personal name disambiguation, and relation extraction. We propose a method to extract aliases of a given personal name from the web. Given a personal name, the proposed method first extracts a set of candidate aliases. Second, we rank the extracted candidates according to the likelihood of a candidate being a correct alias of the given name. We propose a novel, automatically extracted lexical pattern-based approach to efficiently extract a large set of candidate aliases from snippets retrieved from a web search engine. We define numerous ranking scores to evaluate candidate aliases using three approaches: lexical pattern frequency, word co-occurrences in an anchor text graph, and page counts on the web. To construct a robust alias detection system, we integrate the different ranking scores into a single ranking function using ranking support vector machines. We evaluate the proposed method on three data sets: an English personal names data set, an English place names data set, and a Japanese personal names data set. The proposed method outperforms numerous baselines and previously proposed name alias extraction methods, achieving a statistically significant mean reciprocal rank (MRR) of 0.67. Experiments carried out using location names and Japanese personal names suggest the possibility of extending the proposed method to extract aliases for different types of named entities and for different languages. Moreover, the aliases extracted using the proposed method are successfully utilized in an information retrieval task, and improve recall by 20 percent in a relation-detection task.

47. Classification and Novel Class Detection in Concept-Drifting Data Streams under Time Constraints (Knowledge and Data Engineering)
Most existing data stream classification techniques ignore one important aspect of stream data: the arrival of a novel class. We address this issue and propose a data stream classification technique that integrates a novel class detection mechanism into traditional classifiers, enabling automatic detection of novel classes before the true labels of the novel class instances arrive. The novel class detection problem becomes more challenging in the presence of concept-drift, when the underlying data distributions evolve in streams. In order to determine whether an instance belongs to a novel class, the classification model sometimes needs to wait for more test instances to discover similarities among those instances. A maximum allowable wait time Tc is imposed as a time constraint to classify a test instance. Furthermore, most existing stream classification approaches assume that the true label of a data point can be accessed immediately after the data point is classified. In reality, a time delay Tl is involved in obtaining the true label of a data point, since manual labeling is time consuming. We show how to make fast and correct classification decisions under these constraints, and apply them to real benchmark data. Comparison with state-of-the-art stream classification techniques proves the superiority of our approach.
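The mean reciprocal rank reported in entry 46 above is the average of 1/rank of the first correct alias per query. A small sketch follows; the toy ranked lists and gold aliases are invented for illustration (the "Godzilla" example echoes the kind of celebrity alias the task targets).

    # Mean reciprocal rank (MRR), the evaluation metric of entry 46.
    def mean_reciprocal_rank(ranked_lists, gold):
        total = 0.0
        for name, ranking in ranked_lists.items():
            for rank, candidate in enumerate(ranking, start=1):
                if candidate in gold[name]:
                    total += 1.0 / rank
                    break
        return total / len(ranked_lists)

    ranked = {"hideki matsui": ["godzilla", "matsui", "slugger"],
              "the boss": ["springsteen", "bruce springsteen"]}
    gold = {"hideki matsui": {"godzilla"}, "the boss": {"bruce springsteen"}}
    print(mean_reciprocal_rank(ranked, gold))   # (1/1 + 1/2) / 2 = 0.75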
48. A Machine Learning Approach for Identifying Disease-Treatment Relations in Short Texts (Knowledge and Data Engineering)
The Machine Learning (ML) field has gained momentum in almost every domain of research, and has just recently become a reliable tool in the medical domain. The empirical domain of automatic learning is used in tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and overall patient management care. ML is envisioned as a tool by which computer-based systems can be integrated into the healthcare field in order to provide better, more efficient medical care. This paper describes an ML-based methodology for building an application that is capable of identifying and disseminating healthcare information. It extracts sentences from published medical papers that mention diseases and treatments, and identifies the semantic relations that exist between diseases and treatments. Our evaluation results for these tasks show that the proposed methodology obtains reliable outcomes that could be integrated into an application to be used in the medical care domain. The potential value of this paper lies in the ML settings that we propose and in the fact that we outperform previous results on the same data set.
MATLAB 2011

1. Face Recognition by Exploring Information Jointly in Space, Scale and Orientation (Image Processing)
Information jointly contained in image space, scale, and orientation domains can provide rich and important clues not seen in any of these domains individually. The position, spatial frequency, and orientation selectivity properties are believed to have an important role in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in image space, scale, and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolution with multiscale and multi-orientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also in the different scale and orientation responses. This way, information from different domains is explored to give a good face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on the FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over existing ones.

2. Detection of Architectural Distortion in Prior Mammograms (Image Processing)
We present methods for the detection of sites of architectural distortion in prior mammograms of interval-cancer cases. We hypothesize that screening mammograms obtained prior to the detection of cancer could contain subtle signs of early stages of breast cancer, in particular, architectural distortion. The methods are based upon Gabor filters, phase portrait analysis, a novel method for the analysis of the angular spread of power, fractal analysis, Laws' texture energy measures derived from geometrically transformed regions of interest (ROIs), and Haralick's texture features. With Gabor filters and phase portrait analysis, 4224 ROIs were automatically obtained from 106 prior mammograms of 56 interval-cancer cases, including 301 true-positive ROIs related to architectural distortion, and from 52 mammograms of 13 normal cases. For each ROI, the fractal dimension, the entropy of the angular spread of power, 10 Laws' measures, and Haralick's 14 features were computed. The areas under the receiver operating characteristic curves obtained using the features selected by stepwise logistic regression and the leave-one-ROI-out method are 0.76 with the Bayesian classifier, 0.75 with Fisher linear discriminant analysis, and 0.78 with a single-layer feed-forward neural network. Free-response receiver operating characteristics indicated sensitivities of 0.80 and 0.90 at 5.8 and 8.1 false positives per image, respectively, with the Bayesian classifier and the leave-one-image-out method.
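The local binary pattern step in entry 1 assigns each pixel a code built from thresholding its neighbors at the center value. The basic 8-neighbor variant is sketched below on a toy grayscale array; in the paper LBP is applied to each Gabor scale/orientation response as well, which this sketch does not reproduce.

    # Basic 8-neighbor local binary pattern (LBP) sketch for entry 1.
    import numpy as np

    def lbp8(img):
        """LBP code per interior pixel: 8 neighbors thresholded at the center."""
        # Offsets of the 8 neighbors, clockwise from the top-left.
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        center = img[1:-1, 1:-1]
        code = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offs):
            nbr = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            code |= (nbr >= center).astype(np.uint8) << bit
        return code

    img = np.array([[10, 20, 30],
                    [40, 25, 10],
                    [ 5, 25, 60]])
    print(lbp8(img))   # -> [[180]], the code for the single interior pixel

Histograms of these codes over image blocks form the descriptors that the paper's classifiers then compare.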
3. Enhanced Assessment of the Wound-Healing Process by Accurate Multiview Tissue Classification (Image Processing)
With the widespread use of digital cameras, freehand wound imaging has become common practice in clinical settings. There is, however, still a demand for a practical tool for accurate wound-healing assessment, combining dimensional measurements and tissue classification in a single user-friendly system. We achieved the first part of this objective by computing a 3-D model for wound measurements using uncalibrated vision techniques. We focus here on tissue classification from color and texture region descriptors computed after unsupervised segmentation. Due to perspective distortions, uncontrolled lighting conditions, and varying viewpoints, wound assessments vary significantly between patient examinations. The main contribution of this paper is to overcome this drawback with a multiview strategy for tissue classification, relying on a 3-D model onto which tissue labels are mapped and classification results merged. The experimental classification tests demonstrate that enhanced repeatability and robustness are obtained, and that metric assessment is achieved through real area and volume measurements and wound outline extraction. This innovative tool is intended for use not only in therapeutic follow-up in hospitals but also for telemedicine purposes and clinical research, where repeatability and accuracy of wound assessment are critical.

4. A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features (Image Processing)
This paper presents a new supervised method for blood vessel detection in digital retinal images. This method uses a neural network (NN) scheme for pixel classification and computes a 7-D vector composed of gray-level and moment invariants-based features for pixel representation. The method was evaluated on the publicly available DRIVE and STARE databases, widely used for this purpose since they contain retinal images where the vascular structure has been precisely marked by experts. Method performance on both sets of test images is better than that of other existing solutions in the literature. The method proves especially accurate for vessel detection in STARE images; its application to this database (even when the NN was trained on the DRIVE database) outperforms all analyzed segmentation approaches. Its effectiveness and robustness with different image conditions, together with its simplicity and fast implementation, make this blood vessel segmentation proposal suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection.
5. Graph Run-Length Matrices for Histopathological Image Segmentation (Image Processing)
The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability, as it mainly relies on the visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates background knowledge of the tissue organization into segmentation. For this purpose, it quantifies the spatial relations of cytological tissue components by constructing a graph, and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices; however, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective for histopathological image segmentation.

6. X-ray Categorization and Retrieval on the Organ and Pathology Level, Using Patch-Based Visual Words (Image Processing)
In this study, we present an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives. The methodology is based on a local patch representation of the image content, using a "bag of visual words" approach. We explore the effects of various parameters on system performance, and show best results using dense sampling of simple features with spatial content and a nonlinear kernel-based support vector machine (SVM) classifier. In a recent international competition, the system was ranked first in discriminating orientation and body regions in X-ray images. In addition to organ-level discrimination, we show an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology. The system discriminates between healthy and pathological cases, and is also shown to successfully identify specific pathologies in a set of chest radiographs taken from a routine hospital examination. This is a first step towards similarity-based categorization, which has major clinical implications for computer-assisted diagnostics.

7. Standard Deviation for Obtaining the Optimal Direction in the Removal of Impulse Noise (Image Processing)
This letter proposes a new technique for restoring images distorted by random-valued impulse noise. The detection process is based on finding the optimum direction by calculating the standard deviation in different directions in the filtering window. The tested pixel is deemed original if it is similar to the pixels in the optimum direction. Extensive simulations show that the proposed technique has superior performance compared to other existing methods, especially at high noise rates.
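The direction-based detection idea in entry 7 can be sketched as follows: inside a window, compute the standard deviation of the pixels along each of four directions, take the minimum-deviation direction as the "optimal" one, and flag the center pixel as noisy if it differs too much from the pixels along it. The 5x5 window, the four directions, and the threshold are assumed values, not the letter's exact parameters.

    # Hedged sketch of the directional standard-deviation detector (entry 7).
    import numpy as np

    DIRS = {  # pixel offsets through a 5x5 window, center excluded
        "horizontal": [(0, d) for d in (-2, -1, 1, 2)],
        "vertical":   [(d, 0) for d in (-2, -1, 1, 2)],
        "diag_main":  [(d, d) for d in (-2, -1, 1, 2)],
        "diag_anti":  [(d, -d) for d in (-2, -1, 1, 2)],
    }

    def is_impulse(win, thresh=40.0):
        """win: 5x5 array. True if the center pixel looks like impulse noise."""
        center = win[2, 2]
        stds = {name: np.std([win[2 + dy, 2 + dx] for dy, dx in offs])
                for name, offs in DIRS.items()}
        best = min(stds, key=stds.get)                    # optimum direction
        line = [win[2 + dy, 2 + dx] for dy, dx in DIRS[best]]
        return abs(center - np.mean(line)) > thresh

    smooth = np.full((5, 5), 120.0)
    noisy = smooth.copy()
    noisy[2, 2] = 255.0                                   # impulse at the center
    print(is_impulse(smooth), is_impulse(noisy))          # False True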
8. Removal of High Density Salt and Pepper Noise Through Modified Decision Based Unsymmetric Trimmed Median Filter (Image Processing)
A modified decision-based unsymmetric trimmed median filter algorithm for the restoration of grayscale and color images that are highly corrupted by salt and pepper noise is proposed in this paper. The proposed algorithm replaces the noisy pixel by the trimmed median value when pixel values other than 0 and 255 are present in the selected window; when all the pixel values in the window are 0s and 255s, the noisy pixel is replaced by the mean value of all the elements in the window. The proposed algorithm shows better results than the Standard Median Filter (MF), Decision Based Algorithm (DBA), Modified Decision Based Algorithm (MDBA), and Progressive Switched Median Filter (PSMF). The proposed algorithm is tested against different grayscale and color images, and it gives better Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF).

9. Image Resolution Enhancement by Using Discrete and Stationary Wavelet Decomposition (Image Processing)
In this correspondence, the authors propose an image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by the discrete wavelet transform (DWT) and the input image. The edges are enhanced by introducing an intermediate stage using the stationary wavelet transform (SWT). DWT is applied in order to decompose an input image into different subbands. Then the high-frequency subbands, as well as the input image, are interpolated. The estimated high-frequency subbands are modified by using the high-frequency subbands obtained through SWT. Then all these subbands are combined to generate a new high-resolution image by using the inverse DWT (IDWT). The quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.

10. Automatic Optic Disc Detection From Retinal Images by a Line Operator (Image Processing)
Under the framework of computer-aided eye disease diagnosis, this paper presents an automatic optic disc (OD) detection technique. The proposed technique makes use of the unique circular brightness structure associated with the OD, i.e., the OD usually has a circular shape and is brighter than the surrounding pixels, whose intensity becomes gradually darker with distance from the OD center. A line operator is designed to capture such circular brightness structure; it evaluates the image brightness variation along multiple line segments of specific orientations that pass through each retinal image pixel. The orientation of the line segment with the minimum/maximum variation has a specific pattern that can be used to locate the OD accurately. The proposed technique has been tested over four public datasets that include 130, 89, 40, and 81 images of healthy and pathological retinas, respectively. Experiments show that the designed line operator is tolerant to different types of retinal lesions and imaging artifacts, and an average OD detection accuracy of 97.4% is obtained.
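The decision rule in entry 8 above can be sketched for a single window. The 3x3 window size is an assumption; the "trimmed" median simply excludes the extreme values 0 and 255 before taking the median.

    # Hedged single-window sketch of the decision-based trimmed median filter
    # (entry 8); salt = 255, pepper = 0.
    import numpy as np

    def restore_center(win):
        """win: 3x3 array corrupted by salt/pepper noise; returns new center."""
        center = win[1, 1]
        if center not in (0, 255):         # decision step: pixel is noise-free
            return center
        vals = win.ravel()
        trimmed = vals[(vals != 0) & (vals != 255)]
        if trimmed.size:                   # usual case: trimmed median
            return int(np.median(trimmed))
        return int(vals.mean())            # all-extreme window: use the mean

    win = np.array([[255,  12,   0],
                    [ 14, 255,  13],
                    [  0,  11, 255]])
    print(restore_center(win))   # -> 12 (int of the 12.5 median of {11,12,13,14})

Sliding this rule over the whole image, window by window, yields the full filter.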
11. Wavelet-Based Image Texture Classification Using Local Energy Histograms (Image Processing)
In this letter, we propose an efficient one-nearest-neighbor classifier of texture via the contrast of the local energy histograms of all the wavelet subbands between an input texture patch and each sample texture patch in a given training set. In particular, the contrast is realized with a discrepancy measure which is simply a sum of symmetrized Kullback-Leibler divergences between the input and sample local energy histograms over all the wavelet subbands. Various experiments demonstrate that the proposed method obtains satisfactory texture classification accuracy in comparison with several current state-of-the-art texture classification approaches.

12. A Ringing-Artifact Reduction Method for Block-DCT-Based Image Resizing (Image Processing)
This paper proposes a new ringing-artifact reduction method for image resizing in the block discrete cosine transform (DCT) domain. The proposed method reduces ringing artifacts without further blurring, whereas previous approaches must find a compromise between blurring and ringing artifacts. The proposed method consists of DCT-domain filtering and image-domain post-processing, which reduce ripples in smooth regions as well as overshoot near strong edges. By generating a mask map of the overshoot regions, we combine a ripple-reduced image and an overshoot-reduced image according to the mask map in the image domain to obtain a ringing-artifact-reduced image. The experimental results show that the proposed method is computationally faster and produces visually finer images than previous ringing-artifact reduction approaches.
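The discrepancy measure in entry 11 above is concrete enough to sketch end to end: a sum over subbands of symmetrized Kullback-Leibler divergences, plugged into a one-nearest-neighbor decision. The histograms below are invented, and the wavelet decomposition itself is not shown.

    # Hedged sketch of entry 11's subband-histogram discrepancy and 1-NN rule.
    import numpy as np

    def sym_kl(p, q, eps=1e-12):
        """Symmetrized KL divergence between two normalized histograms."""
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def discrepancy(hists_a, hists_b):
        """Sum of symmetrized KL divergences over all wavelet subbands."""
        return sum(sym_kl(pa, pb) for pa, pb in zip(hists_a, hists_b))

    # One-nearest-neighbor classification over two training textures.
    query = [[0.1, 0.7, 0.2], [0.3, 0.3, 0.4]]          # per-subband histograms
    train = {"canvas": [[0.1, 0.6, 0.3], [0.3, 0.4, 0.3]],
             "gravel": [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]}
    label = min(train, key=lambda t: discrepancy(query, train[t]))
    print(label)                                         # -> canvas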