SEMINAR BY:   GUIDED BY:
JEEVITHA R    Ms VIDYA S BENNUR
1EC08EC018
CONTENTS
•Introduction
•What is speech?
•Sources of information
•Brain computer interface (BCI)
•Speech synthesis
•Speech synthesis technologies
•Block diagram
•Features
•Methods of producing
            Electromyography
            Image processing
•Applications
•In fiction
•References
•Imagine you are in a theatre, a noisy restaurant, or on a bus: the noise around
you makes talking on a mobile phone a real problem. In future this problem could
be eliminated with "Silent Sound Technology", a new technology unveiled at the
CeBIT fair. It transforms lip movements into a computer-generated voice for the
listener at the other end of the call.
•A silent speech interface is a device that allows speech communication without
using the sound made when people vocalize their speech sounds. As such, it is a
type of electronic lip reader. It works by a computer identifying the phonemes
that an individual pronounces from non-auditory sources of information about
their speech movements. These are then used to recreate the speech using
speech synthesis.
•The device uses electromyography, monitoring the tiny muscular movements that
occur when we speak and converting them into electrical pulses that can be
turned into speech without a sound being uttered. It also uses an image
processing technique that converts digital data into a film image with minimal
corrections and calibration.
•Speech is the vocalized form of human communication. It is based upon the
syntactic combination of words and names drawn from very large vocabularies
(usually about 10,000 different words).

•A gestural form of human communication exists for the deaf in the form of
sign language. Speech in some cultures has become the basis of a written
language, often one that differs in its vocabulary, syntax and phonetics from its
associated spoken one, a situation called diglossia.
Sources of information:

Vocal tract      Bone conduction
The vocal tract is the cavity in human beings and in
animals where sound that is produced at the sound source (larynx in mammals;
syrinx in birds) is filtered.
Bone conduction is the conduction of sound to the inner ear through the bones
of the skull.
Some hearing aids employ bone conduction, achieving an effect equivalent to
hearing directly by means of the ears. A headset is ergonomically positioned on the
temple and cheek, and the electromechanical transducer, which converts electric
signals into mechanical vibrations, sends sound to the inner ear through the
cranial bones. Likewise, a microphone can be used to record spoken sounds via
bone conduction. The first description of a bone conduction hearing aid, in 1923,
was Hugo Gernsback's "Osophone", which he later elaborated on with his
"Phonosone".
Categories:
•Ordinary products
•Hearing aids
•Specialized communication products


     Advantages:
     Ears free
     High sound clarity in very noisy environments
     Can give a perception of stereo sound


         Disadvantages:
         Some implementations require more power than headphones.
         Less clear recording and playback than headphones.
A brain–computer interface (BCI), often called a mind–machine interface (MMI)
or direct neural interface, is a direct communication pathway between the brain
and an external device.

The field of BCI research and development has focused primarily on
neuroprosthetics applications that aim at restoring damaged hearing, sight and
movement. Thanks to the remarkable cortical plasticity of the brain, signals from
implanted prostheses can, after adaptation, be handled by the brain like natural
sensor or effector channels. Following years of animal experimentation, the first
neuroprosthetic devices implanted in humans appeared in the mid-1990s.
Speech synthesis is the artificial production of human speech. A
computer system used for this purpose is called a speech
synthesizer, and can be implemented in software or hardware.
•Synthesized speech can be created by concatenating pieces of
recorded speech that are stored in a database. Systems differ in the
size of the stored speech units; a system that stores phones or
diphones provides the largest output range, but may lack clarity.
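As a rough illustration of this concatenation step, the sketch below joins stored unit waveforms with a short cross-fade at each seam. The unit file names and the 16 kHz sample rate are assumptions for illustration, not part of any particular system.

```python
# Minimal concatenation sketch: stored recorded units are looked up and
# joined with a short linear cross-fade to smooth the seams.
import numpy as np
from scipy.io import wavfile

def crossfade_concat(segments, sr, fade_ms=10):
    """Join waveform segments with a short linear cross-fade at each seam."""
    fade = int(sr * fade_ms / 1000)
    out = segments[0].astype(np.float32)
    for seg in segments[1:]:
        seg = seg.astype(np.float32)
        ramp = np.linspace(0.0, 1.0, fade)
        out[-fade:] = out[-fade:] * (1 - ramp) + seg[:fade] * ramp
        out = np.concatenate([out, seg[fade:]])
    return out

# Hypothetical unit inventory, all recorded at the same rate:
# units = [wavfile.read(f"units/{u}.wav")[1] for u in ("hh-ah", "ah-l", "l-ow")]
# speech = crossfade_concat(units, sr=16000)
```

Real systems additionally smooth pitch and duration at the joins; the cross-fade here only hides waveform discontinuities.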
Speech synthesizing process:

[Flow diagram shown in the original slide.]
The quality of a speech synthesizer is judged by its similarity to the human
voice and by its ability to be understood. An intelligible text-to-speech program
allows people with visual impairments or reading disabilities to listen to written
works on a home computer. Many computer operating systems have included
speech synthesizers since the early 1980s.
The most important qualities of a speech synthesis system are naturalness and
intelligibility. Naturalness describes how closely the output sounds like human
speech, while intelligibility is the ease with which the output is understood.

     There are eight types of synthesis technologies:
 a) Concatenative synthesis
 b) Unit selection synthesis
 c) Diphone synthesis
 d) Domain-specific synthesis
 e) Formant synthesis
 f) Articulatory synthesis
 g) HMM-based synthesis
 h) Sine wave synthesis
CONCATENATIVE SYNTHESIS:
         Concatenative synthesis is based on the concatenation (or stringing
together) of segments of recorded speech. Generally, concatenative synthesis
produces the most natural-sounding synthesized speech.

UNIT SELECTION SYNTHESIS:
         Unit selection synthesis uses large databases of recorded speech. During
database creation, each recorded utterance is segmented into some or all of the
following: individual phones, diphones, half-phones, syllables, morphemes,
words, phrases, and sentences.
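The selection step is commonly described as minimizing a combination of target costs (how well a unit matches the wanted specification) and join costs (how smoothly neighbouring units connect). The toy dynamic-programming sketch below assumes placeholder cost functions supplied by the caller; it is an illustration of the search, not any production feature set.

```python
# Toy unit-selection search: pick one candidate unit per target position
# so that the total of target costs and join costs is minimal.
def select_units(targets, candidates, target_cost, join_cost):
    # best[i][u] = (lowest cost of any path ending in unit u, that path)
    best = [{u: (target_cost(targets[0], u), [u]) for u in candidates[0]}]
    for i in range(1, len(targets)):
        layer = {}
        for u in candidates[i]:
            # Cheapest predecessor, accounting for the join to u.
            prev_u, (prev_cost, path) = min(
                best[-1].items(),
                key=lambda kv: kv[1][0] + join_cost(kv[0], u))
            layer[u] = (prev_cost + join_cost(prev_u, u)
                        + target_cost(targets[i], u), path + [u])
        best.append(layer)
    return min(best[-1].values(), key=lambda v: v[0])[1]
```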

   DIPHONE SYNTHESIS:
        Diphone synthesis uses a minimal speech database containing all the
diphones (sound-to-sound transitions) occurring in a language. The number of
diphones depends on the phonotactics of the language: for example, Spanish has
about 800 diphones and German about 2,500. In diphone synthesis, only one
example of each diphone is contained in the speech database.
DOMAIN-SPECIFIC SYNTHESIS:
          Domain-specific synthesis concatenates prerecorded words and
phrases to create complete utterances. It is used in applications where the
variety of texts the system will output is limited to a particular domain, like
transit schedule announcements or weather reports.

FORMANT SYNTHESIS:
    Formant synthesis does not use human speech samples at runtime.
Instead, the synthesized speech output is created using additive synthesis
and an acoustic model (physical modelling synthesis). Parameters such as
fundamental frequency, voicing, and noise levels are varied over time to
create a waveform of artificial speech. This method is sometimes called
rules-based synthesis.
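A crude sketch of the idea, assuming illustrative formant values for a steady /a/ vowel: an impulse train at the fundamental frequency is passed through cascaded two-pole resonators placed at the formant frequencies.

```python
# Crude formant-synthesis sketch: glottal impulse train -> cascaded
# digital resonators at F1-F3. Formant values approximate /a/ and are
# illustrative, not calibrated.
import numpy as np
from scipy.signal import lfilter

def resonator(x, freq, bw, fs):
    r = np.exp(-np.pi * bw / fs)                 # pole radius from bandwidth
    a = [1.0, -2.0 * r * np.cos(2 * np.pi * freq / fs), r * r]
    return lfilter([1.0 - r], a, x)              # two-pole digital resonator

fs, f0, dur = 16000, 120, 0.5
src = np.zeros(int(fs * dur))
src[::fs // f0] = 1.0                            # impulse train at F0
voice = src
for f, bw in [(730, 90), (1090, 110), (2440, 170)]:  # approx. F1-F3 of /a/
    voice = resonator(voice, f, bw, fs)
```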
ARTICULATORY SYNTHESIS:
         Articulatory synthesis refers to computational techniques for
synthesizing speech based on models of the human vocal tract and the
articulation processes occurring there. Until recently, articulatory synthesis
models have not been incorporated into commercial speech synthesis
systems.

HMM BASED SYNTHESIS:
HMM-based synthesis is a synthesis method based on hidden Markov
models, also called Statistical Parametric Synthesis. In this system, the
frequency spectrum (vocal tract), fundamental frequency (vocal
source), and duration (prosody) of speech are modeled simultaneously by
HMMs. Speech waveforms are generated from HMMs themselves based on
the maximum likelihood criterion.
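A full HMM synthesizer is far more involved, but the generation step can be caricatured as below: with invented per-state means and no dynamic features, the maximum-likelihood parameter trajectory is simply each state's mean held for its mean duration.

```python
import numpy as np

# (mean duration in frames, mean log-F0) per state -- invented numbers.
states = [(8, np.log(110.0)), (12, np.log(130.0)), (6, np.log(105.0))]

# Without dynamic (delta) features, the ML trajectory is each state's
# mean repeated for its mean duration; a real system also generates
# spectral parameters and smooths the trajectory with delta constraints.
lf0 = np.concatenate([np.full(dur, mean) for dur, mean in states])
f0_track = np.exp(lf0)       # per-frame F0 that would drive a vocoder
```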
SINE WAVE SYNTHESIS:
       Sine wave synthesis is a technique for synthesizing
speech by replacing the formants (main bands of energy) with
pure tone whistles.
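A minimal sketch, assuming made-up gliding formant tracks: each formant is replaced by a pure tone whose frequency follows the track, with frequency integrated to phase so the tones glide smoothly.

```python
import numpy as np

fs, dur = 16000, 0.6
n = int(fs * dur)
# Illustrative gliding formant tracks in Hz; real sine-wave speech uses
# tracks measured from a recorded utterance.
tracks = [np.linspace(730, 300, n),    # F1
          np.linspace(1090, 870, n),   # F2
          np.linspace(2440, 2240, n)]  # F3
# Integrate frequency to phase so each tone varies smoothly over time.
x = sum(np.sin(2 * np.pi * np.cumsum(f) / fs) for f in tracks) / 3.0
```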
BLOCK DIAGRAM:

[Block diagram shown in the original slide.]
FEATURES:
       AUDIO SPOTLIGHT:
The Audio Spotlight transmitter generates a column of sound only three to
five degrees wider than the transmitter itself. It converts ordinary
audio into high-frequency ultrasonic signals that are outside the range of
normal hearing. As these ultrasonic waves travel out from the source, the
nonlinearity of the air demodulates them into audible sound.
Sound field distribution is shown with equal-loudness contours for a
standard 1 kHz tone. The center area is loudest, at 100% amplitude, while
the sound level just outside the illustrated beam area is less than 10%.
Audio Spotlight systems are much more sensitive to listener distance than
traditional loudspeakers; maximum performance is attained at roughly
1–2 m (3–6 feet) from the listener.
Typical levels are 80 dB SPL at 1 kHz for the AS-16 and 85 dB SPL for the AS-24
model. The larger AS-24 can output about twice the power and twice the
low-frequency range.
This simulation uses a fixed source size (0.4 m / 16″) with varying wavelength.
From the statements above, we expect an omnidirectional response for a
wavelength that is large relative to the source, and higher directivity as the
wavelength decreases.
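To make this concrete, the sketch below idealizes the transmitter as a circular piston of 0.4 m diameter and evaluates the classic piston directivity pattern |2·J1(ka·sinθ)/(ka·sinθ)| at an audible and an ultrasonic frequency; the much narrower beam at 40 kHz illustrates why the audio is carried on ultrasound.

```python
# Directivity of an ideal circular piston vs. frequency (an idealization
# of the 0.4 m transmitter, not a model of any specific product).
import numpy as np
from scipy.special import j1

a = 0.2                                  # piston radius for a 0.4 m source (m)
c = 343.0                                # speed of sound in air (m/s)
theta = np.radians(np.linspace(0.1, 90.0, 2000))
for freq in (1_000, 40_000):             # audible tone vs. ultrasonic carrier
    k = 2 * np.pi * freq / c             # wavenumber (lambda = c / freq)
    x = k * a * np.sin(theta)
    D = np.abs(2 * j1(x) / x)            # far-field amplitude pattern
    half = np.degrees(theta[D >= 0.5][-1])
    print(f"{freq} Hz: beam half-angle (-6 dB) ~ {half:.1f} deg")
```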
METHODS OF PRODUCING


ELECTROMYOGRAPHY    IMAGE PROCESSING
ELECTROMYOGRAPHY:
        It is a technique for evaluating and recording the electrical activity
produced by skeletal muscles. EMG is performed using an instrument
called an electromyograph, to produce a record called an
electromyogram. An electromyograph detects the electrical potential
generated by muscle cells when these cells are electrically or
neurologically activated.
Electromyographic sensors attached to the face record the electrical signals
produced by the facial muscles and compare them with pre-recorded signal
patterns of spoken words.

When there is a match, that sound is transmitted to the other end of the
line, and the person at the other end hears the spoken words.
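A minimal sketch of that matching step, with a placeholder feature (a rectified, frame-averaged envelope) standing in for real EMG processing; the template names are hypothetical.

```python
import numpy as np

def emg_features(signal, n_frames=20):
    """Rectified, frame-averaged EMG envelope as a crude feature vector."""
    frames = np.array_split(np.abs(np.asarray(signal, float)), n_frames)
    feats = np.array([f.mean() for f in frames])
    return feats / (np.linalg.norm(feats) + 1e-9)

def match_word(signal, templates):
    """Return the stored word whose template correlates best with the input."""
    feats = emg_features(signal)
    return max(templates, key=lambda word: float(feats @ templates[word]))

# templates = {"hello": emg_features(rec_hello), "yes": emg_features(rec_yes)}
# word = match_word(live_emg, templates)   # this word is then synthesized
```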
For such an interface, four kinds of transducers are used:
   1. Vibration sensors
   2. Pressure sensors
   3. Electromagnetic sensors
   4. Motion sensors


IMAGE PROCESSING:
•The simplest form of image processing converts the data tape into a film image
with minimal corrections and calibrations.
Digital data
   → Pre-processing
   → Feature extraction
   → Image enhancement (for manual interpretation) or selection of training data
   → Decision and classification (supervised or unsupervised), aided by ancillary data
   → Classification output
   → Post-processing operations
   → Accuracy assessment
   → Maps and imagery, reports, data
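As a small illustration of the decision/classification stage, the sketch below labels the same pixel-feature matrix both ways: supervised (using selected training data) and unsupervised (clustering alone). The data, band count, and class count are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

pixels = np.random.rand(1000, 4)           # e.g. 4 spectral bands per pixel
train_idx = np.arange(50)                  # pixels with analyst-chosen labels
train_lbl = np.random.randint(0, 3, 50)    # 3 hypothetical classes

# Supervised branch: uses the selected training data.
sup_map = KNeighborsClassifier(n_neighbors=5).fit(
    pixels[train_idx], train_lbl).predict(pixels)

# Unsupervised branch: clusters first; classes are named afterwards.
unsup_map = KMeans(n_clusters=3, n_init=10).fit(pixels).labels_
```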
As we know, in space there is no medium for sound to travel; therefore this
technology can be put to good use by astronauts.

We can make silent calls even if we are standing in a crowded place.

This technology is helpful for people without vocal cords or those who are
suffering from aphasia (a speech and language disorder).

This technology can be used for communication in noisy environments.

Telling a secret PIN or credit card number over the phone becomes safe, as
no one can eavesdrop anymore.

Since the electrical signals are universal, they can be translated into any
language before being sent to the other side. Hence the speech can be delivered
in any language of choice, currently German, English and French.
Translation works for the majority of languages, but in tonal languages such as
Chinese, different tones carry different meanings while the facial movements
remain the same. Hence this technology is difficult to apply in such situations.

From a security point of view, recognizing who you are talking to becomes complicated.

The device also cannot differentiate between speakers or convey emotion. This
means you will always feel you are talking to a robot.

This device presently needs nine leads to be attached to the face, which is quite
impractical for everyday use.
•Silent sound technology opens the way to a bright future for speech recognition
  technology: from simple voice commands to memoranda dictated over the phone,
  all of this becomes feasible in noisy public places.

•In future, the electrodes need not hang all around the face; they could be made
  small enough to be built into the handset itself.

• It may have features like lip reading based on image recognition and processing
       rather than electromyography.
• Nanotechnology will be a notable step towards making the device handy.
•Engineers claim that the device works with 99 percent efficiency.


•It is difficult to compare SSI technologies directly in a meaningful way. Since many
  of the systems are still preliminary, it would not make sense, for example, to
  compare speech recognition scores or synthesis quality at this stage.

•With a few abstractions, however, it is possible to shed light on the range of
  applicability and the potential for future commercialization of the different
  methods.