International Journal of VLSI design & Communication Systems (VLSICS) Vol.4, No.1, February 2013
DOI : 10.5121/vlsic.2013.4103
MATRIX CODE BASED MULTIPLE ERROR
CORRECTION TECHNIQUE FOR N-BIT MEMORY
DATA
Sunita M.S.1 and Kanchana Bhaaskaran V.S.2
1 VIT University, Chennai Campus, Chennai, India / PESIT, Bangalore, India
sunitha@pes.edu
2 VIT University, Chennai Campus, Chennai, India
kanchana.vs@vit.ac.in
ABSTRACT
Constant shrinkage in device dimensions has resulted in very dense memory cells, in which the
probability of multiple-bit errors is much higher. Conventional Error Correcting Codes (ECC)
cannot correct multiple errors in memories, even though many of them are capable of detecting
multiple errors. This paper presents a novel decoding algorithm, based on Matrix Codes, to detect
and correct multiple errors in memory. The algorithm can correct a maximum of eleven errors in
32-bit data and a maximum of nine errors in 16-bit data. The proposed method can be used to
improve the memory yield in the presence of multiple-bit upsets. It can also be applied to correct
burst errors, wherein a continuous sequence of data bits is affected when high-energy particles
from external radiation strike the memory and cause soft errors. The proposed technique performs
better than the previously known technique of error detection and correction using Matrix Codes.
KEYWORDS
Memory testing, Error correction codes, Matrix codes, Multiple error detection, Multiple error correction.
1. INTRODUCTION
Embedded memories play an important role in the semiconductor market because the system-on-
chip market is booming and almost every system chip contains some type of embedded memory.
Embedded memories are predicted to occupy more than 90% of the system chip area within the
next few years. High density, low voltage levels, small feature sizes and small noise margins
make memory chips increasingly susceptible to faults or soft errors [1]. Errors introduced by
external radiation or electrical noise, rather than by design or manufacturing defects, are known
as soft errors. They are caused by high-energy neutrons and alpha particles hitting the silicon
bulk, producing a large number of electron-hole pairs. The accumulated charge may be sufficient
to flip the value stored in a cell, causing a bit inversion and hence a soft error [2]. The effects of
radiation are thus bit-flips in the information stored in memory elements. Due to the relentless
shrinkage in device dimensions, particles that were once considered negligible are now proving
significant enough to cause upsets [3]. Such errors are termed soft errors since, although they
corrupt the value stored in the cell, they do not permanently damage the hardware.
Soft errors can be either single-event upsets (SEU), where an ionizing particle affects a single bit,
or multiple-bit upsets (MBU), where more than one bit is upset during a single event.
A burst error can be defined as an error pattern, generally in a binary signal, that consists of known
positions where the digit is in error (the first and the last), with the intervening digits possibly in
error and possibly not. By implication, the digits before the first error in the block and after the
last error in the block are correct [4]. Burst errors occur during short intervals of time and hence
corrupt a set of adjacent bits in that duration. Depending on the underlying technology and the
incident particle, several types of multiple-bit errors are possible [5][6]. It has been shown that
incident neutron particles can react with die contaminants and generate secondary particles with
enough energy to create multiple errors.
Testing embedded memory is, in general, more difficult than testing stand-alone memory unless
built-in self-diagnosis techniques are used. Some of the common approaches to protecting
memories are:
1) Built-in Current Sensors (BICS), which detect the occurrence of errors by detecting
changes in current. The sensors are placed in the columns of the memory blocks and
detect unexpected current variations on each of the memory bit positions [7]. A BICS
consumes very little power during testing and no power when testing is finished.
Furthermore, it can screen out defects that escape other test methods and is very
effective in defect diagnosis. However, it has the drawback that it can only detect
errors; it has no error correction capability.
2) Built-in Self-Test (BIST), which uses various algorithms such as the March pattern,
pseudo-random patterns and MATS patterns to test the functionality of the RAMs [8].
They not only detect the presence of faults, but also specify their locations for repair.
Although very effective in the functional testing of RAMs and subsequent error
detection, they have no error correction capability.
3) Built-in Self Repair (BISR)/Built-in Redundancy Analysis (BIRA) is an extension of
BIST. It uses the Replacement Algorithm, wherein the cells of a memory identified as
defective by process of BIST are corrected by replacing the corresponding row/column by
spare rows/columns [9][10]. Though it makes repairing the faulty cells easier, this
approach is inefficient because more redundant rows and columns are required to
achieve sufficient chip yield.
4) Design-for-test (DFT) techniques are aids to enable detection of defects. A DFT
technique involves modifying a memory design to make it testable.
5) Interleaving in the physical arrangement of memory cells, such that the cells belonging to
the same logical word are separated. This can prevent MBUs since the physically adjacent
bits that are affected by MBU belong to different words. This causes single errors in
different words, which can be easily detected and corrected. However, interleaving can
have impact on floor-planning, the access time and power consumption [11].
6) Error Correcting Codes (ECC): the most common approach to maintaining a good level
of reliability. ECC techniques are well understood and relatively inexpensive in terms
of the extra circuitry required.
The rest of the paper is organized as follows. Section 2 provides a brief survey on the various
error-correcting codes used with memory. Section 3 describes the Matrix Codes and the algorithm
used for error detection and correction. The proposed architecture is explained in Section 4. The
implementation method is explained in Section 5. Section 6 presents the results and discussion.
Section 7 concludes the paper. Finally, Section 8 provides an insight into the future work in
progress.
2. SURVEY OF VARIOUS ECC SCHEMES USED FOR MEMORY
There are various Error Correcting Codes used for error detection and correction in memory.
Hamming Codes are largely used to correct SEUs in memory due to their ability to correct single
errors with reduced area and performance overhead [12]. Though excellent for the correction of
single errors in a data word, they cannot correct the double-bit errors caused by a single event upset. An
extension of the basic SEC-DED Hamming Code has been proposed to form a special class of
codes known as Hsiao Codes to improve the speed, cost and reliability of the decoding logic [13].
One more class of SEC-DED codes, known as single-error-correcting, double-error-detecting,
single-byte-error-detecting (SEC-DED-SBD) codes, was proposed to detect any number of errors
affecting a single byte. These codes are more suitable than conventional SEC-DED codes for
protecting byte-organized memories [14][15]. Though they operate with lower overhead and are
good for multiple error detection, they cannot correct multiple errors.
There are additional codes such as the single-byte-error-correcting, double-byte-error-detecting
(SBC-DBD) codes, double-error-correcting, triple-error-detecting (DEC-TED) codes that can
correct multiple errors as discussed in [9]. The Single-error-correcting, Double-error-detecting
and Double-adjacent-error-correcting (SEC-DED-DAEC) code provides a low cost ECC
methodology to correct adjacent errors as proposed in [11]. The only drawback with this code is
the possibility of miscorrection for a small subset of multiple errors.
The Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes are capable of
detecting and correcting multiple bytes of errors with low overhead. However, they work at the
block level and are normally applied to multiple words at a time [11]. Hsiao et al. [16] also
proposed a new class of multiple error correcting codes called Orthogonal Latin Square Codes,
which belong to the class of one-step-decodable majority codes and can be decoded at an
exceptionally high speed.
The matrix codes combine the Hamming and parity codes to improve reliability and yield of
memory chips even in the presence of high defects and multiple bit upsets [17]. This paper
presents a new decoding algorithm to detect and correct multiple errors using Matrix Codes.
3. MATRIX CODES
The n-bit data word is stored in a matrix format such that n = k1 × k2, where k1 and k2 represent
the number of rows and columns respectively. For each of the k1 rows, check bits are added, and
another k2 bits are added as vertical parity bits. The technique is explained by considering a data
word length of 32 bits. The 32-bit word is stored in a 4×8 matrix with 4 rows and 8 columns, i.e.,
k1 = 4 and k2 = 8, as shown in Figure 1.
Figure 1. 32-bit logical organization of MC’s
X0 to X31 are the data bits, C0 to C19 are the horizontal check bits, and P0 to P7 are the vertical
parity bits. A Hamming code is applied to each row. Since 5 check bits are required for 8 data
bits, these are added at the end of each row.
The check bits are calculated as follows:

C0 = X0 ⊕ X1 ⊕ X3 ⊕ X4 ⊕ X6
C1 = X0 ⊕ X2 ⊕ X3 ⊕ X5 ⊕ X6
C2 = X1 ⊕ X2 ⊕ X3 ⊕ X7
C3 = X4 ⊕ X5 ⊕ X6 ⊕ X7
C4 = X0 ⊕ X1 ⊕ X2 ⊕ X3 ⊕ X4 ⊕ X5 ⊕ X6 ⊕ X7

Accordingly, all the check bits are calculated for all the rows using the formulae
Cnew = C(j + cb×r) and Xnew = X(i + k2×r), where cb is the number of check bits per row, r is
the row number from 0 to 3, j is the corresponding check bit's position in the first row and i is
the corresponding data bit's position in the first row.

For the parity row, we use the following formula:

Pl = Xl ⊕ X(l+8) ⊕ X(l+16) ⊕ X(l+24)

where l is the column number from 0 to 7 for the eight parity bits.
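As a concrete illustration of the encoding formulae above, the following Python sketch computes the twenty check bits and eight parity bits for a 32-bit word given as a list of bits. (The paper's implementation is in VHDL; this software model, including the function name `encode32`, is only an illustrative assumption.)

```python
def encode32(x):
    """Generate check bits C0..C19 and parity bits P0..P7 for 32 data bits."""
    assert len(x) == 32
    c = []
    for r in range(4):                      # one Hamming code per row
        d = x[8 * r: 8 * r + 8]             # row r holds X(i + k2*r), k2 = 8
        c += [d[0] ^ d[1] ^ d[3] ^ d[4] ^ d[6],                       # C(0 + 5r)
              d[0] ^ d[2] ^ d[3] ^ d[5] ^ d[6],                       # C(1 + 5r)
              d[1] ^ d[2] ^ d[3] ^ d[7],                              # C(2 + 5r)
              d[4] ^ d[5] ^ d[6] ^ d[7],                              # C(3 + 5r)
              d[0] ^ d[1] ^ d[2] ^ d[3] ^ d[4] ^ d[5] ^ d[6] ^ d[7]]  # C(4 + 5r)
    # vertical parity: Pl = Xl xor X(l+8) xor X(l+16) xor X(l+24)
    p = [x[l] ^ x[l + 8] ^ x[l + 16] ^ x[l + 24] for l in range(8)]
    return c, p
```

For the all-ones word, for example, each row yields the check bits 1, 1, 0, 0, 0 (C0 and C1 cover five data bits each, the rest an even number), and every vertical parity bit is 0.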
______________________________________________________________________________________________
Algorithm for Error Correction using Matrix Codes
1) Read the saved data bits X as well as the saved check bits C and parity bits P
corresponding to the data word.
2) Generate the check bits using saved data bits (C’0 to C’19).
3) Generate the syndrome bits of check bits by XORing the original check bits (C0 to C19)
with the newly generated check bits (C’0 to C’19). The syndrome bits are (SC0 to SC19).
4) Generate the SED (single error detection) and NE (no error) signals for each row by
checking whether the syndrome check bit SC(r*5+4) = 1, where r = 0, 1, 2, 3 is the row
number. If there is a single error in any row, the corresponding syndrome bit for that
row goes high. If none of the syndrome bits is high, the NE signal is generated.
5) Correct the detected single errors as follows:
If SC0*SC1*SC2 = 1, then X3 is in error;
Else if SC0*SC1*SC3 = 1, then X6 is in error;
Else if SC0*SC1 = 1, then X0 is in error;
Else if SC0*SC2 = 1, then X1 is in error;
Else if SC1*SC2 = 1, then X2 is in error;
Else if SC1*SC3 = 1, then X5 is in error;
Else if SC0*SC3 = 1, then X4 is in error;
Else if SC2*SC3 = 1, then X7 is in error.
Accordingly, all single errors in each of the rows are corrected.
6) Next, generate the MED (Multiple error detection) signal for each row from the
syndrome check bits as follows
If (SC(r*5) OR SC(r*5+1) OR SC(r*5+2) OR SC(r*5+3) OR SC(r*5+4)) =1 then
MEDr =1;
where MEDr is the MED signal corresponding to row r.
7) In addition, generate the parity bits (P’0 to P’7).
8) Generate the syndrome bits of the parity bits by XORing the original parity bits (P0 to P7)
with the newly generated parity bits (P’0 to P’7). The syndrome bits are (SP0 to SP7).
9) Using the parity syndrome bits, correct the multiple errors in a row as follows:
Xi_corr = Xi ⊕ (MEDr * SPl)
where SPl is the syndrome of the parity bit corresponding to column l of bit i.
10) Output the corrected word.
______________________________________________________________________________________________
4. PROPOSED ARCHITECTURE
In this section, we present the block schematic employed for implementing the algorithm
described in this paper. Figure 2 shows the block diagram of Memory Architecture for Error
Detection and Correction. During the memory write operation, the encoder generates the check
bits and the parity bits from the data bits. The check bits and the parity bits are stored in the check
bit memory while the data is stored in the data memory.
During the memory read operation, the check bits and the parity bits are retrieved along with the
data bits. New check bits and parity bits are internally generated in the decoder from the data bits.
These new check bits are compared with the stored check bits by an Exclusive-OR operation to
generate the syndrome bits. To determine whether the data word is corrupted or not, the decoder
generates the error signals NE, SED and MED using the syndrome bits. The errors, if any, are
corrected and the corrected data is given out of the decoder.
Figure 2. Memory architecture for error detection and correction system
5. IMPLEMENTATION
The method described in the previous section was coded in VHDL. The design was simulated
using the Xilinx ISim simulator for both 16-bit and 32-bit data, and tested for correct
functionality by applying various inputs through test benches. The architectures were synthesized
on a Spartan-6 FPGA, which uses a 45nm low-power copper process technology. These devices
have half the power consumption of, and are much faster than, the previous Spartan families. The
LUTs used are dual-register 6-input LUTs. The Xilinx XPower Analyzer tool was used to
estimate the power consumption.
6. RESULTS
Figure 3 shows the simulator outputs during a read operation. Here, ‘x’ is the 32-bit data, whose
value is FFFF FFFFH in the waveforms shown. ‘er’ is the same data with 11 errors, in bit
positions 0, 14, 20, 24, 25, 26, 27, 28, 29, 30 and 31. Thus, there is one error in each of rows 0, 1
and 2, and 8 errors in
row 3. On the positive edge of the read signal, the data is read and the check bits and parity bits
are recalculated. Upon calculation of the syndrome bits, the ‘NE’, ‘SED’ and ‘MED’ signals are
determined. The waveforms show that NE = 0000, SED = 0111 and MED = 1000, implying that
there are three single errors in the first three rows and multiple errors in the last row. Finally,
after error correction, the corrected data realizes the value FFFF FFFFH.
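The error pattern in this experiment is easy to verify by counting the injected errors per matrix row; bit position b lies in row b // 8 for the 4×8 organization. (A small illustrative Python check, not part of the paper's VHDL test bench.)

```python
# Bit positions of the 11 injected errors from the Figure 3 experiment.
errors = [0, 14, 20, 24, 25, 26, 27, 28, 29, 30, 31]
# Count how many injected errors fall in each of the four matrix rows.
errors_per_row = [sum(1 for b in errors if b // 8 == r) for r in range(4)]
print(errors_per_row)  # rows 0-2 hold one error each; row 3 holds eight
```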
Figure 3. Simulated output for Memory Read and Correct
The same algorithm was applied to 16-bit data. Table 1 compares the results obtained for 16-bit
data and 32-bit data.
Parameter                      16-bit data   32-bit data
No. of errors corrected        9             11
No. of redundant bits          18            28
Area (No. of 6-input LUTs)     86            192
Power consumed (mW)            54            54

Table 1. Comparison of results
The number of redundant bits per data bit is found to be 1.125 (18/16) for 16-bit data and 0.875
(28/32) for 32-bit data. Thus, the number of redundant bits per data bit decreases as the data size
increases. Furthermore, the major component of power dissipation is leakage, which is found to
be the same for any data size. The quiescent current was found to be 37mA.
A new performance metric, namely Correction efficiency, is defined to compare the efficiencies
of the two codes.
Correction efficiency = (Number of errors corrected / (Area × Number of redundant bits per data bit)) × 100%

where area is defined as the number of LUTs required.

For 16-bit data, the correction efficiency is found to be 9.3%, while for 32-bit data it is found to
be 6.54%.
Thus it is seen that the efficiency decreases as the number of data bits increases:

Correction efficiency ∝ 1 / (Number of data bits)
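The figures quoted above follow directly from the entries of Table 1; a quick illustrative computation in Python (the function name is a hypothetical helper, not from the paper):

```python
def correction_efficiency(errors_corrected, area_luts, redundant_bits, data_bits):
    # Area is the number of LUTs; redundant bits are normalized per data bit.
    redundant_per_data_bit = redundant_bits / data_bits
    return errors_corrected / (area_luts * redundant_per_data_bit) * 100

print(round(correction_efficiency(9, 86, 18, 16), 1))    # 16-bit data: 9.3 (%)
print(round(correction_efficiency(11, 192, 28, 32), 1))  # 32-bit data: 6.5 (%)
```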
The simulation results are shown in the charts in Figure 4.
Figure 4. Comparison of results for 16-bit data and 32-bit data
The results obtained with the proposed technique are also compared with those obtained with the
existing technique of [17] for 32-bit data, as shown in Table 2.
Parameter                      Matrix Codes (Existing)   Matrix Codes (Proposed)
Number of redundant bits       28                        28
Number of errors corrected     2                         11
Power consumed                 207.9mW                   54mW

Table 2. Comparison of results with the existing technique
It can be noted that though the overhead (the number of redundant bits) remains the same, there is
a considerable increase in the number of errors corrected and a significant decrease in the power
consumed. Thus, the correction efficiency of the proposed technique is better than that of the
existing technique using Matrix Codes.
7. CONCLUSION
This paper presented a new technique for multiple error detection and correction in memories,
based on the matrix codes. The proposed algorithm can detect and correct multiple errors more
efficiently than the earlier known technique. In the method proposed in [17], a maximum of three
errors could be corrected wherein two errors occur in a single row and one error in any of the
other rows. The technique proposed in this paper can correct up to eight errors in one row and a
single error in any of the other rows. The algorithm presented in [17] is simpler, in that both
single and double errors are corrected in a single step. The proposed algorithm is more efficient
than the one presented in [17] because it uses a two-step approach to correct errors: in the first
step, all single errors are corrected, and in the second step, multiple errors, if present in any row,
are corrected. It can correct any subset of eight errors in a row as long as the other rows have
only a single error each. The only drawback is that when multiple errors occur in multiple rows,
only a few of the errors are corrected and the others remain uncorrected.
8. FUTURE WORK
This work is being extended to apply the proposed technique to a 1KB memory. The results
obtained for the 1KB memory will be compared with those obtained by applying a few other
ECC techniques to the same memory. The comparison will be based on area, correction
efficiency, performance and power.
REFERENCES
[1] ITRS 2002. [Online]. http://public.itrs.net.
[2] P.Shivkumar, M.Kristler, S.W. Keckler, D. Burger and L.Alvisi, “Modeling the effect of technology
trends on the soft error rate of combinational logic”. Proc. of the Int. Conf. on Dependable systems
and Networks, pp.389-398, 2002.
[3] P. Hazucha, C. Svenson, “Impact of CMOS technology scaling on the atmospheric neutron soft error
rate”, IEEE Trans. on Nuclear Science, Vol. 47, no.6, pp. 2586-2594, Dec 2000.
[4] John Daintith. “A Dictionary of Computing”, 2004.
[5] Satoh S, Y.Tosaka, S.A.Wender, “Geometric effect of Multiple-bit Soft Errors Induced by Cosmic-
ray Neutrons on DRAMs”, Proc. of IEEE Int’l Electronic Device Meeting, pp.310-312, Jun 2000.
[6] Makhira.A, et al., “Analysis of Single-Ion Multiple-Bit Upset in High-Density DRAMS”, IEEE
Trans. On Nuclear Science, Vol. 47, No.6, Dec.2000.
[7] M. Nicolaidis, F.Vargas and B. Coutois, “Design of Built-in current sensors for concurrent checking
in radiation environments”, IEEE Trans. Nucl.Sci. vol.40, No.6, pp. 1584-1590, Dec. 1993.
[8] R. Dean Adams, “High Performance Memory Testing: Design Principles, Fault Modeling and Self-
Test”, Kluwer Academic Publishers, USA, 2003.
[9] D.K. Bhavsar “ An algorithm for row-column self-repair of RAM’s and its implementation in the
ALPHA 21264” Proc. Int. Test Conf, pp. 311-318, 1999.
[10] S.K. Lu and S.C Huang, “Built-in self-test and repair (BISTR) Techniques for Embedded RAM’s”,
Proc, Int. Workshop on Memory Technology, Design and Testing”, pp. 60-64, Aug 2004.
[11] A.Dutta, N.A. Touba, “Multiple bit upset tolerant memory using a selective cycle avoidance based
SEC-DED-DAEC code” Proc. IEEE VLSI Test Symposium (VTS), 2007, pp. 349-354.
[12] A.D Houghton “The Engineer’s Error Coding Handbook” London, UK; Chapman and Hall, 1997.
[13] Hsiao M.Y, “A class of Optimal Minimum Odd-weight-column SEC-DED codes”, IBM Journal of
Research and Development, Vol. 14, pp. 395-401, 1970.
[14] Reddy S.M., “A class of linear codes for error control in Byte-per-Package Organized Memory
Systems” IEEE trans. On Computers, Vol. C-27, pp. 455-458, May 1978.
[15] Chen C.L, “Error Correcting Codes with Byte Error Detection Capability” IEEE Trans. On
Computers, Vol. C-32, pp. 615-621, May 1983.
[16] Hsiao M.Y, Bossen D.C, Chien R.T, “Orthogonal Latin Square Codes”, IBM Journal of Research and
Development, Vol. 14, pp. 390-394, 1970.
[17] Argyrides et al., “Matrix Codes for Reliable and Cost Efficient Memory Chips”, IEEE Trans. on
VLSI Systems, Vol. 19, pp. 420-428, March 2011.
Authors
Sunita M.S obtained her undergraduate degree from Bangalore University, received
her M.Sc (physics) degree from Bangalore University and is currently pursuing her
M.S (by Research) at VIT University, Chennai Campus, Chennai, India. She has 22
years of teaching experience and is currently working as an Associate Professor in the
Department of Electronics and Communication Engineering, P.E.S Institute of
Technology, Bangalore, India.
V.S. Kanchana Bhaaskaran obtained her undergraduate degree from the Institution of
Engineers (India), received her M.S. degree in Systems and Information from Birla
Institute of Technology and Science, Pilani, and her PhD degree in the field of Low
Power Design of VLSI Circuits from VIT University. She is a Fellow of the
Institution of Engineers (India) and a Fellow of the Institution of Electronics and
Telecommunication Engineers and Member of IEEE and IET. She has 34 years of
industry, research and teaching experience through serving the Department of
Employment and Training, Government of Tamil Nadu, Indian Institute of Technology Madras, Salem
Cooperative Sugar Mills’ Polytechnic College, SSN College of Engineering and currently she serves as the
Professor and Dean of the School of Electronics Engineering of VIT Chennai, India.

More Related Content

PDF
DESIGN OF SOFT VITERBI ALGORITHM DECODER ENHANCED WITH NON-TRANSMITTABLE CODE...
PDF
PERFORMANCE ANALYSIS OF PARALLEL IMPLEMENTATION OF ADVANCED ENCRYPTION STANDA...
PDF
A BIST GENERATOR CAD TOOL FOR NUMERIC INTEGRATED CIRCUITS
PDF
High Capacity Robust Medical Image Data Hiding using CDCS with Integrity Chec...
PDF
implementation of area efficient high speed eddr architecture
PDF
Compressed Image Authentication using CDMA Watermarking and EMRC6 Encryption
DOCX
ROUGH DOC.437
PDF
Error Detection and Correction in SRAM Cell Using Decimal Matrix Code
DESIGN OF SOFT VITERBI ALGORITHM DECODER ENHANCED WITH NON-TRANSMITTABLE CODE...
PERFORMANCE ANALYSIS OF PARALLEL IMPLEMENTATION OF ADVANCED ENCRYPTION STANDA...
A BIST GENERATOR CAD TOOL FOR NUMERIC INTEGRATED CIRCUITS
High Capacity Robust Medical Image Data Hiding using CDCS with Integrity Chec...
implementation of area efficient high speed eddr architecture
Compressed Image Authentication using CDMA Watermarking and EMRC6 Encryption
ROUGH DOC.437
Error Detection and Correction in SRAM Cell Using Decimal Matrix Code

Similar to MATRIX CODE BASED MULTIPLE ERROR CORRECTION TECHNIQUE FOR N-BIT MEMORY DATA (20)

PDF
Design and Implementation of DMC for Memory Reliability Enhancement
PDF
Testing nanometer memories: a review of architectures, applications, and chal...
PDF
Modifying Hamming code and using the replication method to protect memory aga...
PDF
SELF CORRECTING MEMORY DESIGN FOR FAULT FREE CODING IN PROGRESSIVE DATA STREA...
PDF
Dn4301681689
PPTX
Reliability and yield
PDF
An Efficient Approach Towards Mitigating Soft Errors Risks
PDF
International Journal of Engineering Inventions (IJEI)
PPT
ece552_23_main_memory_ecc.ppt
PDF
Reliability of ECC-based Memory Architectures with Online Self-repair Capabil...
PDF
Built-in Self Repair for SRAM Array using Redundancy
PDF
MODIFIED MARCH C- WITH CONCURRENCY IN TESTING FOR EMBEDDED MEMORY APPLICATIONS
PDF
ECC memory : Notes
PDF
High Performance Error Detection with Different Set Cyclic Codes for Memory A...
PDF
Dx35705709
PDF
Innovative Improvement of Data Storage Using Error Correction Codes
PDF
IRJET- An Efficient and Low Power Sram Testing using Clock Gating
PDF
Fpga implementation of 4 d parity based data coding technique
PDF
Ip2616541659
PDF
Memory built-in self-repair and correction for improving yield: a review
Design and Implementation of DMC for Memory Reliability Enhancement
Testing nanometer memories: a review of architectures, applications, and chal...
Modifying Hamming code and using the replication method to protect memory aga...
SELF CORRECTING MEMORY DESIGN FOR FAULT FREE CODING IN PROGRESSIVE DATA STREA...
Dn4301681689
Reliability and yield
An Efficient Approach Towards Mitigating Soft Errors Risks
International Journal of Engineering Inventions (IJEI)
ece552_23_main_memory_ecc.ppt
Reliability of ECC-based Memory Architectures with Online Self-repair Capabil...
Built-in Self Repair for SRAM Array using Redundancy
MODIFIED MARCH C- WITH CONCURRENCY IN TESTING FOR EMBEDDED MEMORY APPLICATIONS
ECC memory : Notes
High Performance Error Detection with Different Set Cyclic Codes for Memory A...
Dx35705709
Innovative Improvement of Data Storage Using Error Correction Codes
IRJET- An Efficient and Low Power Sram Testing using Clock Gating
Fpga implementation of 4 d parity based data coding technique
Ip2616541659
Memory built-in self-repair and correction for improving yield: a review
Ad

Recently uploaded (20)

PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PPTX
Internet of Things (IOT) - A guide to understanding
PPTX
additive manufacturing of ss316l using mig welding
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PDF
composite construction of structures.pdf
PDF
Digital Logic Computer Design lecture notes
DOCX
573137875-Attendance-Management-System-original
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPT
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
PPTX
OOP with Java - Java Introduction (Basics)
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPTX
Geodesy 1.pptx...............................................
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
PPTX
CH1 Production IntroductoryConcepts.pptx
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PPTX
Welding lecture in detail for understanding
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Internet of Things (IOT) - A guide to understanding
additive manufacturing of ss316l using mig welding
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
composite construction of structures.pdf
Digital Logic Computer Design lecture notes
573137875-Attendance-Management-System-original
R24 SURVEYING LAB MANUAL for civil enggi
CYBER-CRIMES AND SECURITY A guide to understanding
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
OOP with Java - Java Introduction (Basics)
Embodied AI: Ushering in the Next Era of Intelligent Systems
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
Geodesy 1.pptx...............................................
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
CH1 Production IntroductoryConcepts.pptx
UNIT-1 - COAL BASED THERMAL POWER PLANTS
Welding lecture in detail for understanding
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
Ad

MATRIX CODE BASED MULTIPLE ERROR CORRECTION TECHNIQUE FOR N-BIT MEMORY DATA

  • 1. International Journal of VLSI design & Communication Systems (VLSICS) Vol.4, No.1, February 2013 DOI : 10.5121/vlsic.2013.4103 29 MATRIX CODE BASED MULTIPLE ERROR CORRECTION TECHNIQUE FOR N-BIT MEMORY DATA Sunita M.S1 and Kanchana Bhaaskaran V.S2 1 VIT University, Chennai Campus, Chennai, India PESIT, Bangalore, India sunitha@pes.edu 2 VIT University, Chennai Campus, Chennai, India kanchana.vs@vit.ac.in ABSTRACT Constant shrinkage in the device dimensions has resulted in very dense memory cells. The probability of occurrence of multiple bit errors is much higher in very dense memory cells. Conventional Error Correcting Codes (ECC) cannot correct multiple errors in memories even though many of these are capable of detecting multiple errors. This paper presents a novel decoding algorithm to detect and correct multiple errors in memory based on Matrix Codes. The algorithm used is such that it can correct a maximum of eleven errors in a 32-bit data and a maximum of nine errors in a 16-bit data. The proposed method can be used to improve the memory yield in presence of multiple-bit upsets. It can be applied for correcting burst errors wherein, a continuous sequence of data bits are affected when high energetic particles from external radiation strike memory, and cause soft errors. The proposed technique performs better than the previously known technique of error detection and correction using Matrix Codes. KEYWORDS Memory testing, Error correction codes, Matrix codes, multiple error detection, multiple error correction. 1. INTRODUCTION Embedded memories play an important role in the semiconductor market because the system-on- chip market is booming and almost every system chip contains some type of embedded memory. There is a prediction that embedded memories will dominate more than 90% of the system chip area in the next few years. 
High-density, low-voltage levels, small feature size and small noise margins make the memory chips increasingly susceptible to faults or soft errors [1]. Errors introduced due to the external radiation or electrical noise rather than the design or manufacturing defects are known as soft errors. They are caused by high energy neutrons and alpha particles hitting the silicon bulk resulting in the production of large number of electron-hole pairs. The accumulated charge may be sufficient to flip the value stored in a cell thus causing bit inversion, resulting in soft error [2]. Hence the effects of radiation are bit-flips occurring in the information stored in memory elements. Due to the relentless shrinkage in the device dimensions, the particles that were once considered negligible are now proving to be significant enough to cause upsets [3]. Such errors are identified as soft errors since, although they corrupt the value stored in the cell, they do not permanently damage the hardware. Soft errors can be either single-event upset (SEU), where an ionizing particle affects a single bit or multiple-bit upset (MBU), where more than one bit is upset during a single measurement. Burst error can be defined as an error pattern, generally in a binary signal, that consists of known
positions where the digit is in error (first and last), with the intervening digits possibly in error and possibly not. By implication, the digits before the first error in the block and after the last error in the block are correct [4]. Burst errors occur during short intervals of time and hence corrupt a set of adjacent bits in that duration. Depending on the underlying technology and the incident particle, several types of multiple-bit errors are possible [5][6]. It has been shown that incident neutron particles can react with die contaminants and generate secondary particles with enough energy to create multiple errors.

Testing embedded memory is, in general, more difficult than testing stand-alone memory, unless built-in self-diagnosis techniques are used. Some of the common approaches to protecting memories are:

1) Built-in Current Sensors (BICS), which detect the occurrence of errors by sensing changes in current. The sensors are placed in the columns of the memory blocks and detect unexpected current variations on each of the memory bit positions [7]. A BICS consumes very little power during testing and no power once testing is finished. Furthermore, it can screen out defects that escape other test methods and is very effective in defect diagnosis. However, it can only detect errors; it has no error correction capability.

2) Built-in Self-Test (BIST), which uses algorithms such as the March patterns, pseudo-random patterns and MATS patterns to test the functionality of RAMs [8]. These not only detect the presence of faults but also specify their locations for repair. Although very effective for the functional testing of RAMs and the consequent error detection, they have no error correction capability.

3) Built-in Self-Repair (BISR)/Built-in Redundancy Analysis (BIRA), which is an extension of BIST.
It uses the Replacement Algorithm, wherein memory cells identified as defective by BIST are repaired by replacing the corresponding rows/columns with spare rows/columns [9][10]. Though it makes repairing faulty cells easier, this approach is inefficient, since more redundant rows and columns are required to achieve a sufficient chip yield.

4) Design-for-Test (DFT) techniques, which aid the detection of defects. A DFT technique involves modifying a memory design to make it testable.

5) Interleaving in the physical arrangement of memory cells, such that the cells belonging to the same logical word are separated. This can mitigate MBUs, since the physically adjacent bits affected by an MBU then belong to different words. The result is single errors in different words, which can be easily detected and corrected. However, interleaving can have an impact on floor-planning, access time and power consumption [11].

6) Error Correcting Codes (ECC), the most common approach to maintaining a good level of reliability. ECC techniques are well understood and relatively inexpensive in terms of the extra circuitry required.

The rest of the paper is organized as follows. Section 2 provides a brief survey of the various error-correcting codes used with memory. Section 3 describes the Matrix Codes and the algorithm used for error detection and correction. The proposed architecture is explained in Section 4 and the implementation method in Section 5. Section 6 presents the results and discussion. Section 7 concludes the paper. Finally, Section 8 provides an insight into the future work in progress.
2. SURVEY OF VARIOUS ECC SCHEMES USED FOR MEMORY

Various Error Correcting Codes are used for error detection and correction in memory. Hamming Codes are widely used to correct SEUs in memory, owing to their ability to correct single errors with low area and performance overhead [12]. Though excellent for correcting single errors in a data word, they cannot correct the double-bit errors caused by a single event upset. An extension of the basic SEC-DED Hamming Code, known as the Hsiao Codes, has been proposed to improve the speed, cost and reliability of the decoding logic [13]. Another class of SEC-DED codes, the single-error-correcting, double-error-detecting, single-byte-error-detecting (SEC-DED-SBD) codes, was proposed to detect any number of errors affecting a single byte. These codes are more suitable than the conventional SEC-DED codes for protecting byte-organized memories [14][15]. Though they operate with lower overhead and are good for multiple error detection, they cannot correct multiple errors. Further codes, such as the single-byte-error-correcting, double-byte-error-detecting (SBC-DBD) codes and the double-error-correcting, triple-error-detecting (DEC-TED) codes, can correct multiple errors, as discussed in [9]. The single-error-correcting, double-error-detecting and double-adjacent-error-correcting (SEC-DED-DAEC) code provides a low-cost ECC methodology for correcting adjacent errors, as proposed in [11]. Its only drawback is the possibility of miscorrection for a small subset of multiple errors. The Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes are capable of detecting and correcting multiple bytes of errors with low overhead. However, they work at the block level and are normally applied to multiple words at a time [11]. Hsiao et al.
[16] also proposed a new class of multiple-error-correcting codes, the Orthogonal Latin Square Codes, which belong to the class of one-step-decodable majority codes and can be decoded at exceptionally high speed. Matrix Codes combine Hamming and parity codes to improve the reliability and yield of memory chips, even in the presence of high defect rates and multiple-bit upsets [17]. This paper presents a new decoding algorithm to detect and correct multiple errors using Matrix Codes.

3. MATRIX CODES

The n-bit data word is stored in a matrix format such that n = k1 x k2, where k1 and k2 represent the number of rows and columns, respectively. For each of the k1 rows, check bits are added, and another k2 bits are added as vertical parity bits. The technique is explained by considering a data word length of 32 bits. The 32-bit word is stored in a 4x8 matrix with 4 rows and 8 columns, i.e., k1 = 4 and k2 = 8, as shown in Figure 1.

Figure 1. 32-bit logical organization of Matrix Codes

X0 to X31 are the data bits, C0 to C19 are the horizontal check bits and P0 to P7 are the vertical parity bits. Hamming codes are applied to each row. Since 5 check bits are required for 8 data bits, these are added at the end of each row.
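As a small illustrative sketch (a hypothetical helper, not part of the paper's VHDL design), the bit-to-cell mapping implied by this logical organization is:

```python
# Logical organization of an n-bit word as a k1 x k2 matrix (n = k1 * k2).
# For the 32-bit case, k1 = 4 rows and k2 = 8 columns, so data bit X_i
# occupies row i // 8 and column i % 8.
K1, K2 = 4, 8

def cell(i):
    """Return the (row, column) of data bit X_i in the k1 x k2 matrix."""
    return i // K2, i % K2
```

For example, X0 maps to cell (0, 0), X8 to cell (1, 0) and X31 to cell (3, 7).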
The check bits are calculated as follows:

C0 = X0 ⊕ X1 ⊕ X3 ⊕ X4 ⊕ X6
C1 = X0 ⊕ X2 ⊕ X3 ⊕ X5 ⊕ X6
C2 = X1 ⊕ X2 ⊕ X3 ⊕ X7
C3 = X4 ⊕ X5 ⊕ X6 ⊕ X7
C4 = X0 ⊕ X1 ⊕ X2 ⊕ X3 ⊕ X4 ⊕ X5 ⊕ X6 ⊕ X7

Accordingly, the check bits for all the rows are calculated using the formulas

Cnew = Cj+(cb*r) and Xnew = Xi+(k2*r)

where cb is the number of check bits per row, r is the row number (0 to 3), j is the corresponding check bit's position in the first row and i is the corresponding data bit's position in the first row. For the parity row, we use the formula

Pl = Xl ⊕ Xl+8 ⊕ Xl+16 ⊕ Xl+24

where l is the column number, from 0 to 7, for the eight parity bits.

______________________________________________________________________________________________
Algorithm for Error Correction using Matrix Codes

1) Read the saved data bits X as well as the saved check bits C and parity bits P corresponding to the data word.
2) Generate the check bits from the saved data bits (C'0 to C'19).
3) Generate the check-bit syndromes (SC0 to SC19) by XORing the original check bits (C0 to C19) with the newly generated check bits (C'0 to C'19).
4) Generate the SED (single error detection) and NE (no error) signals for each row by checking whether the syndrome check bit SC(r*5+4) = 1, where r = 0, 1, 2, 3 is the row number. If there is a single error in any row, the corresponding syndrome bit for that row goes high. If none of the syndrome bits is high, the NE signal is generated.
5) Correct the detected single errors as follows:
If SC0*SC1*SC2 = 1, then X3 is in error;
else if SC0*SC1*SC3 = 1, then X6 is in error;
else if SC0*SC1 = 1, then X0 is in error;
else if SC0*SC2 = 1, then X1 is in error;
else if SC1*SC2 = 1, then X2 is in error;
else if SC1*SC3 = 1, then X5 is in error;
else if SC0*SC3 = 1, then X4 is in error;
else if SC2*SC3 = 1, then X7 is in error.
Accordingly, all single errors in each of the rows are corrected (for row r, the syndromes SC(r*5) to SC(r*5+3) and the data bits X(r*8) to X(r*8+7) are used in the same way).
6) Next, generate the MED (multiple error detection) signal from the syndrome check bits for each row in which no single error was flagged:
if (SC(r*5) OR SC(r*5+1) OR SC(r*5+2) OR SC(r*5+3) OR SC(r*5+4)) = 1, then MEDr = 1,
where MEDr is the MED signal corresponding to row r.
7) In addition, generate the parity bits (P'0 to P'7) from the data word obtained after the single-error correction of step 5.
8) Generate the parity syndromes (SP0 to SP7) by XORing the original parity bits (P0 to P7) with the newly generated parity bits (P'0 to P'7).
9) Using the parity syndrome bits, correct the multiple errors in a row as follows:
Xicorr = Xi ⊕ (MEDr * SPl)
where r is the row and l the column of bit Xi, and SPl is the parity syndrome corresponding to column l.
10) Output the corrected word.
______________________________________________________________________________________________

4. PROPOSED ARCHITECTURE

In this section, we present the block schematic employed for implementing the algorithm described in this paper. Figure 2 shows the block diagram of the memory architecture for error detection and correction. During a memory write operation, the encoder generates the check bits and the parity bits from the data bits. The check bits and the parity bits are stored in the check-bit memory, while the data is stored in the data memory. During a memory read operation, the check bits and the parity bits are retrieved along with the data bits. New check bits and parity bits are internally generated in the decoder from the data bits. The new check bits are compared with the stored check bits by an Exclusive-OR operation to generate the syndrome bits. To determine whether the data word is corrupted, the decoder generates the error signals NE, SED and MED from the syndrome bits. The errors, if any, are corrected and the corrected data is given out by the decoder.

Figure 2. Memory architecture for error detection and correction system

5. IMPLEMENTATION

The method described in the previous section was coded in VHDL. The design was simulated using the Xilinx ISim simulator for both 16-bit and 32-bit data. It was tested for correct functionality by applying various inputs through test benches.
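The encoding equations of Section 3 and the ten-step, two-pass decoding algorithm can also be captured in a short behavioral model. The following Python sketch is for illustration only, under the assumption that the parity syndrome is taken after single-error correction (the actual implementation is in VHDL, and the function names here are ours):

```python
# Behavioral sketch of the Matrix-Code encoder/decoder for a 32-bit word
# organized as a 4x8 matrix (bit X_i sits in row i // 8, column i % 8).

def encode(x):
    """x: 32 data bits (0/1). Returns (check bits C0..C19, parity bits P0..P7)."""
    C = []
    for r in range(4):
        X = x[8 * r: 8 * r + 8]
        C += [X[0] ^ X[1] ^ X[3] ^ X[4] ^ X[6],           # C0
              X[0] ^ X[2] ^ X[3] ^ X[5] ^ X[6],           # C1
              X[1] ^ X[2] ^ X[3] ^ X[7],                  # C2
              X[4] ^ X[5] ^ X[6] ^ X[7],                  # C3
              X[0] ^ X[1] ^ X[2] ^ X[3] ^ X[4] ^ X[5] ^ X[6] ^ X[7]]  # C4
    P = [x[l] ^ x[l + 8] ^ x[l + 16] ^ x[l + 24] for l in range(8)]
    return C, P

# Step 5's if/else chain: syndrome products, tested in order -> erroneous bit.
DECODE_CHAIN = [((0, 1, 2), 3), ((0, 1, 3), 6), ((0, 1), 0), ((0, 2), 1),
                ((1, 2), 2), ((1, 3), 5), ((0, 3), 4), ((2, 3), 7)]

def decode(x, C, P):
    data = list(x)
    SC = [a ^ b for a, b in zip(C, encode(data)[0])]      # check-bit syndrome
    med_rows = []
    for r in range(4):                                    # pass 1: single errors
        sc = SC[5 * r: 5 * r + 5]
        if sc[4]:                                         # SED flag for row r
            for idx, bit in DECODE_CHAIN:
                if all(sc[i] for i in idx):
                    data[8 * r + bit] ^= 1
                    break
        elif any(sc):                                     # MED flag for row r
            med_rows.append(r)
    SP = [a ^ b for a, b in zip(P, encode(data)[1])]      # parity syndrome,
    for r in med_rows:                                    # after pass 1
        for l in range(8):                                # pass 2: flip columns
            data[8 * r + l] ^= SP[l]                      # whose parity disagrees
    return data
```

Replaying the paper's test case (FFFF FFFFH with errors injected in bit positions 0, 14, 20 and 24 to 31) recovers the original word with this model.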
The architectures were synthesized on a Spartan-6 FPGA, which uses a 45 nm low-power copper process technology. These devices have half the power consumption of, and are much faster than, the previous Spartan families. The LUTs used are dual-register, 6-input LUTs. The Xilinx XPower Analyzer tool was used to estimate the power consumption.

6. RESULTS

Figure 3 shows the simulator outputs during a read operation. Here, 'x' is the 32-bit data, whose value is FFFF FFFFH in the waveforms shown. 'er' is the same data with 11 errors, in bit positions 0, 14, 20, 24, 25, 26, 27, 28, 29, 30 and 31. Thus, there is one error in each of rows 0, 1 and 2, and there are 8 errors in
row 3. On the positive edge of the read signal, the data is read and the check bits and parity bits are recalculated. Upon calculation of the syndrome bits, the NE, SED and MED signals are determined. The waveforms show that NE = 0000, SED = 0111 and MED = 1000, implying that there are single errors in the first three rows and multiple errors in the last row. Finally, after error correction, the corrected data realizes the value FFFF FFFFH.

Figure 3. Simulated output for Memory Read and Correct

The same algorithm was applied to a 16-bit data word. Table 1 compares the results obtained for 16-bit and 32-bit data.

Parameter                      16-bit data    32-bit data
No. of errors corrected        9              11
No. of redundant bits          18             28
Area (no. of 6-input LUTs)     86             192
Power consumed (mW)            54             54

Table 1. Comparison of results

The number of redundant bits per data bit is found to be 1.125 (18/16) for 16-bit data and 0.875 (28/32) for 32-bit data. Thus the number of redundant bits per data bit decreases as the data size increases. Furthermore, the major component of the power dissipation is leakage, which is the same for any data size. The quiescent current was found to be 37 mA.

A new performance metric, the correction efficiency, is defined to compare the efficiencies of the two codes:

Correction efficiency = (Number of errors corrected) / (Area x Number of redundant bits per data bit) x 100%

where the area is defined as the number of LUTs required. For 16-bit data the correction efficiency is found to be 9.3%, while for 32-bit data it is 6.54%. Thus the efficiency decreases as the number of bits increases:

Correction efficiency ∝ 1 / (Number of data bits)
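As a quick arithmetic check, the reported efficiencies can be reproduced from the figures in Table 1 (the function name below is ours, introduced only for this sketch):

```python
def correction_efficiency(errors_corrected, area_luts, redundant_bits, data_bits):
    """Correction efficiency = errors corrected /
    (area x redundant bits per data bit) x 100%."""
    return errors_corrected / (area_luts * (redundant_bits / data_bits)) * 100

eff16 = correction_efficiency(9, 86, 18, 16)    # ~9.3 %
eff32 = correction_efficiency(11, 192, 28, 32)  # ~6.5 %
```

Note that the 16-bit figure of 9.3% follows only with 18/16 = 1.125 redundant bits per data bit.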
The simulation results are compared in the charts of Figure 4.

Figure 4. Comparison of results for 16-bit data and 32-bit data

The results obtained with the proposed technique are also compared with those of the existing technique of [17] for 32-bit data, as shown in Table 2.

Parameter                     Matrix Codes (Existing)    Matrix Codes (Proposed)
Number of redundant bits      28                         28
Number of errors corrected    2                          11
Power consumed (mW)           207.9                      54

Table 2. Comparison of results with the existing technique

It can be noted that, though the overhead (number of redundant bits) remains the same, there is a considerable increase in the number of errors corrected and a significant decrease in the power consumed. Thus the correction efficiency of the proposed technique is better than that of the existing Matrix-Code technique.

7. CONCLUSION

This paper presented a new technique for multiple error detection and correction in memories, based on Matrix Codes. The proposed algorithm can detect and correct multiple errors more efficiently than the earlier known technique. In the method proposed in [17], a maximum of three errors could be corrected, with two errors in a single row and one error in any of the other rows. The technique proposed in this paper can correct up to eight errors in one row along with a single error in each of the other rows. The algorithm presented in [17] is simpler, in that both single and double errors are corrected in a single step. The proposed algorithm is more efficient than the one presented in [17] because it uses a two-step approach: in the first step, all single errors are corrected, and in the second step, any multiple errors present in a row are corrected. It can correct any subset of eight errors in a row as long as the other rows have only a single error.
The only drawback is that, when multiple errors occur in multiple rows, only some of the errors are corrected and the rest remain uncorrected.
8. FUTURE WORK

The work is being extended towards applying the proposed technique to a 1 KB memory. The results obtained for the 1 KB memory will be compared with those obtained by applying a few other ECC techniques to the same memory. The comparison will be based on area, correction efficiency, performance and power.

REFERENCES

[1] ITRS 2002. [Online]. Available: http://public.itrs.net
[2] P. Shivakumar, M. Kistler, S. W. Keckler, D. Burger and L. Alvisi, "Modeling the effect of technology trends on the soft error rate of combinational logic", Proc. of the Int. Conf. on Dependable Systems and Networks, pp. 389-398, 2002.
[3] P. Hazucha and C. Svensson, "Impact of CMOS technology scaling on the atmospheric neutron soft error rate", IEEE Trans. on Nuclear Science, Vol. 47, No. 6, pp. 2586-2594, Dec. 2000.
[4] John Daintith, "A Dictionary of Computing", 2004.
[5] S. Satoh, Y. Tosaka and S. A. Wender, "Geometric effect of multiple-bit soft errors induced by cosmic-ray neutrons on DRAMs", Proc. of IEEE Int'l Electronic Device Meeting, pp. 310-312, Jun. 2000.
[6] A. Makihara et al., "Analysis of Single-Ion Multiple-Bit Upset in High-Density DRAMs", IEEE Trans. on Nuclear Science, Vol. 47, No. 6, Dec. 2000.
[7] M. Nicolaidis, F. Vargas and B. Courtois, "Design of built-in current sensors for concurrent checking in radiation environments", IEEE Trans. on Nuclear Science, Vol. 40, No. 6, pp. 1584-1590, Dec. 1993.
[8] R. Dean Adams, "High Performance Memory Testing: Design Principles, Fault Modeling and Self-Test", Kluwer Academic Publishers, USA, 2003.
[9] D. K. Bhavsar, "An algorithm for row-column self-repair of RAMs and its implementation in the Alpha 21264", Proc. Int. Test Conf., pp. 311-318, 1999.
[10] S. K. Lu and S. C. Huang, "Built-in self-test and repair (BISTR) techniques for embedded RAMs", Proc. Int.
Workshop on Memory Technology, Design and Testing, pp. 60-64, Aug. 2004.
[11] A. Dutta and N. A. Touba, "Multiple bit upset tolerant memory using a selective cycle avoidance based SEC-DED-DAEC code", Proc. IEEE VLSI Test Symposium (VTS), pp. 349-354, 2007.
[12] A. D. Houghton, "The Engineer's Error Coding Handbook", London, UK: Chapman and Hall, 1997.
[13] M. Y. Hsiao, "A class of optimal minimum odd-weight-column SEC-DED codes", IBM Journal of Research and Development, Vol. 14, pp. 395-401, 1970.
[14] S. M. Reddy, "A class of linear codes for error control in byte-per-package organized memory systems", IEEE Trans. on Computers, Vol. C-27, pp. 455-458, May 1978.
[15] C. L. Chen, "Error correcting codes with byte error detection capability", IEEE Trans. on Computers, Vol. C-32, pp. 615-621, May 1983.
[16] M. Y. Hsiao, D. C. Bossen and R. T. Chien, "Orthogonal Latin Square Codes", IBM Journal of Research and Development, Vol. 14, pp. 390-394, 1970.
[17] C. Argyrides et al., "Matrix Codes for Reliable and Cost Efficient Memory Chips", IEEE Trans. on VLSI Systems, Vol. 19, pp. 420-428, March 2011.
Authors

Sunita M.S obtained her undergraduate degree from Bangalore University, received her M.Sc. (Physics) degree from Bangalore University and is currently pursuing her M.S. (by Research) at VIT University, Chennai Campus, Chennai, India. She has 22 years of teaching experience and is currently working as an Associate Professor in the Department of Electronics and Communication Engineering, P.E.S Institute of Technology, Bangalore, India.

V. S. Kanchana Bhaaskaran obtained her undergraduate degree from the Institution of Engineers (India), received her M.S. degree in Systems and Information from the Birla Institute of Technology and Science, Pilani, and received her PhD degree in the field of low-power design of VLSI circuits from VIT University. She is a Fellow of the Institution of Engineers (India), a Fellow of the Institution of Electronics and Telecommunication Engineers, and a Member of the IEEE and the IET. She has 34 years of industry, research and teaching experience, having served the Department of Employment and Training, Government of Tamil Nadu, the Indian Institute of Technology Madras, Salem Cooperative Sugar Mills' Polytechnic College and SSN College of Engineering, and she currently serves as Professor and Dean of the School of Electronics Engineering at VIT Chennai, India.