OPTIMIZING MATERIAL REMOVAL RATE
AND SURFACE ROUGHNESS IN TURNING
OPERATION USING TAGUCHI TECHNIQUE
ADVANCED DESIGN PROJECT II
Prepared by:
RAHUL ROY
BME IV B2
ROLL NO. 101411201053
INDEX

INTRODUCTION
LITERATURE SURVEY
    PROCESS OPTIMIZATION
    TAGUCHI’S METHOD
EXPERIMENTAL METHODS
    EXPERIMENTAL SETUP
    DESIGN OF EXPERIMENTS
SURFACE ROUGHNESS ANALYSIS
MATERIAL REMOVAL RATE ANALYSIS
CONCLUSION
REFERENCES
INTRODUCTION	
The objective of this advanced design project is to obtain an optimal setting of turning parameters (Cutting
speed, Feed and Depth of Cut), which results in an optimal value of material removal rate (MRR) while
machining a cylindrical bar made of mild steel. In our study, an attempt has been made to generate a model to
predict material removal rate using Regression Technique. Also an attempt has been made to optimize the
process parameters using Taguchi Technique. A three level orthogonal array L9 was selected to satisfy the
minimum number of experiment conditions for the factors and levels presented in this project.
For many years it has been recognized that machining conditions such as cutting speed, feed and depth of cut (DOC) should be selected to optimize the economics of machining operations. Manufacturing industries in developing countries suffer from a major drawback of not running their machines at optimal operating conditions, relying instead on the experience and skill of the machine tool operators for the selection of cutting conditions. At the process planning level, the practice of using conservative handbook-based cutting conditions is still common. The disadvantage of this unscientific practice is a loss of productivity due to sub-optimal use of machining capability. The literature survey reveals that several researchers have attempted to determine optimal cutting conditions in turning operations. Armarego and Brown used the maxima/minima concept of differential calculus to optimize machining variables in turning. Brewer and Rueda developed nomograms that assist in the selection of optimum conditions. Other techniques that have been used to optimize machining parameters include goal programming and geometric programming. Nowadays more attention is given to the material removal rate and surface roughness of the product in industry. Material removal rate is an important criterion in determining the machinability of a material, and surface roughness and material removal rate are the major factors needed to predict the machining performance of any machining operation. Most material removal rate prediction models are empirical and are generally based on experiments conducted in the laboratory. It is also difficult in practice to keep all factors under control as required to obtain reproducible results. Optimization of machining parameters improves the machining economics and also increases product quality to a great extent.
Machining is a subtractive manufacturing process that is used for removing material controllably from a piece of
raw material. The objective of this process is to produce final product with proper shape and controlled
dimension from the workpiece (raw material). In order to manufacture product with accurate dimension, proper
tolerance and acceptable surface finish, several things need to be considered during machining such as, tool
material, workpiece material, type of machining process, etc. Relative motion between the tool and the workpiece is the primary cause of material removal in all machining processes; tool shape and tool penetration into the workpiece also determine the geometry produced. This relative motion is achieved by the combined action of a primary motion, the ‘cutting speed’, and a secondary motion, the ‘feed speed’. Modern machining is mostly carried out on computer numerical control (CNC) machines, in which a software-based automatic control system carries out the movement and operation of the machine tools.
Turning is a form of machining, a material removal process, which is used to create rotational parts by cutting
away unwanted material. The turning process requires a turning machine or lathe, workpiece, fixture, and
cutting tool. The workpiece is a piece of pre-shaped material that is secured to the fixture, which itself is
attached to the turning machine, and allowed to rotate at high speeds. The cutter is typically a single-point
cutting tool that is also secured in the machine, although some operations make use of multi-point tools. The
cutting tool feeds into the rotating workpiece and cuts away material in the form of small chips to create the
desired shape. Turning is used to produce rotational, typically axi-symmetric, parts that have many features,
such as holes, grooves, threads, tapers, various diameter steps, and even contoured surfaces. Parts that are
fabricated completely through turning often include components that are used in limited quantities, perhaps for
prototypes, such as custom designed shafts and fasteners. Turning is also commonly used as a secondary process
to add or refine features on parts that were manufactured using a different process. Due to the high tolerances
and surface finishes that turning can offer, it is ideal for adding precision rotational features to a part whose
basic shape has already been formed.
Cutting parameters
In turning, the speed and motion of the cutting tool is specified through several parameters. These parameters
are selected for each operation based upon the workpiece material, tool material, tool size, and more.
• Cutting feed - The distance that the cutting tool or workpiece advances during one revolution of the
spindle, measured in inches per revolution (IPR). In some operations the tool feeds into the workpiece
and in others the workpiece feeds into the tool. For a multi-point tool, the cutting feed is also equal to the
feed per tooth, measured in inches per tooth (IPT), multiplied by the number of teeth on the cutting tool.
• Cutting speed - The speed of the workpiece surface relative to the edge of the cutting tool during a cut,
measured in surface feet per minute (SFM).
• Spindle speed - The rotational speed of the spindle and the workpiece in revolutions per minute (RPM).
The spindle speed is equal to the cutting speed divided by the circumference of the workpiece where the
cut is being made. In order to maintain a constant cutting speed, the spindle speed must vary based on the
diameter of the cut. If the spindle speed is held constant, then the cutting speed will vary.
• Feed rate - The speed of the cutting tool's movement relative to the workpiece as the tool makes a cut.
The feed rate is measured in inches per minute (IPM) and is the product of the cutting feed (IPR) and the
spindle speed (RPM). These relations are illustrated in the sketch after this list.
• Axial depth of cut - The depth of the tool along the axis of the workpiece as it makes a cut, as in a facing
operation. A large axial depth of cut will require a low feed rate, or else it will result in a high load on the
tool and reduce the tool life. Therefore, a feature is typically machined in several passes as the tool moves
to the specified axial depth of cut for each pass.
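As a rough illustration of the spindle speed and feed rate relations above (a sketch in Python, not part of the original report; the function names and sample values are assumptions), the conversions look like this:

import math

def spindle_speed_rpm(cutting_speed_sfm, diameter_in):
    # Spindle speed = cutting speed / workpiece circumference.
    # With cutting speed in surface feet per minute and diameter in inches,
    # the circumference in feet is pi * D / 12.
    return (cutting_speed_sfm * 12.0) / (math.pi * diameter_in)

def feed_rate_ipm(cutting_feed_ipr, spindle_rpm):
    # Feed rate (in/min) = cutting feed (in/rev) * spindle speed (rev/min).
    return cutting_feed_ipr * spindle_rpm

rpm = spindle_speed_rpm(cutting_speed_sfm=100.0, diameter_in=1.0)  # assumed sample values
print(round(rpm), "rpm")
print(round(feed_rate_ipm(0.010, rpm), 2), "in/min")

Note how the computed spindle speed must rise as the diameter of the cut shrinks if a constant cutting speed is to be maintained.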
In simple terms, a lathe machine removes material from a workpiece with the goal of achieving the preferred
size and shape. Ultimately, the machine holds a wood or metal workpiece so that through grooving, chamfering,
facing, turning, forming, and so on, the product comes out to the customer’s specifications. As you can imagine,
there are many reasons to use a lathe machine. For example, furniture manufacturers use this type of machine to
hold pieces of wood in place. Throughout the lathing process, what was once a block of wood is transformed into a finished table leg. The working of a lathe machine also involves metals such as aluminum. In this case,
the machine might hold metal that becomes a shaft for the aerospace or automotive industry. You should also
note that there are different types of lathe machines, each used for a particular purpose. Some examples of these
include a multi-spindle lathe, CNC lathe, a combination lathe, a turret lathe, and a center lathe. While each
machine performs in much the same way, there are differences in parts and uses.
Machining is a term used to describe a variety of material removal processes in which a cutting tool removes
unwanted material from a workpiece to produce the desired shape. The workpiece is typically cut from a larger
piece of stock, which is available in a variety of standard shapes, such as flat sheets, solid bars, hollow tubes,
and shaped beams. Machining can also be performed on an existing part, such as a casting or forging. Parts that
are machined from a pre-shaped workpiece are typically cubic or cylindrical in their overall shape, but their
individual features may be quite complex. Machining can be used to create a variety of features including holes,
slots, pockets, flat surfaces, and even complex surface contours. Also, while machined parts are typically metal,
almost all materials can be machined, including metals, plastics, composites, and wood. For these reasons,
machining is often considered the most common and versatile of all manufacturing processes. As a material
removal process, machining is inherently not the most economical choice for a primary manufacturing process.
Material, which has been paid for, is cut away and discarded to achieve the final part. Also, despite the low
setup and tooling costs, long machining times may be required and therefore be cost prohibitive for large
quantities. As a result, machining is most often used for limited quantities as in the fabrication of prototypes or
custom tooling for other manufacturing processes. Machining is also very commonly used as a secondary
process, where minimal material is removed and the cycle time is short. Due to the high tolerance and surface
finishes that machining offers, it is often used to add or refine precision features to an existing part or smooth a
surface to a fine finish.
As mentioned above, machining includes a variety of processes that each removes material from an initial
workpiece or part. The most common material removal processes, sometimes referred to as conventional or
traditional machining, are those that mechanically cut away small chips of material using a sharp tool. Non-
conventional machining processes may use chemical or thermal means of removing material. Conventional
machining processes are often placed in three categories - single point cutting, multi-point cutting, and abrasive
machining. Each process in these categories is uniquely defined by the type of cutting tool used and the general
motion of that tool and the workpiece. However, within a given process a variety of operations can be
performed, each utilizing a specific type of tool and cutting motion. The machining of a part will typically
require a variety of operations that are performed in a carefully planned sequence to create the desired features.
Main Parts of the Lathe Machine
The following are the primary components associated with a lathe machine.
Apron – Consists of all mechanisms that control and move the carriage
Bed – Main body of the lathe machine onto which all primary components bolt
Carriage – Holds and moves the tool post along and across the bed
Chips Pan – Carries all chips removed from the workpiece
Chuck – Holds the workpiece and is bolted on the spindle that rotates both the chuck and workpiece
Guide Ways – Handles the movement of the carriage and tailstock on the bed
Head Stock – Serves as a holding device for the spindle, gear chain, driving pulley, and more
Lead Screw – Automatically moves the carriage during the thread cutting process
Legs – Carries the load of the machine
Speed Controller – Controls the spindle’s speed
Spindle – Holds and rotates the chuck
Tool Post – Holds the tool at a precise position
Tail Stock – Supports the job as needed and is used for drilling operations
Taguchi proposed off-line quality control for quality improvement, in place of attempting to inspect quality into the product on the production line. He observed that no amount of inspection can put quality back into the product; it merely treats a symptom. Taguchi recommended a three-stage process to achieve the desirable product quality by design: 1) system design, 2) parameter design and 3) tolerance design. Parameter design produces the best performance of the product or process under study; the optimal condition is selected so that the influence of noise factors (uncontrollable factors) causes minimum variation in performance. Orthogonal arrays, variance analysis and signal-to-noise analysis are the essential tools of parameter design. Tolerance design is used to fine-tune the results of parameter design by tightening the tolerances of the parameters with significant influence on the product.
8-STEPS IN TAGUCHI METHODOLOGY:
Step-1: IDENTIFY THE MAIN FUNCTION, SIDE EFFECTS, AND FAILURE MODE
Step-2: IDENTIFY THE NOISE FACTORS
Step-3: IDENTIFY THE OBJECTIVE FUNCTION TO BE OPTIMIZED
Step-4: IDENTIFY THE CONTROL FACTORS AND THEIR LEVELS
Step-5: SELECT THE ORTHOGONAL ARRAY MATRIX EXPERIMENT
Step-6: CONDUCT THE MATRIX EXPERIMENT
Step-7: ANALYZE THE DATA, PREDICT THE OPTIMUM LEVELS AND PERFORMANCE
Step-8: PERFORM THE VERIFICATION EXPERIMENT AND PLAN THE FUTURE ACTION
In Taguchi Method, the word "optimization" implies "determination of BEST levels of control factors". In turn,
the BEST levels of control factors are those that maximize the Signal-to-Noise ratios. The Signal-to-Noise ratios
are log functions of desired output characteristics. The experiments, that are conducted to determine the BEST
levels, are based on "Orthogonal Arrays", are balanced with respect to all control factors and yet are minimum
in number. This in turn implies that the resources (materials and time) required for the experiments are also
minimum. The Taguchi method divides all problems into 2 categories - STATIC or DYNAMIC. While dynamic problems have a SIGNAL factor, static problems do not have any signal factor. In static problems, the optimization is achieved using 3 signal-to-noise ratios - smaller-the-better, larger-the-better and nominal-the-best. In dynamic problems, the optimization is achieved by using 2 signal-to-noise ratios - slope and linearity.
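For reference, the three static signal-to-noise ratios named above are commonly computed as follows; this is a generic sketch in Python based on the standard Taguchi definitions, not a calculation taken from this report.

import math

def sn_smaller_the_better(values):
    # S/N = -10 * log10(mean of y^2): used when the response should be as small as possible.
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

def sn_larger_the_better(values):
    # S/N = -10 * log10(mean of 1/y^2): used when the response should be as large as possible.
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

def sn_nominal_the_best(values):
    # S/N = 10 * log10(mean^2 / variance): used when the response should hit a target value.
    n = len(values)
    mean = sum(values) / n
    variance = sum((y - mean) ** 2 for y in values) / (n - 1)
    return 10.0 * math.log10(mean * mean / variance)

In every case the preferred factor levels are those that maximize the chosen S/N ratio.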
Taguchi Method is a process/product optimization method that is based on 8-steps of planning, conducting and
evaluating results of matrix experiments to determine the best levels of control factors. The primary goal is to
keep the variance in the output very low even in the presence of noise inputs. Thus, the processes/products are
made ROBUST against all variations. In this paper Taguchi DOE (Design of Experiments) approach is used to
analyze the effect of turning process parameters that is Cutting Speed, Feed and Depth of Cut on material
removal rate while machining on mild steel and to obtain an optimal setting of these parameters that may result
in optimizing material removal rate.
LITERATURE SURVEY
Process Optimization
Process optimization is the discipline of adjusting a process so as to optimize some specified set of parameters without violating some constraint. The most common goals are minimizing cost and maximizing throughput and/or efficiency. This is one of the major quantitative tools in industrial decision-making. When optimizing a process,
the goal is to maximize one or more of the process specifications, while keeping all others within their
constraints. Traditionally, the selection of cutting conditions for metal cutting is left to the machine operator. In
such cases, the experience of the operator plays a major role, but even for a skilled operator it is very difficult to
attain the optimum values each time. Machining parameters in metal turning are cutting speed, feed rate and
depth of cut. The setting of these parameters determines the quality characteristics of turned parts. Following the
pioneering work of Taylor (1907) and his famous tool life equation, different analytical and experimental
approaches for the optimization of machining parameters have been investigated. Gilbert (1950) studied the
optimization of machining parameters in turning with respect to maximum production rate and minimum
production cost as criteria. Armarego & Brown (1969) investigated unconstrained machine-parameter
optimization using differential calculus. Brewer & Rueda (1963) carried out simplified optimum analysis for
non-ferrous materials. For cast iron (CI) and steels, they employed the criterion of reducing the machining cost
to a minimum. A number of nomograms were worked out to facilitate the practical determination of the most
economic machining conditions. They pointed out that the more-difficult-to-machine materials have a restricted
range of parameters over which machining can be carried out and thus any attempt at optimizing their costs is
artificial.
Brewer (1966) suggested the use of Lagrangian multipliers for optimization of the constrained problem of unit cost, with cutting power as the main constraint. Bhattacharya et al (1970) optimized the unit cost for turning, subject to the constraints of surface roughness and cutting power, by the use of Lagrange's method. Walvekar & Lambert (1970) discussed the use of geometric programming for the selection of machining variables. They optimized cutting speed and feed rate to yield minimum production cost. Petropoulos (1973) investigated optimal selection of machining rate variables, viz. cutting speed and feed rate, by geometric programming. A constrained unit cost problem in turning was optimized by machining SAE 1045 steel with a cemented carbide tool of ISO P-10 grade. Sundaram (1978) applied a goal-programming technique in metal cutting for selecting levels of machining parameters in a fine turning operation on AISI 4140 steel using cemented tungsten carbide tools. Ermer & Kromodiharajo (1981) developed a multi-step mathematical model. They concluded that in some cases
with certain constant total depths of cut, multi-pass machining was more economical than single-pass
machining, if depth of cut for each pass was properly allocated. They used high speed steel (HSS) cutting tools
to machine carbon steel. Hinduja et al (1985) described a procedure to calculate the optimum cutting conditions
for turning operations with minimum cost or maximum production rate as the objective function. For a given
combination of tool and work material, the search for the optimum was confined to a feed rate versus depth-of-
cut plane defined by the chip-breaking constraint. Some of the other constraints considered include power
available, work holding, surface finish and dimensional accuracy. Tsai (1986) studied the relationship between
the multi-pass machining and single-pass machining. He presented the concept of a break-even point, i.e. there
is always a point, a certain value of depth of cut, at which single-pass and double-pass machining are equally
effective. When the depth of cut drops below the break-even point, the single-pass is more economical than the
double-pass, and when the depth of cut rises above this break-even point, double-pass is better. Carbide tools are
used to turn the carbon steel work material.
Gopalakrishnan & Khayyal (1991) described the design and development of an analytical tool for the selection
of machine parameters in turning. Geometric programming was used as the basic methodology to determine
values for feed rate and cutting speed that minimize the total cost of machining SAE 1045 steel with cemented
carbide tools of ISO P-10 grade. Surface finish and machine power were taken as the constraints while
optimizing cutting speed and feed rate for a given depth of cut. Agapiou (1992) formulated single-pass and
multi-pass machining operations. Production cost and total time were taken as objectives and a weighting factor
was assigned to prioritize the two objectives in the objective function. He optimized the number of passes, depth
of cut, cutting speed and feed rate in his model, through a multi-stage solution process called dynamic
programming. Several physical constraints were considered and applied in his model. In his solution
methodology, every cutting pass is independent of the previous pass, hence the optimality for each pass is not
reached simultaneously. Prasad et al (1997) reported the development of an optimization module for
determining process parameters for turning operations as part of a PC-based generative CAPP system. The work
piece materials considered in their study include steels, cast iron, aluminium, copper and brass. HSS and carbide
tool materials are considered in this study. The minimization of production time is taken as the basis for
formulating the objective function. The constraints considered in this study include power, surface finish,
tolerance, work piece rigidity, range of cutting speed, maximum and minimum depths of cut and total depth of
cut. Improved mathematical models are formulated by modifying the tolerance and work piece rigidity
constraints for multi-pass turning operations. The formulated models are solved by the combination of
geometric and linear programming techniques.
Process Optimization Tools: Many people relate process optimization directly to the use of statistical techniques to identify the optimum solution. This is not true. Statistical techniques are definitely needed; however, a thorough understanding of the process is required before committing time to optimizing it. Over the years, many methodologies have been developed for process optimization, including the Taguchi method, Six Sigma, lean manufacturing and others. All of these begin with an exercise to create a process map.
Taguchi’s method:
Taguchi's techniques have been used widely in engineering design (Ross 1996 & Phadke 1989). The Taguchi
method contains system design, parameter design, and tolerance design procedures to achieve a robust process
and result for the best product quality (Taguchi 1987 & 1993). The main thrust of Taguchi's techniques is the use
of parameter design (Ealey Lance A.1994), which is an engineering method for product or process design that
focuses on determining the parameter (factor) settings producing the best levels of a quality characteristic
(performance measure) with minimum variation. Taguchi designs provide a powerful and efficient method for
designing processes that operate consistently and optimally over a variety of conditions. To determine the best
design, it requires the use of a strategically designed experiment, which exposes the process to various levels of
design parameters. Experimental design methods were developed in the early years of 20th century and have
been extensively studied by statisticians since then, but they were not easy to use by practitioners (Phadke
1989). Taguchi's approach to design of experiments is easy to adopt and apply for users with limited knowledge of statistics; hence it has gained wide popularity in the engineering and scientific community.
EXPERIMENTAL METHODS
Experiment Design:
A lathe machine has been used to cut a cylindrical mild steel block at different process parameters. The process parameters that affect the characteristics of turned parts are: 1) cutting tool parameters – tool geometry and tool material; 2) workpiece-related parameters – hardness and metallography; 3) cutting parameters – cutting speed, feed and depth of cut; 4) environmental parameters – dry cutting or wet cutting.
Figure 1. Lathe Machine Setup for machining operation
Figure 2. Setup of (a) cutting speed and (b) feed
The following process parameters were selected for the present work: Cutting speed-(A), Feed-(B), Depth of
Cut-(C), Environment-Dry Cutting. The ranges of the selected process parameters were ascertained by
conducting preliminary experiments using one variable at a time approach.
The work material used for this experimentation is mild steel. This material is widely used in various machining
experiments.
Table 1: Chemical composition of mild steel
Element C Al Si Mn P Cu Balance
Wt% 0.16 0.07 0.168 0.18 0.025 0.09 Fe
Table 2: Mechanical Properties of mild steel
Brinell Hardness Number (BHN)   95
Density                         7.85 × 10³ kg/m³
% Elongation                    20
Tensile Strength, Ultimate      250 MPa
Yield Strength                  318 MPa
Poisson's Ratio                 0.3
Fatigue Strength                90 MPa
Shear Strength                  200 MPa
Experimental setup:
The various equipments/work piece and tool material used in performing the tests are listed below.
1. Lathe machine (LEBLOND REGAL LATHE)
2. Work materials (cylindrical block made of mild steel)
3. Tool materials
4. Surface roughness tester
5. Vernier Calipers
6. Micrometer
Surface roughness tester: Roughness is an important parameter when trying to find out whether a surface is
suitable for a certain purpose. Rough surfaces often wear out more quickly than smoother surfaces. Rougher
surfaces are normally more vulnerable to corrosion and cracks, but they can also aid in adhesion. A
roughness tester is used to quickly and accurately determine the surface texture or surface roughness of
a material. A roughness tester shows the measured roughness depth (Rz) as well as the mean roughness
value (Ra) in micrometers or microns (µm). Measuring the roughness of a surface involves applying a
roughness filter. Different international standards and surface texture or surface finish specifications
recommend the use of different roughness filters. For example, a Gaussian filter is often recommended in ISO standards.
Figure 3. Portable surface roughness tester
Table 3: Experimental Conditions
Work piece Material Mild steel
Length of the work piece 35 mm
Diameter of the work piece 25 mm
Lathe Used LEBLOND Regal Lathe
Environment DRY
TAGUCHI METHOD:
Genichi Taguchi is a Japanese engineer who has been active in the improvement of Japan's industrial products and processes since the late 1940s. He has developed both a philosophy and a methodology for process or product quality improvement that depends mainly on statistical concepts and tools, especially statistically designed experiments. Many Japanese firms have achieved great success by applying his methods. Taguchi has received some of Japan's most prestigious awards for quality achievement, including the Deming Prize. In 1986 he received the most prestigious award from the International Technology Institute, the W. F. Rockwell Medal for Excellence in Technology. His major contribution is that he has combined engineering and statistical techniques to achieve rapid improvements in reducing cost and increasing quality by optimizing product design and manufacturing processes. From 1983 Taguchi was associated with top companies and institutes of the USA (Ford Motor Company, Xerox, AT&T, Bell Laboratories, etc.). Taguchi techniques have been described as a radical approach to quality, experimental design and engineering. The Taguchi technique refers to parameter design, tolerance design, the quality loss function, design of experiments using orthogonal arrays, and the methodology applied to evaluate measuring systems.
Pignatiello has identified two different aspects of the Taguchi technique: 1) the strategy of Taguchi and 2) the tactics of Taguchi. The Taguchi strategy is the conceptual framework for planning a process or product design experiment, while the Taguchi tactics refer to the collection of specific techniques used by Taguchi.
Taguchi has addressed Design, Engineering (offline) as well as Manufacturing (online) quality. This concept
differentiates Taguchi technique from Statistical Process Control (SPC), which is purely an online quality
control technique. Taguchi's ideas can be reduced to two fundamental concepts: 1) quality losses should be defined as deviation from target, not conformance to arbitrary specifications; and 2) achieving high system quality levels economically requires quality to be designed into the product. Quality is designed, not manufactured, into the product.
Taguchi techniques represent a new philosophy. Quality is measured by the deviation of a functional
characteristic from its target value. Noises (uncontrollable factors) will cause such deviations which results in
loss of Quality. Taguchi techniques seek to remove the effect of Noises. The most important part of the Taguchi
technique is the quality loss function. Taguchi found that a quadratic function (parabola) approximates the behavior of loss in many cases; when the quality characteristic of interest is to be maximized or minimized, the loss function becomes a half parabola. Loss occurs not only when the product is outside its specification but also when the product falls within its specification. Taguchi has recommended the signal-to-noise ratio (S/N ratio) as
performance statistics. Signal refers to the change in quality characteristics of a product under investigation in
response to a factor introduced in the experimental design. Noise refers to the effect of external factors
(uncontrollable parameters) on the outcome of the quality characteristics.
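The quadratic loss function mentioned here is usually written as L(y) = k(y - m)^2, where m is the target value and k is a cost constant; the short sketch below (with assumed numbers, not data from this project) illustrates that loss grows with any deviation from the target, even for parts that are still within specification.

def taguchi_loss(y, target, k=1.0):
    # Quadratic (parabolic) quality loss: zero at the target, growing with the square of the deviation.
    return k * (y - target) ** 2

for y in (10.0, 10.2, 10.5):                 # assumed measurements; target dimension = 10.0
    print(y, taguchi_loss(y, target=10.0, k=50.0))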
Taguchi started to develop new methods to optimize the process of engineering experimentation. He believed
that the best way to improve quality was to design and build it into the product. He developed the techniques
which are now known as Taguchi Methods. His main contribution lies not in the mathematical formulation of
the design of experiments, but rather in the accompanying philosophy. His concepts produced a unique and
powerful quality improvement technique that differs from traditional practices. He developed manufacturing
systems that were “robust” or insensitive to daily and seasonal variations of environment, machine wear and
other external factors. The Taguchi approach to quality engineering places a great deal of emphasis on
minimizing variation as the main means of improving quality. The idea is to design products and processes
whose performance is not affected by outside conditions and to build this in during the development and design
stage through the use of experimental design. The method includes a set of tables that enable main variables and
interactions to be investigated in a minimum number of trials. Taguchi Method uses the idea of Fundamental
Functionality, which will facilitate people to identify the common goal because it will not change from case to
case and can provide a robust standard for widely and frequently changing situations. It is also pointed out that
the Taguchi Method is also very compatible with the human focused quality evaluation approaches that are
coming up.
Definition
Taguchi envisaged a new method of conducting the design of experiments based on well-defined guidelines. This method uses a special set of arrays called orthogonal arrays. These standard arrays stipulate the way of conducting the minimal number of experiments that can give full information on all the factors affecting the performance parameter. The crux of the orthogonal array method lies in choosing the level combinations of the input design variables for each experiment.
A typical orthogonal array
While there are many standard orthogonal arrays available, each array is meant for a specific number of independent design variables and levels. For example, if one wants to conduct an experiment to understand the influence of 4 different independent variables, each having 3 set values (level values), then an L9 orthogonal array might be the right choice. The L9 orthogonal array is meant for understanding the effect of 4 independent factors, each having 3 factor level values. This array assumes that there is no interaction between any two factors. While in many cases the no-interaction assumption is valid, there are some cases where there is clear evidence of interaction; a typical case would be the interaction between material properties and temperature.
In total, 9 experiments are to be conducted, each based on a combination of level values as given in the standard L9 array (see Table 6).
Properties of an orthogonal array
The orthogonal arrays have the following special properties that reduce the number of experiments to be conducted.
• The vertical column under each independent variable of the array has a special combination of level settings, in which all the level settings appear an equal number of times. For the L9 array, under variable 4, level 1, level 2 and level 3 each appear three times. This is called the balancing property of orthogonal arrays.
• All the level values of the independent variables are used for conducting the experiments.
• The sequence of level values for conducting the experiments shall not be changed. This means one cannot conduct experiment 1 with the variable 1, level 2 setup and experiment 4 with the variable 1, level 1 setup. The reason is that the column of each factor is mutually orthogonal to every other column of level values: the inner product of the vectors of level weights is zero. If the three levels are normalized between -1 and 1, the weighting factors for level 1, level 2 and level 3 are -1, 0 and 1 respectively. Hence the inner product of the weighting factors of independent variable 1 and independent variable 3 is (-1 × -1) + (-1 × 0) + (-1 × 1) + (0 × 0) + (0 × 1) + (0 × -1) + (1 × 0) + (1 × 1) + (1 × -1) = 0.
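This orthogonality can be checked directly; the following sketch (not from the report) codes the standard L9 columns as -1/0/+1 and verifies that every pair of columns has a zero inner product.

from itertools import combinations

# Standard L9(3^4) array, levels coded 1..3; the first three columns match Table 6.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

weight = {1: -1, 2: 0, 3: 1}   # normalize the three levels to -1, 0, +1

for i, j in combinations(range(4), 2):
    inner = sum(weight[row[i]] * weight[row[j]] for row in L9)
    print(f"columns {i + 1} and {j + 1}: inner product = {inner}")   # prints 0 for every pair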
Minimum number of experiments to be conducted
The design of experiments using an orthogonal array is, in most cases, efficient compared with many other statistical designs. The minimum number of experiments required for the Taguchi method can be calculated from the degrees-of-freedom approach: one degree of freedom is assigned to the overall mean and (number of levels − 1) to each factor. For example, for a study of 8 independent variables with 1 variable at 2 levels and the remaining 7 variables at 3 levels (an L18 orthogonal array), the minimum number of experiments required is 1 + 1 × (2 − 1) + 7 × (3 − 1) = 16. Because of the balancing property of orthogonal arrays, the total number of experiments must be a multiple of 2 and 3; hence the number of experiments for this case is 18.
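A minimal sketch of this degrees-of-freedom count (the function name is an assumption for illustration):

def minimum_runs(levels_per_factor):
    # 1 degree of freedom for the overall mean plus (levels - 1) for each factor.
    return 1 + sum(levels - 1 for levels in levels_per_factor)

print(minimum_runs([2] + [3] * 7))   # 16 -> rounded up to 18 runs (L18) for balance
print(minimum_runs([3, 3, 3]))       # 7  -> the L9 array (9 runs) used in this project covers it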
Assumptions of the Taguchi method
The additive assumption implies that the individual or main effects of the independent variables on the performance parameter are separable. Under this assumption, the effect of each factor can be linear, quadratic or of higher order, but the model assumes that there exist no cross-product effects (interactions) among the individual factors. That means the effect of independent variable 1 on the performance parameter does not depend on the level settings of any other independent variable, and vice versa. If at any time this assumption is violated, the additivity of the main effects does not hold, and the variables interact.
Designing an experiment
The design of an experiment involves the following steps
1 Selection of independent variables
2 Selection of number of level settings for each independent variable
3 Selection of orthogonal array
4 Assigning the independent variables to each column
5 Conducting the experiments
6 Analyzing the data
7 Inference
The details of the above steps are given below.
Selection of the independent variables
Before conducting the experiment, the knowledge of the product/process under investigation is of prime
importance for identifying the factors likely to influence the outcome. In order to compile a comprehensive list
of factors, the input to the experiment is generally obtained from all the people involved in the project.
Deciding the number of levels
Once the independent variables are decided, the number of levels for each variable is decided. The selection of
number of levels depends on how the performance parameter is affected due to different level settings. If the
performance parameter is a linear function of the independent variable, then the number of level setting shall be
2. However, if the independent variable is not linearly related, then one could go for 3, 4 or higher levels
depending on whether the relationship is quadratic, cubic or higher order.
In the absence of exact nature of relationship between the independent variable and the performance parameter,
one could choose 2 level settings. After analyzing the experimental data, one can decide whether the assumption
of level setting is right or not based on the percent contribution and the error calculations.
Selection of an orthogonal array
Before selecting the orthogonal array, the minimum number of experiments to be conducted shall be fixed based
on the total number of degrees of freedom [5] present in the study. The minimum number of experiments that
must be run to study the factors shall be more than the total degrees of freedom available. In counting the total
degrees of freedom the investigator commits 1 degree of freedom to the overall mean of the response under
study. The number of degrees of freedom associated with each factor under study equals one less than the
number of levels available for that factor. Hence the total degrees of freedom without interaction effects is 1 plus the sum of (number of levels − 1) over all factors, as given earlier. For example, in the case of 11 independent variables each having 2 levels, the total degrees of freedom are 1 + 11 × (2 − 1) = 12. Hence the selected orthogonal array shall have at least 12 experiments; an L12 orthogonal array satisfies this requirement.
Once the minimum number of experiments is decided, the further selection of orthogonal array is based on the
number of independent variables and number of factor levels for each independent variable.
Assigning the independent variables to columns
The order in which the independent variables are assigned to the vertical columns is essential. In the case of mixed-level variables and interactions between variables, the variables are to be assigned to the right columns as stipulated by the orthogonal array [3].
Finally, before conducting the experiment, the actual level values of each design variable shall be decided. It shall be noted that the significance and the percent contribution of the independent variables change depending on the level values assigned; it is the designer's responsibility to set proper level values.
Conducting the experiment
Once the orthogonal array is selected, the experiments are conducted as per the level combinations. It is
necessary that all the experiments be conducted. The interaction columns and dummy variable columns shall not
be considered for conducting the experiment, but are needed while analyzing the data to understand the
interaction effect. The performance parameter under study is noted down for each experiment to conduct the
sensitivity analysis.
Analysis of the data
Since each experiment is a combination of different factor levels, it is essential to segregate the individual effects of the independent variables. This can be done by summing up the performance parameter values for the corresponding level settings. For example, in order to find the main effect of the level 1 setting of independent variable 2, sum the performance parameter values of experiments 1, 4 and 7. Similarly, for level 2, sum the results of experiments 2, 5 and 8, and so on.
Once the mean value of each level of a particular independent variable is calculated, the sum of the squared deviations of each mean value from the grand mean is calculated. This sum of squared deviations for a particular variable indicates whether the performance parameter is sensitive to changes in its level setting. If the sum of squared deviations is close to zero or insignificant, one may conclude that the design variable does not influence the performance of the process. In other words, by conducting this sensitivity analysis and performing analysis of variance (ANOVA), one can decide which independent factor dominates over the others and the percentage contribution of that particular independent variable. The details of analysis of variance are not elaborated here.
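The bookkeeping described above can be sketched in a few lines of Python; the level assignments follow Table 6, while the response values here are placeholder numbers assumed purely for illustration.

# L9 level assignments (as in Table 6) and one response value per trial (placeholder data).
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]
y = [5.0, 6.1, 4.8, 7.2, 6.9, 5.5, 8.0, 7.4, 6.6]   # assumed values, one per trial

grand_mean = sum(y) / len(y)

for factor in range(3):                               # factor 0 = first column, and so on
    level_means = {}
    for level in (1, 2, 3):
        vals = [yi for row, yi in zip(L9, y) if row[factor] == level]
        level_means[level] = sum(vals) / len(vals)
    # Sum of squared deviations of the level means from the grand mean (times trials per level):
    # a value near zero suggests the factor has little influence on the response.
    ss = sum(3 * (m - grand_mean) ** 2 for m in level_means.values())
    print(f"factor {factor + 1}: level means {level_means}, sum of squares {ss:.3f}")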
Inference
From the above experimental analysis, it is clear that the higher the value of the sum of squares of an independent variable, the more influence it has on the performance parameter. One can also calculate the ratio of the individual sum of squares of a particular independent variable to the total sum of squares of all the variables; this ratio gives the percent contribution of the independent variable to the performance parameter. In addition to the above, one can find a near-optimal solution to the problem. This near-optimum value may not be the global optimal solution; however, it can be used as an initial/starting value for a standard optimization technique.
Table 4: Process parameters with their values at 3 levels
Process
Parameters
Parameter
Designation
Levels
L1 L2 L3
Speed (rpm) A 139 212 318
Feed (mm/rev) B 0.114 0.091 0.078
Depth of Cut (mm) C 0.317 0.635 0.952
The experimental layout was developed based on Taguchi’s Orthogonal Array Experimentation Technique.
An L9 Orthogonal Array Experimental layout was selected to satisfy the minimum number of experiment
conditions for the factors and levels presented in Table 5.
Table 5: Factors, Levels and Degrees of Freedom
Factor Code Factor No of Levels Degree of Freedom
A Speed 3 2
B Feed 3 2
C Depth of Cut 3 2
Total degrees of freedom 6
Minimum number of Experiments 7
Design of experiments:
Table 6: Standard L9 Orthogonal Array
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 2
5 2 2 3
6 2 3 1
7 3 1 3
8 3 2 1
9 3 3 2
SURFACE ROUGHNESS ANALYSIS
Surface roughness, often shortened to roughness, is a component of surface texture. It is quantified by the
deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large,
the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically
considered to be the high frequency, short-wavelength component of a measured surface. However, in practice it is
often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose. Roughness
plays an important role in determining how a real object will interact with its environment. In tribology, rough
surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often
a good predictor of the performance of a mechanical component, since irregularities on the surface may form
nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking,
rather than scale specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful
predictions of mechanical interactions at surfaces including contact stiffness and static friction.
A portable surface roughness tester is used after every trial to get the surface roughness value in µm.
The data observed for surface roughness for the 9 experiments are given below:
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)   SR (µm)
1           139           0.114           0.317      0.74
2           139           0.091           0.635      1.52
3           139           0.078           0.952      0.37
4           212           0.114           0.635      4.32
5           212           0.091           0.952      5.68
6           212           0.078           0.317      1.53
7           318           0.114           0.952      3.72
8           318           0.091           0.317      4.92
9           318           0.078           0.635      6.53
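As an illustration of how the smaller-the-better S/N analysis described earlier could be applied to this table (a sketch only; the report itself does not tabulate S/N ratios), the per-trial S/N values and the mean S/N at each factor level can be computed as follows. The level with the highest mean S/N would be the preferred setting for minimizing Ra.

import math

# (speed rpm, feed mm/rev, DOC mm, Ra um) taken from the table above
trials = [
    (139, 0.114, 0.317, 0.74), (139, 0.091, 0.635, 1.52), (139, 0.078, 0.952, 0.37),
    (212, 0.114, 0.635, 4.32), (212, 0.091, 0.952, 5.68), (212, 0.078, 0.317, 1.53),
    (318, 0.114, 0.952, 3.72), (318, 0.091, 0.317, 4.92), (318, 0.078, 0.635, 6.53),
]

def sn_smaller_the_better(y):
    # With a single observation per trial this reduces to -20 * log10(y).
    return -10.0 * math.log10(y * y)

sn = [sn_smaller_the_better(ra) for *_, ra in trials]

for col, name in enumerate(["Speed", "Feed", "DOC"]):
    levels = sorted(set(t[col] for t in trials))
    means = {lv: sum(s for t, s in zip(trials, sn) if t[col] == lv) / 3 for lv in levels}
    print(name, {lv: round(m, 2) for lv, m in means.items()})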
Regression analysis generates an equation to describe the statistical relationship between one or more predictor
variables and the response variable. After we use Minitab Statistical Software to fit a regression model, and verify
the fit by checking the residual plots, we will want to interpret the results. The p-value for each term tests the null
hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that we can reject the
null hypothesis. In other words, a predictor that has a low p-value is likely to be a meaningful addition to our
model because changes in the predictor's value are related to changes in the response variable. Conversely, a larger
(insignificant) p-value suggests that changes in the predictor are not associated with changes in the response.
Using Minitab software, the regression model has been developed for the above experiment.

Table 8: General Linear Model for Surface Roughness Ra
Factor   Type    Levels   Values
Speed    Fixed   3        139, 212, 318
Feed     Fixed   3        0.114, 0.091, 0.078
DOC      Fixed   3        0.317, 0.635, 0.952

The regression equation is:
Ra (µm) = -8.41 + 0.029*Speed (rpm) + 24.058*Feed (mm/rev) + 6.049*DOC (mm)
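For readers without Minitab, an ordinary least-squares fit of the same form can be reproduced with NumPy. This is only a sketch under the assumption that the reported equation is a plain linear fit of Ra on the three parameters; the coefficients it produces may differ somewhat from the Minitab output quoted above.

import numpy as np

# Columns: Speed (rpm), Feed (mm/rev), DOC (mm), Ra (um) from the table above.
data = np.array([
    [139, 0.114, 0.317, 0.74], [139, 0.091, 0.635, 1.52], [139, 0.078, 0.952, 0.37],
    [212, 0.114, 0.635, 4.32], [212, 0.091, 0.952, 5.68], [212, 0.078, 0.317, 1.53],
    [318, 0.114, 0.952, 3.72], [318, 0.091, 0.317, 4.92], [318, 0.078, 0.635, 6.53],
])

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1], data[:, 2]])  # intercept + predictors
y = data[:, 3]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
print("Ra = %.3f + %.4f*Speed + %.3f*Feed + %.3f*DOC" % tuple(coef))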
Material Removal Rate Analysis
Material Removal Rate (MRR), otherwise known as Metal Removal Rate, is the measurement for how much
material is removed from a part in a given period of time. The material removal rate can be calculated from the
volume of material removal or from the weight difference before and after machining. It is an indication of how
fast or slow the machining rate is and an important performance parameter in micro-EDM, as this is usually a very
slow process. Higher machining productivity must also be achieved with a desired accuracy and surface finish.
The MRR greatly depends on the process parameters. A higher value of discharging voltage, peak current, pulse
duration, duty cycle, and lower values of pulse interval can result in higher MRR. In addition to these electrical
parameters, other nonelectrical parameters and material properties have significant influence on MRR.
The MRR for turning is calculated as: MRR = Speed * Feed * DOC
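For the product Speed * Feed * DOC to carry the units mm³/min, the speed term has to be the cutting speed in mm/min (v = pi * D * N) rather than the spindle speed in rev/min. The sketch below computes this theoretical volumetric MRR under that assumption, using the 25 mm diameter from Table 3; the tabulated values that follow were observed in the experiments (e.g. from volume or weight difference, as noted above) and need not coincide with this simple estimate.

import math

def cutting_speed_mm_per_min(spindle_rpm, diameter_mm):
    # v = pi * D * N converts spindle speed (rev/min) into cutting speed (mm/min).
    return math.pi * diameter_mm * spindle_rpm

def mrr_turning(cutting_speed, feed_mm_per_rev, doc_mm):
    # Volumetric material removal rate, MRR = v * f * d, in mm^3/min.
    return cutting_speed * feed_mm_per_rev * doc_mm

v = cutting_speed_mm_per_min(spindle_rpm=139, diameter_mm=25)   # trial-1 settings, D from Table 3
print(round(mrr_turning(v, feed_mm_per_rev=0.114, doc_mm=0.317), 1), "mm^3/min")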
The data observed for material removal rate for the 9 experiments are given below:
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)   MRR (mm³/min)
1           139           0.114           0.317      16.12
2           139           0.091           0.635      20.03
3           139           0.078           0.952      25.61
4           212           0.114           0.635      18.98
5           212           0.091           0.952      23.31
6           212           0.078           0.317      10.96
7           318           0.114           0.952      20.63
8           318           0.091           0.317      8.04
9           318           0.078           0.635      12.24
Using Minitab software, the regression model has been developed for the above experiment.

Table 9: General Linear Model for MRR
Factor   Type    Levels   Values
Speed    Fixed   3        139, 212, 318
Feed     Fixed   3        0.114, 0.091, 0.078
DOC      Fixed   3        0.317, 0.635, 0.952

The regression equation is:
MRR (mm³/min) = 7.67 - 0.036*Speed (rpm) + 70*Feed (mm/rev) + 17.26*DOC (mm)
Conclusion	
Following are the conclusions drawn based on the test conducted on the cylindrical mild steel block during turning
operation in a lathe machine.
1. From the results obtained, regression models have been developed for Surface Roughness and Material Removal Rate. From these equations we can predict the values of Surface Roughness and MRR if the values of Cutting Speed, Feed and Depth of Cut are known.
2. The validation experiment confirmed that the error between the predicted and actual values was less than 2.0 %.
3. The optimal settings of process parameters for optimal Surface Roughness and MRR are:
Speed (139 rpm), Feed (0.078 mm/rev), and DOC (0.952 mm)
The SR and MRR at these conditions are found to be 0.37 µm and 25.61 mm³/min respectively.
This project shows how Taguchi's parameter design can be used to obtain the optimum condition with the lowest cost and a minimum number of experiments, and industrial engineers can readily apply this method. The work can be extended to other materials using tool nose radius, lubricant, material hardness, etc. as additional parameters.
REFERENCES
1 Kansal, H. K., Singh, S., and Kumar, P. Effect of silicon powder mixed EDM on machining rate of AISI D2 die
steel. J. Manuf. Process., 2007, 9, 13–21.
2 Kumar, S., Singh, R., Singh, T. P., and Sethi, B. L. Comparison of material transfer in electrical discharge
machining of AISI H13 die steel. Proc. Inst. Mech. Engrs, Part C: J. Mech. Engng Sci., 2009, 223(7), 1733–1740.
3 Uno, Y., Okada, A., and Cetin, S. Surface modification of EDMed surface with powder mixed fluid.
Proc. 2nd Int. Conf. on Design and Production of Dies and Molds, 2001.
4 Pecas, P. and Henriques, E. Effect of powder concentration and dielectric flow in the surface morphology in
electrical discharge machining with powder mixed dielectric (PMD-EDM). Int. J. Adv. Manuf. Technol., 2008, 37,
1120–1132.
5 Pecas, P. and Henriques, E. Influence of silicon powder-mixed dielectric on conventional electrical discharge
machining. Int. J. Machine Tools Manuf., 2003, 43, 1465–1471.
6 Wong, Y. S., Lim, L. C., Rahuman, I., and Tee, W. M. Near-mirror-finish phenomena in EDM using powder-
mixed dielectric. J. Mater. Process. Technol., 1998, 79, 30–40.
7 Wu, K. L., Yan, B. H., Huang, F. Y., and Chen, S. C. Improvement of surface finish on SKD steel using electro-
discharge machining with aluminum and surfactant added dielectric. Int. J. Machine Tools Manuf., 2005, 45,
1195–1201.
8 Jeswani, M. L. Effect of the addition of graphite powder to kerosene used as a dielectric fluid in electrical
discharge machining. Int. J. Mater. Process. Technol., 1981, 70, 133–139.
9 Furutani, K., Saneto, A., Takezawa, M. N., and Miyake, H. Accretion of titanium carbide by electrical
discharge machining with powder suspended in working fluid. J. Int. Soc. Precision Engng Nanotech., 2001, 25,
138–144.
10 Çoğun, C., Özerkan, B., and Karaçay, T. An experimental investigation on the effect of powder mixed
dielectric on machining performance in electric discharge machining. Proc. Inst. Mech. Engrs, Part B: J. Engng
Manuf., 2006, 220(7), 1035–1050.
11 Kozak, J., Rozenek, M., and Dabrowski, L. Study of electrical discharge machining using powder suspended
working media. Proc. Inst. Mech. Engrs, Part B: J. Engng Manuf., 2003, 217(11), 1597–1602.
More Related Content

PPTX
Sand Casting
PPTX
Milling machine(husain)
PDF
Lecture 1 manufacturing processes
PPT
Lecture 01 introduction to manufacturing
PPTX
01-part1-01 Turning Process
PPTX
Hot Rolling And cold rolling process
PPTX
Types of grinding machines
Sand Casting
Milling machine(husain)
Lecture 1 manufacturing processes
Lecture 01 introduction to manufacturing
01-part1-01 Turning Process
Hot Rolling And cold rolling process
Types of grinding machines

What's hot (20)

PDF
Milling Fixture
PPTX
Vibratory conveyor
PPTX
Machining processes and types
PPTX
Extrusion defects
PPTX
Design considerations and engineering materials
PPTX
PPT on Milling
PPTX
cutting operation manufacturing process
PDF
METAL FORMING PROCESS
PPTX
Power hacksaw machine
PPTX
Fasteners Presentation
PPTX
Cutting power & Energy Consideration in metal cutting
PPTX
Abrasive water jet machining
PPTX
Tool holding devices
PDF
Punching and Blanking Process (Sheet Metal Forming)
PPTX
Rolling Process
PDF
4.Merchant’s circle diagram.pdf
PPTX
PPTX
PPT on Lathe machine
PPTX
Non traditional machining processes
PPTX
Lecture 3 Material Handling Equipment (Hoisting Equipment)
Milling Fixture
Vibratory conveyor
Machining processes and types
Extrusion defects
Design considerations and engineering materials
PPT on Milling
cutting operation manufacturing process
METAL FORMING PROCESS
Power hacksaw machine
Fasteners Presentation
Cutting power & Energy Consideration in metal cutting
Abrasive water jet machining
Tool holding devices
Punching and Blanking Process (Sheet Metal Forming)
Rolling Process
4.Merchant’s circle diagram.pdf
PPT on Lathe machine
Non traditional machining processes
Optimizing Material removal rate and surface roughness using Taguchi technique

  • 3. 3 INTRODUCTION The objective of this advanced design project is to obtain an optimal setting of turning parameters (Cutting speed, Feed and Depth of Cut), which results in an optimal value of material removal rate (MRR) while machining a cylindrical bar made of mild steel. In our study, an attempt has been made to generate a model to predict material removal rate using Regression Technique. Also an attempt has been made to optimize the process parameters using Taguchi Technique. A three level orthogonal array L9 was selected to satisfy the minimum number of experiment conditions for the factors and levels presented in this project. From past so many years it has been recognized that conditions during machining such as Cutting speed, Feed and Depth of Cut (DOC) should be selected to optimize the economics of machining operations. Manufacturing industries in developing countries suffer from a major drawback of not running the machine at their optimal operating conditions. Machining industries are dependent on the experience and skills of the machine tool operators for optimal selection of cutting conditions. In machining industries the practice of using handbook based conservative cutting conditions are in progress at the process planning level. The disadvantage of this unscientific practice is the decrease in productivity due to sub optimal use of machining capability. The literature survey has revealed that several researchers attempted to calculate the optimal cutting conditions in turning operations. Armarego and Brown used the concept of maxima / minima of differential calculus to optimize machining variable in turning operation. Brewer and Rueda have developed different monograms, which assist in the selection of optimum conditions. Some of the other techniques, which have been used to optimize the machining parameters, include goal programming and geometrical programming. Now a day more attention is given to MRR and Material removal rate of the product in the industries. Material removal rate is the most important criteria in determining the machinability of the material. Material removal rate and material removal rate are the major factors needed to predict the machining performances of any machining operation. Most of the material removal rate prediction models are empirical and they are generally based on experiments conducted in the laboratory. Also it is difficult in practice, to keep all factors under control as required to obtain the reproducible results. Optimization of machining parameters increases the utility for machining economics and also increases the product quality to greater extent. Machining is a subtractive manufacturing process that is used for removing material controllably from a piece of raw material. The objective of this process is to produce final product with proper shape and controlled dimension from the workpiece (raw material). In order to manufacture product with accurate dimension, proper tolerance and acceptable surface finish, several things need to be considered during machining such as, tool material, workpiece material, type of machining process, etc. Relative motion between the tool and workpiece is the primary reason behind all machining process that causes material to be removed from the workpiece, however, tool shape and tool penetration into the workpiece are also responsible for this. 
Relative motion between the tool and workpiece is achieved by the combined action of the primary motion called ‘cutting speed’ and secondary motion called ‘feed speed.’ Modern days’ machining process is mostly carried out by computer numerical control (CNC) machine in which software based automatic control system carries out the movement and operation of the machine tools. Turning is a form of machining, a material removal process, which is used to create rotational parts by cutting away unwanted material. The turning process requires a turning machine or lathe, workpiece, fixture, and cutting tool. The workpiece is a piece of pre-shaped material that is secured to the fixture, which itself is attached to the turning machine, and allowed to rotate at high speeds. The cutter is typically a single-point cutting tool that is also secured in the machine, although some operations make use of multi-point tools. The cutting tool feeds into the rotating workpiece and cuts away material in the form of small chips to create the desired shape. Turning is used to produce rotational, typically axi-symmetric, parts that have many features, such as holes, grooves, threads, tapers, various diameter steps, and even contoured surfaces. Parts that are fabricated completely through turning often include components that are used in limited quantities, perhaps for prototypes, such as custom designed shafts and fasteners. Turning is also commonly used as a secondary process to add or refine features on parts that were manufactured using a different process. Due to the high tolerances and surface finishes that turning can offer, it is ideal for adding precision rotational features to a part whose basic shape has already been formed. Cutting parameters In turning, the speed and motion of the cutting tool is specified through several parameters. These parameters are selected for each operation based upon the workpiece material, tool material, tool size, and more. • Cutting feed - The distance that the cutting tool or workpiece advances during one revolution of the spindle, measured in inches per revolution (IPR). In some operations the tool feeds into the workpiece and in others the workpiece feeds into the tool. For a multi-point tool, the cutting feed is also equal to the
  • 4. 4 feed per tooth, measured in inches per tooth (IPT), multiplied by the number of teeth on the cutting tool. • Cutting speed - The speed of the workpiece surface relative to the edge of the cutting tool during a cut, measured in surface feet per minute (SFM). • Spindle speed - The rotational speed of the spindle and the workpiece in revolutions per minute (RPM). The spindle speed is equal to the cutting speed divided by the circumference of the workpiece where the cut is being made. In order to maintain a constant cutting speed, the spindle speed must vary based on the diameter of the cut. If the spindle speed is held constant, then the cutting speed will vary. • Feed rate - The speed of the cutting tool's movement relative to the workpiece as the tool makes a cut. The feed rate is measured in inches per minute (IPM) and is the product of the cutting feed (IPR) and the spindle speed (RPM). • Axial depth of cut - The depth of the tool along the axis of the workpiece as it makes a cut, as in a facing operation. A large axial depth of cut will require a low feed rate, or else it will result in a high load on the tool and reduce the tool life. Therefore, a feature is typically machined in several passes as the tool moves to the specified axial depth of cut for each pass. In simple terms, a lathe machine removes material from a workpiece with the goal of achieving the preferred size and shape. Ultimately, the machine holds a wood or metal workpiece so that through grooving, chamfering, facing, turning, forming, and so on, the product comes out to the customer’s specifications. As you can imagine, there are many reasons to use a lathe machine. For example, furniture manufacturers use this type of machine to hold pieces of wood in place. Throughout the lathing process, the once block of wood transforms into a gorgeous finished table leg. The working of a lathe machine also involves metals such as aluminum. In this case, the machine might hold metal that becomes a shaft for the aerospace or automotive industry. You should also note that there are different types of lathe machines, each used for a particular purpose. Some examples of these include a multi-spindle lathe, CNC lathe, a combination lathe, a turret lathe, and a center lathe. While each machine performs in much the same way, there are differences in parts and uses. Machining is a term used to describe a variety of material removal processes in which a cutting tool removes unwanted material from a workpiece to produce the desired shape. The workpiece is typically cut from a larger piece of stock, which is available in a variety of standard shapes, such as flat sheets, solid bars, hollow tubes, and shaped beams. Machining can also be performed on an existing part, such as a casting or forging. Parts that are machined from a pre-shaped workpiece are typically cubic or cylindrical in their overall shape, but their individual features may be quite complex. Machining can be used to create a variety of features including holes, slots, pockets, flat surfaces, and even complex surface contours. Also, while machined parts are typically metal, almost all materials can be machined, including metals, plastics, composites, and wood. For these reasons, machining is often considered the most common and versatile of all manufacturing processes. As a material removal process, machining is inherently not the most economical choice for a primary manufacturing process. 
Material, which has been paid for, is cut away and discarded to achieve the final part. Also, despite the low setup and tooling costs, long machining times may be required and therefore be cost prohibitive for large quantities. As a result, machining is most often used for limited quantities as in the fabrication of prototypes or custom tooling for other manufacturing processes. Machining is also very commonly used as a secondary process, where minimal material is removed and the cycle time is short. Due to the high tolerance and surface finishes that machining offers, it is often used to add or refine precision features to an existing part or smooth a surface to a fine finish. As mentioned above, machining includes a variety of processes that each removes material from an initial workpiece or part. The most common material removal processes, sometimes referred to as conventional or traditional machining, are those that mechanically cut away small chips of material using a sharp tool. Non- conventional machining processes may use chemical or thermal means of removing material. Conventional machining processes are often placed in three categories - single point cutting, multi-point cutting, and abrasive machining. Each process in these categories is uniquely defined by the type of cutting tool used and the general motion of that tool and the workpiece. However, within a given process a variety of operations can be performed, each utilizing a specific type of tool and cutting motion. The machining of a part will typically require a variety of operations that are performed in a carefully planned sequence to create the desired features. Turning is used to produce rotational, typically axi-symmetric, parts that have many features, such as holes, grooves, threads, tapers, various diameter steps, and even contoured surfaces. Parts that are fabricated completely through turning often include components that are used in limited quantities, perhaps for prototypes, such as custom designed shafts and fasteners. Turning is also commonly used as a secondary process to add or refine features on parts that were manufactured using a different process. Due to the high tolerances and surface finishes that turning can offer, it is ideal for adding precision rotational features to a part whose basic shape has already been formed. Turning is a form of machining, a material removal process, which is used to create rotational parts by cutting away unwanted material. The turning process requires a turning machine or lathe, workpiece, fixture, and cutting tool. The workpiece is a piece of pre-shaped material that is secured to the fixture, which itself is attached to the turning machine, and allowed to rotate at high speeds. The cutter is
  • 5. 5 typically a single-point cutting tool that is also secured in the machine, although some operations make use of multi-point tools. The cutting tool feeds into the rotating workpiece and cuts away material in the form of small chips to create the desired shape.
Main Parts of the Lathe Machine
The following are the primary components associated with a lathe machine.
Apron – Consists of all mechanisms that control and move the carriage
Bed – Main body of the lathe machine onto which all primary components bolt
Carriage – Holds and moves the tool post vertically and horizontally on the bed
Chip Pan – Carries all chips removed from the workpiece
Chuck – Holds the workpiece and is bolted on the spindle that rotates both the chuck and workpiece
Guide Ways – Handle the movement of the carriage and tailstock on the bed
Head Stock – Serves as a holding device for the spindle, gear chain, driving pulley, and more
Lead Screw – Automatically moves the carriage during the thread cutting process
Legs – Carry the load of the machine
Speed Controller – Controls the spindle's speed
Spindle – Holds and rotates the chuck
Tool Post – Holds the tool at a precise position
Tail Stock – Supports the job as needed and is used for drilling operations
Taguchi has proposed off-line quality improvement in place of attempting to inspect quality into the product on the production line. He observed that no amount of inspection can put quality back into the product; it merely treats a symptom. Taguchi recommended a three-stage process to achieve the desirable product quality by design: 1) system design, 2) parameter design and 3) tolerance design. Parameter design produces the best performance of the product or process under study: the optimal condition is selected so that the influence of noise factors (uncontrollable factors) causes minimum variation in the studied performance. Orthogonal arrays, variance analysis and signal-to-noise analysis are the essential tools of parameter design. Tolerance design is used to fine-tune the results of parameter design by tightening the tolerance of the parameter with the most significant influence on the product.
8 STEPS IN TAGUCHI METHODOLOGY:
Step-1: IDENTIFY THE MAIN FUNCTION, SIDE EFFECTS, AND FAILURE MODE
Step-2: IDENTIFY THE NOISE FACTORS
Step-3: IDENTIFY THE OBJECTIVE FUNCTION TO BE OPTIMIZED
Step-4: IDENTIFY THE CONTROL FACTORS AND THEIR LEVELS
Step-5: SELECT THE ORTHOGONAL ARRAY MATRIX EXPERIMENT
Step-6: CONDUCT THE MATRIX EXPERIMENT
Step-7: ANALYZE THE DATA, PREDICT THE OPTIMUM LEVELS AND PERFORMANCE
Step-8: PERFORM THE VERIFICATION EXPERIMENT AND PLAN THE FUTURE ACTION
In the Taguchi method, the word "optimization" implies "determination of the BEST levels of control factors". In turn, the BEST levels of control factors are those that maximize the signal-to-noise ratios. The signal-to-noise ratios are log functions of the desired output characteristics. The experiments conducted to determine the BEST levels are based on orthogonal arrays: they are balanced with respect to all control factors and yet are minimum in number. This in turn implies that the resources (materials and time) required for the experiments are also minimum. The Taguchi method divides all problems into two categories, STATIC or DYNAMIC. While dynamic problems have a SIGNAL factor, static problems do not have any signal factor. In static problems, optimization is achieved using three signal-to-noise ratios: smaller-the-better, larger-the-better and nominal-the-best.
In dynamic problems, the optimization is achieved using two signal-to-noise ratios: slope and linearity. The Taguchi method is a process/product optimization method based on eight steps of planning, conducting and evaluating the results of matrix experiments to determine the best levels of control factors. The primary goal is to keep the variance in the output very low even in the presence of noise inputs; thus, the processes/products are made ROBUST against all variations. In this project the Taguchi DOE (Design of Experiments) approach is used to analyze the effect of the turning process parameters, that is, cutting speed, feed and depth of cut, on the material removal rate while machining mild steel, and to obtain an optimal setting of these parameters that optimizes the material removal rate.
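The three static signal-to-noise ratios mentioned above have standard definitions: smaller-the-better uses S/N = -10*log10(mean of y^2), larger-the-better uses S/N = -10*log10(mean of 1/y^2), and nominal-the-best uses S/N = 10*log10(mean^2/variance). A minimal Python sketch of these formulas follows; the replicate values in it are hypothetical and are not measurements from this project.

import math
from statistics import mean, variance

def sn_smaller_the_better(y):
    # S/N = -10*log10(mean(y^2)); used when the response should be as small as possible (e.g. Ra)
    return -10.0 * math.log10(mean(v * v for v in y))

def sn_larger_the_better(y):
    # S/N = -10*log10(mean(1/y^2)); used when the response should be as large as possible (e.g. MRR)
    return -10.0 * math.log10(mean(1.0 / (v * v) for v in y))

def sn_nominal_the_best(y):
    # S/N = 10*log10(mean^2 / variance); used when the response should hit a target value
    return 10.0 * math.log10(mean(y) ** 2 / variance(y))

# Hypothetical repeated readings of a single trial, for illustration only
replicates = [1.52, 1.48, 1.55]
print(round(sn_smaller_the_better(replicates), 2))
print(round(sn_larger_the_better(replicates), 2))
print(round(sn_nominal_the_best(replicates), 2))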
  • 6. 6 LITERATURE SURVEY
Process Optimization
Process optimization is the discipline of adjusting a process to optimize some specified set of parameters without violating some constraint. The most common goals are minimizing cost and maximizing throughput and/or efficiency. This is one of the major quantitative tools in industrial decision-making. When optimizing a process, the goal is to maximize one or more of the process specifications, while keeping all others within their constraints. Traditionally, the selection of cutting conditions for metal cutting is left to the machine operator. In such cases, the experience of the operator plays a major role, but even for a skilled operator it is very difficult to attain the optimum values each time. Machining parameters in metal turning are cutting speed, feed rate and depth of cut. The setting of these parameters determines the quality characteristics of turned parts. Following the pioneering work of Taylor (1907) and his famous tool life equation, different analytical and experimental approaches for the optimization of machining parameters have been investigated. Gilbert (1950) studied the optimization of machining parameters in turning with respect to maximum production rate and minimum production cost as criteria. Armarego & Brown (1969) investigated unconstrained machine-parameter optimization using differential calculus. Brewer & Rueda (1963) carried out simplified optimum analysis for non-ferrous materials. For cast iron (CI) and steels, they employed the criterion of reducing the machining cost to a minimum. A number of nomograms were worked out to facilitate the practical determination of the most economic machining conditions. They pointed out that the more-difficult-to-machine materials have a restricted range of parameters over which machining can be carried out and thus any attempt at optimizing their costs is artificial. Brewer (1966) suggested the use of Lagrangian multipliers for optimization of the constrained problem of unit cost, with cutting power as the main constraint. Bhattacharya et al (1970) optimized the unit cost for turning, subject to the constraints of surface roughness and cutting power, by the use of Lagrange's method. Walvekar & Lambert (1970) discussed the use of geometric programming for the selection of machining variables. They optimized cutting speed and feed rate to yield minimum production cost. Petropoulos (1973) investigated optimal selection of machining rate variables, viz. cutting speed and feed rate, by geometric programming. A constrained unit cost problem in turning was optimized by machining SAE 1045 steel with a cemented carbide tool of ISO P-10 grade. Sundaram (1978) applied a goal-programming technique in metal cutting for selecting levels of machining parameters in a fine turning operation on AISI 4140 steel using cemented tungsten carbide tools. Ermer & Kromodiharajo (1981) developed a multi-step mathematical model. They concluded that in some cases with certain constant total depths of cut, multi-pass machining was more economical than single-pass machining, if the depth of cut for each pass was properly allocated. They used high speed steel (HSS) cutting tools to machine carbon steel. Hinduja et al (1985) described a procedure to calculate the optimum cutting conditions for turning operations with minimum cost or maximum production rate as the objective function.
For a given combination of tool and work material, the search for the optimum was confined to a feed rate versus depth-of- cut plane defined by the chip-breaking constraint. Some of the other constraints considered include power available, work holding, surface finish and dimensional accuracy. Tsai (1986) studied the relationship between the multi-pass machining and single-pass machining. He presented the concept of a break-even point, i.e. there is always a point, a certain value of depth of cut, at which single-pass and double-pass machining are equally effective. When the depth of cut drops below the break-even point, the single-pass is more economical than the double-pass, and when the depth of cut rises above this break-even point, double-pass is better. Carbide tools are used to turn the carbon steel work material. Gopalakrishnan & Khayyal (1991) described the design and development of an analytical tool for the selection of machine parameters in turning. Geometric programming was used as the basic methodology to determine values for feed rate and cutting speed that minimize the total cost of machining SAE 1045 steel with cemented carbide tools of ISO P-10 grade. Surface finish and machine power were taken as the constraints while optimizing cutting speed and feed rate for a given depth of cut. Agapiou (1992) formulated single-pass and multi-pass machining operations. Production cost and total time were taken as objectives and a weighting factor was assigned to prioritize the two objectives in the objective function. He optimized the number of passes, depth of cut, cutting speed and feed rate in his model, through a multi-stage solution process called dynamic programming. Several physical constraints were considered and applied in his model. In his solution methodology, every cutting pass is independent of the previous pass, hence the optimality for each pass is not reached simultaneously. Prasad et al (1997) reported the development of an optimization module for determining process parameters for turning operations as part of a PC-based generative CAPP system. The work piece materials considered in their study include steels, cast iron, aluminium, copper and brass. HSS and carbide tool materials are considered in this study. The minimization of production time is taken as the basis for formulating the objective function. The constraints considered in this study include power, surface finish, tolerance, work piece rigidity, range of cutting speed, maximum and minimum depths of cut and total depth of
  • 7. 7 cut. Improved mathematical models are formulated by modifying the tolerance and work piece rigidity constraints for multi-pass turning operations. The formulated models are solved by a combination of geometric and linear programming techniques.
Process Optimization Tools: Many relate process optimization directly to the use of statistical techniques to identify the optimum solution. This is not true. Statistical techniques are definitely needed; however, a thorough understanding of the process is required prior to committing time to optimize it. Over the years, many methodologies have been developed for process optimization, including the Taguchi method, six sigma, lean manufacturing and others. All of these begin with an exercise to create the process map.
Taguchi's method: Taguchi's techniques have been used widely in engineering design (Ross 1996 & Phadke 1989). The Taguchi method contains system design, parameter design, and tolerance design procedures to achieve a robust process and result for the best product quality (Taguchi 1987 & 1993). The main thrust of Taguchi's techniques is the use of parameter design (Ealey Lance A. 1994), which is an engineering method for product or process design that focuses on determining the parameter (factor) settings producing the best levels of a quality characteristic (performance measure) with minimum variation. Taguchi designs provide a powerful and efficient method for designing processes that operate consistently and optimally over a variety of conditions. Determining the best design requires the use of a strategically designed experiment, which exposes the process to various levels of the design parameters. Experimental design methods were developed in the early years of the 20th century and have been extensively studied by statisticians since then, but they were not easy for practitioners to use (Phadke 1989). Taguchi's approach to design of experiments is easy to adopt and apply for users with limited knowledge of statistics; hence it has gained wide popularity in the engineering and scientific community.
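One way to see the efficiency that such a designed experiment offers is to compare run counts: for three factors at three levels, as used later in this project, a full factorial design needs 27 runs while the L9 orthogonal array needs only 9. A small illustrative Python sketch of that comparison is given below (the nine-run figure is simply the size of the L9 array, not a computed quantity).

from math import prod

def full_factorial_runs(levels_per_factor):
    # Every combination of every level: the run count is the product of the level counts
    return prod(levels_per_factor)

levels = [3, 3, 3]  # three turning parameters, each at three levels
print("Full factorial runs:", full_factorial_runs(levels))  # 27
print("L9 orthogonal array runs:", 9)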
  • 8. 8 EXPERIMENTAL METHODS
Experiment Design: A lathe machine has been used to cut a cylindrical mild steel block at different process parameters. The process parameters that affect the characteristics of turned parts are:
1) Cutting tool parameters - tool geometry and tool material.
2) Work piece related parameters - hardness, metallography.
3) Cutting parameters - cutting speed, feed, depth of cut.
4) Environmental parameters - dry cutting, wet cutting.
Figure 1. Lathe machine setup for the machining operation
Figure 2. Setting up the (a) cutting speed and (b) feed
The following process parameters were selected for the present work: Cutting Speed (A), Feed (B), Depth of Cut (C); Environment: dry cutting. The ranges of the selected process parameters were ascertained by conducting preliminary experiments using a one-variable-at-a-time approach. The work material used for this experimentation is mild steel, which is widely used in machining experiments.
Table 1: Chemical composition of mild steel
Element   C      Al     Si     Mn    P      Cu    Fe
Wt%       0.16   0.07   0.168  0.18  0.025  0.09  Balance
  • 9. 9 Table 2: Mechanical properties of mild steel
Brinell Hardness Number (BHN)   95
Density                         7.85 × 10³ kg/m³
% Elongation                    20
Tensile Strength, Ultimate      250 MPa
Yield Strength                  318 MPa
Poisson's Ratio                 0.3
Fatigue Strength                90 MPa
Shear Strength                  200 MPa
Experimental setup: The equipment, work piece and tool materials used in performing the tests are listed below.
1. Lathe machine (LeBlond Regal lathe)
2. Work material (cylindrical block made of mild steel)
3. Tool materials
4. Surface roughness tester
5. Vernier calipers
6. Micrometer
Surface roughness tester: Roughness is an important parameter when trying to find out whether a surface is suitable for a certain purpose. Rough surfaces often wear out more quickly than smoother surfaces. Rougher surfaces are normally more vulnerable to corrosion and cracks, but they can also aid adhesion. A roughness tester is used to quickly and accurately determine the surface texture or surface roughness of a material. It shows the measured roughness depth (Rz) as well as the mean roughness value (Ra) in micrometers (µm). Measuring the roughness of a surface involves applying a roughness filter; different international standards and surface finish specifications recommend different roughness filters. For example, a Gaussian filter is often recommended in ISO standards.
Figure 3. Portable surface roughness tester
Table 3: Experimental conditions
Work piece material         Mild steel
Length of the work piece    35 mm
  • 10. 10
Diameter of the work piece  25 mm
Lathe used                  LeBlond Regal lathe
Environment                 Dry
TAGUCHI METHOD: Genichi Taguchi is a Japanese engineer who has been active in the improvement of Japan's industrial products and processes since the late 1940s. He has developed both a philosophy and a methodology for process and product quality improvement that depend mainly on statistical concepts and tools, especially statistically designed experiments. Many Japanese firms have achieved great success by applying his methods. Taguchi has received some of Japan's most prestigious awards for quality achievement, including the Deming Prize. In 1986 he received the International Technology Institute's most prestigious award, the W. F. Rockwell Medal for Excellence in Technology. His major contribution is that he has combined engineering and statistical techniques to achieve rapid improvements in reducing cost and increasing quality by optimizing product design and manufacturing processes. During 1983 Taguchi worked with leading companies and institutes in the USA (Ford Motor Company, Xerox, AT&T, Bell Laboratories, etc.). Taguchi techniques have been called a radical approach to quality, experimental design and engineering. The Taguchi technique covers parameter design, tolerance design, the quality loss function, design of experiments using orthogonal arrays, and methodology applied to evaluate measuring systems. Pignatiello has identified two different aspects of the Taguchi technique: 1) the strategy of Taguchi and 2) the tactics of Taguchi. The Taguchi strategy is the conceptual framework for planning a process or product design experiment; the Taguchi tactics refer to the collection of specific techniques used by Taguchi. Taguchi has addressed design and engineering (off-line) as well as manufacturing (on-line) quality. This differentiates the Taguchi technique from Statistical Process Control (SPC), which is purely an on-line quality control technique. Taguchi's ideas can be reduced to two fundamental concepts: 1) quality losses should be defined as deviation from target, not conformance to arbitrary specifications; and 2) achieving high system quality levels economically requires quality to be designed into the product. Quality is designed, not manufactured, into the product. Taguchi techniques represent a new philosophy: quality is measured by the deviation of a functional characteristic from its target value. Noises (uncontrollable factors) cause such deviations, which result in a loss of quality, and Taguchi techniques seek to remove the effect of noise. The most important part of the Taguchi technique is the quality loss function. Taguchi found that a quadratic function (parabola) approximates the behavior of loss in many cases; when the quality characteristic of interest is to be maximized or minimized, the loss function becomes a half parabola. Loss occurs not only when the product is outside its specification but also when the product falls within its specification. Taguchi recommended the signal-to-noise (S/N) ratio as the performance statistic. Signal refers to the change in the quality characteristic of the product under investigation in response to a factor introduced in the experimental design; noise refers to the effect of external factors (uncontrollable parameters) on the outcome of the quality characteristic. Taguchi started to develop new methods to optimize the process of engineering experimentation.
He believed that the best way to improve quality was to design and build it into the product. He developed the techniques which are now known as Taguchi Methods. His main contribution lies not in the mathematical formulation of the design of experiments, but rather in the accompanying philosophy. His concepts produced a unique and powerful quality improvement technique that differs from traditional practices. He developed manufacturing systems that were “robust” or insensitive to daily and seasonal variations of environment, machine wear and other external factors. The Taguchi approach to quality engineering places a great deal of emphasis on minimizing variation as the main means of improving quality. The idea is to design products and processes whose performance is not affected by outside conditions and to build this in during the development and design stage through the use of experimental design. The method includes a set of tables that enable main variables and interactions to be investigated in a minimum number of trials. Taguchi Method uses the idea of Fundamental Functionality, which will facilitate people to identify the common goal because it will not change from case to case and can provide a robust standard for widely and frequently changing situations. It is also pointed out that the Taguchi Method is also very compatible with the human focused quality evaluation approaches that are coming up. Definition Taguchi has envisaged a new method of conducting the design of experiments which are based on well defined guidelines. This method uses a special set of arrays called orthogonal arrays. These standard arrays stipulates the way of conducting the minimal number of experiments which could give the full information of all the factors that affect the performance parameter. The crux of the orthogonal arrays method lies in choosing the level
  • 11. 11 combinations of the input design variables for each experiment.
A typical orthogonal array
While there are many standard orthogonal arrays available, each of the arrays is meant for a specific number of independent design variables and levels. For example, if one wants to conduct an experiment to understand the influence of 4 different independent variables with each variable having 3 set values (level values), then an L9 orthogonal array might be the right choice. The L9 orthogonal array is meant for understanding the effect of 4 independent factors, each having 3 factor level values. This array assumes that there is no interaction between any two factors. While in many cases the no-interaction assumption is valid, there are some cases where there is clear evidence of interaction; a typical case would be the interaction between material properties and temperature. There are in total 9 experiments to be conducted, and each experiment is based on the combination of level values as shown in the table.
Properties of an orthogonal array
The orthogonal array has the following special properties that reduce the number of experiments to be conducted.
• The vertical column under each independent variable of the above table has a special combination of level settings: all the level settings appear an equal number of times. For the L9 array, under variable 4, level 1, level 2 and level 3 each appear three times. This is called the balancing property of orthogonal arrays.
• All the level values of the independent variables are used for conducting the experiments.
• The sequence of level values for conducting the experiments shall not be changed. This means one cannot conduct experiment 1 with the variable 1, level 2 setup and experiment 4 with the variable 1, level 1 setup. The reason for this is that the columns of the factor array are mutually orthogonal: the inner product of the vectors of level weights for any two columns is zero. If the 3 levels are normalized between -1 and 1, the weighting factors for level 1, level 2 and level 3 are -1, 0 and 1 respectively. Hence the inner product of the weighting factors of independent variable 1 and independent variable 3 is (-1)(-1) + (-1)(0) + (-1)(1) + (0)(0) + (0)(1) + (0)(-1) + (1)(0) + (1)(1) + (1)(-1) = 0.
Minimum number of experiments to be conducted
The design of experiments using the orthogonal array is, in most cases, efficient when compared to many other statistical designs. The minimum number of experiments required for the Taguchi method can be calculated from the degrees-of-freedom approach as Nmin = 1 + Σ(Li − 1), where Li is the number of levels of factor i. For example, in the case of a study with 8 independent variables, having 1 independent variable with 2 levels and the remaining 7 independent variables with 3 levels (the L18 orthogonal array), the minimum number of experiments required by this equation is 16. Because of the balancing property of orthogonal arrays, the total number of experiments must be a multiple of 2 and 3; hence the number of experiments for the above case is 18.
Assumptions of the Taguchi method
The additive assumption implies that the individual or main effects of the independent variables on the performance parameter are separable. Under this assumption, the effect of each factor can be linear, quadratic or of higher order, but the model assumes that there are no cross-product effects (interactions) among the individual factors.
That means the effect of independent variable 1 on the performance parameter does not depend on the level settings of any other independent variable, and vice versa. If, at any time, this assumption is violated, then the additivity of the main effects does not hold, and the variables interact.
Designing an experiment
The design of an experiment involves the following steps:
1 Selection of independent variables
  • 12. 12
2 Selection of the number of level settings for each independent variable
3 Selection of the orthogonal array
4 Assigning the independent variables to each column
5 Conducting the experiments
6 Analyzing the data
7 Inference
The details of the above steps are given below.
Selection of the independent variables
Before conducting the experiment, knowledge of the product/process under investigation is of prime importance for identifying the factors likely to influence the outcome. In order to compile a comprehensive list of factors, the input to the experiment is generally obtained from all the people involved in the project.
Deciding the number of levels
Once the independent variables are decided, the number of levels for each variable is decided. The selection of the number of levels depends on how the performance parameter is affected by different level settings. If the performance parameter is a linear function of the independent variable, then the number of level settings shall be 2. However, if the independent variable is not linearly related, then one could go for 3, 4 or more levels depending on whether the relationship is quadratic, cubic or of higher order. In the absence of an exact relationship between the independent variable and the performance parameter, one could choose 2 level settings. After analyzing the experimental data, one can decide whether the assumed number of levels is right or not based on the percent contribution and the error calculations.
Selection of an orthogonal array
Before selecting the orthogonal array, the minimum number of experiments to be conducted shall be fixed based on the total number of degrees of freedom [5] present in the study. The minimum number of experiments that must be run to study the factors shall be more than the total degrees of freedom available. In counting the total degrees of freedom, the investigator commits 1 degree of freedom to the overall mean of the response under study. The number of degrees of freedom associated with each factor under study equals one less than the number of levels available for that factor. Hence the total degrees of freedom without interaction effects is 1 + Σ(Li − 1), as already given by equation 2.1. For example, in the case of 11 independent variables, each having 2 levels, the total degrees of freedom are 12; hence the selected orthogonal array shall have at least 12 experiments, and an L12 orthogonal array satisfies this requirement. Once the minimum number of experiments is decided, the further selection of the orthogonal array is based on the number of independent variables and the number of factor levels for each independent variable.
Assigning the independent variables to columns
The order in which the independent variables are assigned to the vertical columns is very essential. In the case of mixed-level variables and interactions between variables, the variables are to be assigned to the right columns as stipulated by the orthogonal array [3]. Finally, before conducting the experiment, the actual level values of each design variable shall be decided. It shall be noted that the significance and the percent contribution of the independent variables change depending on the level values assigned. It is the designer's responsibility to set proper level values.
Conducting the experiment
Once the orthogonal array is selected, the experiments are conducted as per the level combinations. It is necessary that all the experiments be conducted.
The interaction columns and dummy variable columns shall not be considered for conducting the experiment, but are needed while analyzing the data to understand the interaction effect. The performance parameter under study is noted down for each experiment to conduct the sensitivity analysis. Analysis of the data
  • 13. 13 Since each experiment is a combination of different factor levels, it is essential to segregate the individual effect of each independent variable. This can be done by summing up the performance parameter values for the corresponding level settings. For example, in order to find out the main effect of the level 1 setting of independent variable 2, sum the performance parameter values of experiments 1, 4 and 7. Similarly, for level 2, sum the experimental results of 2, 5 and 8, and so on. Once the mean value of each level of a particular independent variable is calculated, the sum of squares of the deviations of each mean value from the grand mean value is calculated. This sum of squares for a particular variable indicates whether the performance parameter is sensitive to a change in its level setting. If the sum of squares is close to zero or insignificant, one may conclude that the design variable does not influence the performance of the process. In other words, by conducting this sensitivity analysis and performing analysis of variance (ANOVA), one can decide which independent factor dominates over the others and the percentage contribution of that particular independent variable. The details of analysis of variance are dealt with separately.
Inference
From the above experimental analysis, it is clear that the higher the value of the sum of squares of an independent variable, the more influence it has on the performance parameter. One can also calculate the ratio of the individual sum of squares of a particular independent variable to the total sum of squares of all the variables. This ratio gives the percent contribution of the independent variable to the performance parameter. In addition to the above, one could find the near-optimal solution to the problem. This near-optimum value may not be the global optimal solution; however, it can be used as an initial/starting value for a standard optimization technique.
Table 4: Process parameters with their values at 3 levels
Process Parameter      Designation   L1      L2      L3
Speed (rpm)            A             139     212     318
Feed (mm/rev)          B             0.114   0.091   0.078
Depth of Cut (mm)      C             0.317   0.635   0.952
The experimental layout was developed based on Taguchi's orthogonal array experimentation technique. An L9 orthogonal array experimental layout was selected to satisfy the minimum number of experiment conditions for the factors and levels presented in Table 5.
Table 5: Factors, levels and degrees of freedom
Factor Code   Factor         No. of Levels   Degrees of Freedom
A             Speed          3               2
B             Feed           3               2
C             Depth of Cut   3               2
Total degrees of freedom: 6
Minimum number of experiments: 7
Table 6: Standard L9 orthogonal array design of experiments
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)
1           1             1               1
2           1             2               2
3           1             3               3
4           2             1               2
5           2             2               3
6           2             3               1
7           3             1               3
8           3             2               1
9           3             3               2
Table 7: Standard L9 array with observations
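The properties and bookkeeping described in the preceding sections can be checked mechanically against the L9 layout of Table 6. The Python sketch below verifies the balancing and orthogonality properties (using the -1/0/+1 level weighting introduced earlier), confirms the minimum-experiment count of Table 5, and then illustrates the level-mean and sum-of-squares calculation on a purely hypothetical response column; the measured Ra and MRR values appear in the tables that follow, so this sketch is not a reproduction of the project's analysis. Note that standard ANOVA additionally scales each factor's sum of squares by the number of observations per level, which is omitted here for brevity.

from itertools import combinations
from statistics import mean

# L9 layout from Table 6: rows are trials, columns are Speed, Feed, DOC level indices
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]
WEIGHT = {1: -1, 2: 0, 3: 1}  # levels normalized to -1, 0, +1
RESPONSE = [4.1, 5.0, 6.2, 5.5, 6.1, 4.8, 6.0, 5.2, 6.6]  # hypothetical response values

def is_balanced(array):
    # Balancing property: every level appears equally often in every column
    return all(len({sum(1 for row in array if row[c] == lvl) for lvl in (1, 2, 3)}) == 1
               for c in range(len(array[0])))

def columns_orthogonal(array):
    # Orthogonality: zero inner product of the weighted columns for every column pair
    return all(sum(WEIGHT[row[a]] * WEIGHT[row[b]] for row in array) == 0
               for a, b in combinations(range(len(array[0])), 2))

def minimum_experiments(levels_per_factor):
    # One degree of freedom for the overall mean plus (levels - 1) per factor (cf. Table 5)
    return 1 + sum(n - 1 for n in levels_per_factor)

def level_means(col):
    # Mean response at each level of one factor (the main-effect table)
    return {lvl: mean(r for row, r in zip(L9, RESPONSE) if row[col] == lvl)
            for lvl in (1, 2, 3)}

def sum_of_squares(col):
    # Sum of squared deviations of the level means from the grand mean
    grand = mean(RESPONSE)
    return sum((m - grand) ** 2 for m in level_means(col).values())

print("balanced:", is_balanced(L9))               # True
print("orthogonal:", columns_orthogonal(L9))      # True
print("minimum experiments:", minimum_experiments([3, 3, 3]))  # 7, satisfied by the 9 L9 runs
ss = {name: sum_of_squares(c) for c, name in enumerate(("Speed", "Feed", "DOC"))}
for name, s in ss.items():
    print(f"{name}: SS = {s:.3f}, contribution = {100 * s / sum(ss.values()):.1f}%")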
  • 14. 14 SURFACE ROUGHNESS ANALYSIS
Surface roughness, often shortened to roughness, is a component of surface texture. It is quantified by the deviations, in the direction of the normal vector, of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically considered to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and the frequency to ensure that a surface is fit for purpose. Roughness plays an important role in determining how a real object will interact with its environment. In tribology, rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking, rather than scale-specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces, including contact stiffness and static friction. A portable surface roughness tester is used after every trial to get the surface roughness value in µm. The data observed for surface roughness for the 9 experiments are given below:
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)   SR (µm)
1           139           0.114           0.317      0.74
2           139           0.091           0.635      1.52
3           139           0.078           0.952      0.37
4           212           0.114           0.635      4.32
5           212           0.091           0.952      5.68
6           212           0.078           0.317      1.53
7           318           0.114           0.952      3.72
8           318           0.091           0.317      4.92
9           318           0.078           0.635      6.53
Regression analysis generates an equation to describe the statistical relationship between one or more predictor variables and the response variable. After we use Minitab Statistical Software to fit a regression model, and verify the fit by checking the residual plots, we will want to interpret the results. The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that we can reject the null hypothesis; in other words, a predictor that has a low p-value is likely to be a meaningful addition to our model because changes in the predictor's value are related to changes in the response variable. Conversely, a larger (insignificant) p-value suggests that changes in the predictor are not associated with changes in the response. Using Minitab software, the regression model has been developed for the above experiment.
Table 8: General Linear Model for surface roughness Ra
Factor   Type    Levels   Values
Speed    Fixed   3        139, 212, 318
Feed     Fixed   3        0.114, 0.091, 0.078
DOC      Fixed   3        0.317, 0.635, 0.952
The regression equation is:
Ra (µm) = -8.41 + 0.029*Speed (rpm) + 24.058*Feed (mm/rev) + 6.049*DOC (mm)
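For convenience, the reported regression equation can be wrapped in a small function to predict Ra at any candidate setting within the tested ranges. The Python sketch below simply evaluates the coefficients quoted above; it does not re-derive the Minitab fit, and the example setting (the level-2 values from Table 4) is used purely for illustration.

def predict_ra(speed_rpm, feed_mm_rev, doc_mm):
    # Surface roughness predicted by the regression equation reported above
    return -8.41 + 0.029 * speed_rpm + 24.058 * feed_mm_rev + 6.049 * doc_mm

# Example: the level-2 settings from Table 4
print(f"Predicted Ra: {predict_ra(212, 0.091, 0.635):.2f} um")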
  • 15. 15 MATERIAL REMOVAL RATE ANALYSIS
Material Removal Rate (MRR), otherwise known as Metal Removal Rate, is the measurement for how much material is removed from a part in a given period of time. The material removal rate can be calculated from the volume of material removal or from the weight difference before and after machining. It is an indication of how fast or slow the machining rate is and an important performance parameter in micro-EDM, as this is usually a very slow process. Higher machining productivity must also be achieved with a desired accuracy and surface finish. The MRR greatly depends on the process parameters. A higher value of discharging voltage, peak current, pulse duration, duty cycle, and lower values of pulse interval can result in higher MRR. In addition to these electrical parameters, other nonelectrical parameters and material properties have significant influence on MRR. The MRR for turning is calculated as:
MRR = Speed * Feed * DOC
The data observed for material removal rate for the 9 experiments are given below:
Trial No.   Speed (rpm)   Feed (mm/rev)   DOC (mm)   MRR (mm³/min)
1           139           0.114           0.317      16.12
2           139           0.091           0.635      20.03
3           139           0.078           0.952      25.61
4           212           0.114           0.635      18.98
5           212           0.091           0.952      23.31
6           212           0.078           0.317      10.96
7           318           0.114           0.952      20.63
8           318           0.091           0.317      8.04
9           318           0.078           0.635      12.24
Using Minitab software, the regression model has been developed for the above experiment.
Table 9: General Linear Model for MRR
Factor   Type    Levels   Values
Speed    Fixed   3        139, 212, 318
Feed     Fixed   3        0.114, 0.091, 0.078
DOC      Fixed   3        0.317, 0.635, 0.952
The regression equation is:
MRR = 7.67 - 0.036*Speed (rpm) + 70*Feed (mm/rev) + 17.26*DOC (mm)
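As with surface roughness, the quoted MRR regression can be evaluated directly for any candidate setting. The sketch below only plugs values into the coefficients reported above (result in mm³/min, as in the table); the example setting (the level-2 values from Table 4) is illustrative.

def predict_mrr(speed_rpm, feed_mm_rev, doc_mm):
    # MRR predicted by the regression equation reported above, in mm^3/min
    return 7.67 - 0.036 * speed_rpm + 70.0 * feed_mm_rev + 17.26 * doc_mm

# Example: the level-2 settings from Table 4
print(f"Predicted MRR: {predict_mrr(212, 0.091, 0.635):.2f} mm^3/min")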
  • 16. 16 CONCLUSION
Following are the conclusions drawn based on the tests conducted on the cylindrical mild steel block during the turning operation on a lathe machine.
1. From the results obtained, regression models have been developed for surface roughness and material removal rate. From these equations we can predict the value of surface roughness and MRR if the values of cutting speed, feed and depth of cut are known.
2. The validation experiment confirmed that the error between the predicted and actual values was less than 2.0 %.
3. The optimal settings of the process parameters for optimal surface roughness and MRR are: Speed (139 rpm), Feed (0.078 mm/rev) and DOC (0.952 mm). The SR and MRR for these conditions are found to be 0.37 µm and 25.61 mm³/min respectively.
This project shows how Taguchi's parameter design can be used to obtain the optimum condition with the lowest cost and a minimum number of experiments, and industrial engineers can use this method. The work can be extended to other materials and to additional parameters such as tool nose radius, lubricant and material hardness.
REFERENCES
1 Kansal, H. K., Singh, S., and Kumar, P. Effect of silicon powder mixed EDM on machining rate of AISI D2 die steel. J. Manuf. Process., 2007, 9, 13–21.
2 Kumar, S., Singh, R., Singh, T. P., and Sethi, B. L. Comparison of material transfer in electrical discharge machining of AISI H13 die steel. Proc. Inst. Mech. Engrs, Part C: J. Mech. Engng Sci., 2009, 223(7), 1733–1740.
3 Uno, Y., Okada, A., and Cetin, S. Surface modification of EDMed surface with powder mixed fluid. Proc. 2nd Int. Conf. on Design and Production of Dies and Molds, 2001.
4 Pecas, P. and Henriques, E. Effect of powder concentration and dielectric flow in the surface morphology in electrical discharge machining with powder mixed dielectric (PMD-EDM). Int. J. Adv. Manuf. Technol., 2008, 37, 1120–1132.
5 Pecas, P. and Henriques, E. Influence of silicon powder-mixed dielectric on conventional electrical discharge machining. Int. J. Machine Tools Manuf., 2003, 43, 1465–1471.
6 Wong, Y. S., Lim, L. C., Rahuman, I., and Tee, W. M. Near-mirror-finish phenomena in EDM using powder-mixed dielectric. J. Mater. Process. Technol., 1998, 79, 30–40.
7 Wu, K. L., Yan, B. H., Huang, F. Y., and Chen, S. C. Improvement of surface finish on SKD steel using electro-discharge machining with aluminum and surfactant added dielectric. Int. J. Machine Tools Manuf., 2005, 45, 1195–1201.
8 Jeswani, M. L. Effect of the addition of graphite powder to kerosene used as a dielectric fluid in electrical discharge machining. Int. J. Mater. Process. Technol., 1981, 70, 133–139.
9 Furutani, K., Saneto, A., Takezawa, M. N., and Miyake, H. Accretion of titanium carbide by electrical discharge machining with powder suspended in working fluid. J. Int. Soc. Precision Engng Nanotech., 2001, 25, 138–144.
10 Çoğun, C., Özerkan, B., and Karaçay, T. An experimental investigation on the effect of powder mixed dielectric on machining performance in electric discharge machining. Proc. Inst. Mech. Engrs, Part B: J. Engng Manuf., 2006, 220(7), 1035–1050.
11 Kozak, J., Rozenek, M., and Dabrowski, L. Study of electrical discharge machining using powder suspended working media. Proc. Inst. Mech. Engrs, Part B: J. Engng Manuf., 2003, 217(11), 1597–1602.