Accepted Manuscript

ε-constraint heat transfer search (ε-HTS) algorithm for solving multi-objective engineering design problems

Mohamed A. Tawhid, Vimal Savsani

To appear in:

Journal of Computational Design and Engineering

PII: S2288-4300(17)30026-X; DOI: http://dx.doi.org/10.1016/j.jcde.2017.06.003; Reference: JCDE 98

Received Date: 2 March 2017
Revised Date: 5 June 2017
Accepted Date: 21 June 2017

Please cite this article as: M.A. Tawhid, V. Savsani, ε-constraint heat transfer search (ε-HTS) algorithm for solving multi-objective engineering design problems, Journal of Computational Design and Engineering (2017), doi: http://dx.doi.org/10.1016/j.jcde.2017.06.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

ε-constraint heat transfer search (ε-HTS) algorithm for solving multi-objective engineering design problems

Mohamed A. Tawhid 1,2 and Vimal Savsani 3,4

1Department of Mathematics and Statistics, Faculty of Science, Thompson Rivers University, Kamloops, BC, Canada V2C 0C8 (Email: mtawhid@tru.ca)


2Department of Mathematics and Computer Science, Faculty of Science, Alexandria University,

Moharam Bey 21511, Alexandria, Egypt

3Department of Mechanical Engineering, Pandit Deendayal Petroleum University, Gandhinagar,

Gujarat, India

4Postdoctoral Fellow, Department of Mathematics and Statistics, Faculty of Science, Thompson Rivers University, Kamloops, BC, Canada V2C 0C8 (Email: vsavsani@tru.ca,

vimal.savsani@gmail.com)

Abstract:


In this paper, an effective ε-constraint heat transfer search (ε-HTS) algorithm for multi-objective engineering design problems is presented. The algorithm solves multi-objective optimization problems by evaluating a set of single-objective sub-problems. Its effectiveness is checked by implementing it on multi-objective benchmark problems whose Pareto fronts have various characteristics, such as discrete, convex, and non-convex. The algorithm is also tested on several distinctive multi-objective engineering design problems, namely the four-bar truss problem, gear train problem, multi-plate disc brake design, speed reducer problem, welded beam design, and spring design problem. Moreover, numerical experimentation shows that the proposed algorithm generates solutions that represent the true Pareto front.

Keywords: Multi-objective optimization, heat transfer search, design optimization, Pareto front

1. Introduction:

The objective of this paper is to solve optimization problems that have more than one objective function. Such problems are recognized as multi-objective optimization problems (MOOPs). Solving MOOPs is a strenuous and difficult task because their multiple objective functions are often conflicting. MOOPs are significant tools in many engineering disciplines, as well as in finance, management science, economics, and other fields (Goicoechea et al., 1982; Andersson, 2000; Coello et al., 2005; Marler & Arora, 2004; Li and Zhang, 2009). Several applications of MOOPs exist in engineering that help the decision maker or the design engineer select an appropriate system or design from the available set of solutions obtained as a Pareto front. It is not possible to list all the applications of MOOPs in this work, but many recent applications (Ahmadi et al., 2015; Sadatsakkak et al., 2015; Ahmadi et al., 2016a; Ahmadi et al., 2016b; Qu et al., 2016; Ahmadi et al., 2016c; Cavaliere et al., 2016; Gadhavi et al., 2016; Bandaru et al., 2017; Zhou et al., 2011; Luna et al., 2010; Mondal et al., 2013; Moslehi & Mahnam, 2011) are available in the literature to highlight the importance of MOOPs in the engineering domain.

While single-objective optimization problems (SOOPs) have a single objective to be optimized at a given time, whose solution is normally a single optimal point, in MOOPs an optimal solution that concurrently optimizes all the objective functions does not exist.

Thus, decision makers solving MOOPs seek the ''most preferred" solution rather than a single optimal solution, i.e., the set of best trade-offs between the objective functions. Pareto optimal (or non-inferior, non-dominated, efficient) solutions are those that cannot be improved in one objective function without worsening at least one of the others. The set of Pareto optimal solutions is the Pareto set. Most methods for solving MOOPs aim to find Pareto front solutions as near as possible to the optimal Pareto front while maintaining variation among the optimal set of solutions.

The weighted sum method is the simplest popular method that solves a multi-objective optimization problem via different single-objective sub-problems (Ehrgott and Gandibleux, 2002). These sub-problems are generated by linear combinations of the objectives. Various combinations of the weights generate various non-dominated solutions. The main drawback of this method is that it cannot find solutions on non-convex parts of the objective space. It is also very difficult to solve a problem in which the values of the objectives differ significantly in magnitude.
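To make the idea concrete, the following sketch applies weighted-sum scalarization to a toy two-objective problem; the choice of the SCH objectives (f1 = x², f2 = (x − 2)²) and the simple grid search are illustrative assumptions, not part of the cited method:

```python
import numpy as np

def f1(x): return x**2
def f2(x): return (x - 2.0)**2

xs = np.linspace(-2.0, 4.0, 6001)           # candidate solutions on a grid
front = []
for w in np.linspace(0.0, 1.0, 11):         # one sub-problem per weight
    scalar = w * f1(xs) + (1 - w) * f2(xs)  # linear combination of objectives
    x_best = xs[np.argmin(scalar)]          # minimizer of the scalarized problem
    front.append((f1(x_best), f2(x_best)))
```

Each weight vector yields one non-dominated point; on a non-convex front, some Pareto-optimal points cannot be produced by any weight combination, which is the drawback noted above.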

Several researchers have overcome the disadvantages of the weighted sum method and solved problems with non-convex objective spaces using the ε-constraint method (Khalili and Amiri, 2012; Aghaei et al., 2011). This method also formulates the multi-objective problem as different single-objective sub-problems, by converting all the objectives except one into constraints. The objectives that are converted into constraints are restricted by different values of ε, which is why this method is known as the ε-constraint method. Different values of ε produce different non-dominated solutions in the objective space.

Recently, many researchers have developed successful and powerful algorithms, called evolutionary algorithms (EAs), for finding the global optimum of complex MOOPs. Algorithms that solve MOOPs using EAs as their basic search techniques are known as multi-objective evolutionary algorithms (MOEAs). Many methods that work on the strategy of EAs are: vector evaluated genetic algorithms (VEGA) (Schaffer, 1985; Zitzler and Thiele, 1999), multi-objective genetic algorithm (MOGA) (Knowles and Corne, 1999), non-dominated sorting genetic algorithm (NSGA) (Srinivas and Deb, 1994), niched Pareto genetic algorithm (NPGA) (Corne et al., 2001), strength Pareto evolutionary algorithm (SPEA and SPEA2) (Zhou et al., 2011; Zitzler and Thiele, 1999; Zitzler et al., 2001), Pareto archived evolution strategy (PAES) (Knowles and Corne, 1999), Pareto envelope-based selection algorithm (PESA and PESA-II) (Corne et al., 2001), the elitist non-dominated sorting genetic algorithm II (NSGA-II) (Deb et al., 2002), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) (Zhang and Li, 2007), and its improved versions described in detail in (Zhou et al., 2011). Apart from EAs, swarm intelligence based multi-objective optimization algorithms have been developed and applied to a variety of MOOPs (Coello and Lechuga, 2002; Mostaghim and Teich, 2004; Agrawal et al., 2008). Several other multi-objective algorithms utilize the search techniques of particle swarm optimization (PSO) (Reyes and Coello, 2006; Moslehi and Manham, 2011; Wang and Yang, 2009), artificial bee colony (ABC) (Omkar et al., 2011; Akbari et al., 2012; Zhang et al., 2012), artificial immune system algorithm (AISA) (Tan et al., 2008), ant colony optimization (ACO) (Angus, 2009; Yagmahan and Yenisey, 2008), artificial immune algorithm (AIA) (Aydin, 2011; Gong et al., 2008), gravitational search algorithm (GSA) (Mondal et al., 2013), biogeography-based optimization (BBO) (Jamuna and Swarup, 2012; Roy et al., 2010), invasive weed optimization (IWO) (Nikoofard et al., 2012), firefly algorithm (FFA) (Yang, 2013), cuckoo search algorithm (CSA) (Yang and Deb, 2013), bat algorithm (BA) (Yang, 2011), and teaching-learning based optimization (TLBO) (Krishnanand et al., 2011; Patel and Savsani, 2014a, 2014b). All these methods have shown successful evidence of getting the Pareto front close to the true Pareto front.

HTS is an effective meta-heuristic developed in 2015 (Patel and Savsani, 2015). This method has proved its effectiveness and robustness in terms of accuracy, computational effort, and convergence for single-objective optimization. HTS works on the natural phenomena of heat transfer by conduction, convection, and radiation to maintain equilibrium with the surroundings. This paper focuses on developing a novel technique by combining the ε-constraint multi-objective optimization method with the recently developed heat transfer search (HTS) meta-heuristic. The proposed method is denoted the ε-HTS method. Later in this paper it is shown that ε-HTS can generate non-dominated solutions and the true Pareto front (PF). The first contribution of this paper is to introduce a new concept of the ε-constraint method with the HTS algorithm. The second contribution is to show the correctness of the ε-HTS method for engineering design problems. The proposed method is investigated on different multi-objective benchmark functions and engineering design problems, and is compared with other well-known multi-objective optimization algorithms.

The organization of this paper is as follows: Section 2 defines multi-objective optimization and Pareto concepts. Section 3 describes the heat transfer search (HTS) algorithm in detail. Section 4 covers the basics of the ε-HTS algorithm. Section 5 investigates the performance of ε-HTS on different benchmark problems, followed by Section 6, which investigates its performance on different engineering design problems. Section 7 gives the general conclusion.

2. Multi-objective optimization problems:

A general multi-objective problem consists of several objective functions and is associated with several constraints. Mathematically, the problem can be formulated as (Berube et al., 2009; Laumanns et al., 2006; Miettinen, 2012):

min f(x) = [f1(x), f2(x), ..., fn(x)], x = [x1, x2, ..., xk] ∈ S

subject to: gi(x) ≤ 0, i = 1, 2, ..., I
            hj(x) = 0, j = 1, 2, ..., J
            (x)l ≤ x ≤ (x)u

where S is the set of feasible solutions, referred to as the solution space; f1(x), f2(x), ..., fn(x) are the individual objective functions; gi(x) are inequality constraints; hj(x) are equality constraints; and (x)l and (x)u are bound constraints indicating the lower and upper limits of the design variables.

The objective space is defined by T = {f = (f1, f2, ..., fn) : fi = fi(x), ∀x ∈ S, i = 1, 2, ..., n}.

Since there is no solution that optimizes all objectives at the same time, one searches for a suitable trade-off instead of a single optimal solution. Although some solutions might be considered equivalent, this trade-off must be such that no strictly better solution exists. This involves a partial order of the objective space, defined by a 'dominance relation', which is used to characterize 'Pareto optimality or efficiency'. Let f1, f2 ∈ T. f1 dominates f2 (denoted f1 ≻ f2) if and only if f1i ≤ f2i for all i = 1, 2, ..., n, where at least one inequality is strict.

x ∈ S is Pareto optimal (non-dominated) in S if and only if there is no x' ∈ S such that f(x') ≻ f(x). The set of all Pareto-optimal design variables is called the Pareto set, PS = {x ∈ S : x is Pareto optimal in S}.

PF = {f(x) : x ∈ PS} is denoted the Pareto front.

It can be noted that PS is defined on the solution space whereas PF is defined on the objective space.
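The dominance relation and the extraction of a non-dominated set defined above can be sketched as follows (a minimal illustration for minimization; the function names are our own):

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the points (1, 3), (2, 2), (3, 1), and (2, 3), the last one is removed because (2, 2) dominates it.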

3. Heat transfer search (HTS) algorithm:

Patel and Savsani (2015) proposed the HTS algorithm, which is based on the natural laws of thermodynamics and the principle of thermal equilibrium of a system. Thermodynamically imbalanced systems always try to attain thermal equilibrium by initiating heat transfer between the system and its surroundings. The modes of heat transfer (viz. conduction, convection, and radiation) play an essential role in establishing thermal equilibrium. Therefore, the HTS algorithm considers 'the conduction phase', 'the convection phase', and 'the radiation phase' to reach an equilibrium state. In the HTS algorithm, all three modes of heat transfer have equal probability to transfer heat, and one of the heat transfer modes is selected randomly for each generation during the course of optimization.

The HTS algorithm is a population-based algorithm that starts with a randomly generated population, where the system has 'n' molecules (i.e., the population size) and 'm' temperature levels (i.e., design variables). In the next stage, the population is updated in each generation 'g' by one randomly selected heat transfer mode. The updated solution in the HTS algorithm is accepted only if it has a better functional value. Afterwards, the worst solutions of the population are replaced by the elite solutions, and identical solutions, if they exist, are replaced by randomly generated solutions. Thus, a better solution can be obtained by taking the difference between the current solution and either the best solution, another random solution, or the mean value of the solutions in the population. The details of all three phases of the HTS algorithm are described as follows.

In the conduction phase, heat transfer happens because of conduction between molecules of the substance. Thus, more energetic molecules transfer heat to less energetic molecules to reach a state of thermal equilibrium. Conduction can also take place between the system and the surroundings when both are in direct physical contact with each other. Conduction heat transfer is governed by Fourier's law of heat conduction. The mathematical formulation of the updated molecule is given in Equations 1 and 2.

T'_{j,i} = { T_{k,i} + (−R² · T_{k,i}), if F(T_j) > F(T_k)
          { T_{j,i} + (−R² · T_{j,i}), if F(T_j) < F(T_k)     , if g ≤ g_max/CDF     (1)

T'_{j,i} = { T_{k,i} + (−r_i · T_{k,i}), if F(T_j) > F(T_k)
          { T_{j,i} + (−r_i · T_{j,i}), if F(T_j) < F(T_k)     , if g > g_max/CDF     (2)

where T'_{j,i} is the updated molecule; j = 1, 2, ..., n; k is a randomly selected molecule, j ≠ k, k ∈ (1, 2, ..., n); i is a randomly selected design variable, i ∈ (1, 2, ..., m); FE is the function evaluation count; CDF is the conduction factor; R is the probability variable, R ∈ [0, 0.3333]; r_i is a random number, r_i ∈ [0, 1]; R² and r_i represent the conductance parameters of Fourier's equation; T_j and T_k represent the temperatures of the two molecules; and CDF is set to 2, which balances the exploration and exploitation capabilities of the conduction phase. In this phase, only one design variable is updated in each generation during the course of optimization.

In the convection phase, heat transfer happens because of convection between the system and the adjacent fluid in motion. Thus, the system temperature (the mean temperature) interacts with the adjacent fluid temperature (the surrounding) to reach a state of thermal equilibrium. The best solution is considered as the surrounding. Convection heat transfer is governed by Newton's law of cooling. This phase is presented in Equations 3 and 4.

T'_{j,i} = T_{j,i} + R · (T_s − T_ms · TCF)     (3)

TCF = { abs(R − r_i), if g ≤ g_max/COF
      { round(1 + r_i), if g > g_max/COF     (4)

where T'_{j,i} is the updated solution; j = 1, 2, ..., n; i = 1, 2, ..., m; FE is the function evaluation count; COF is the convection factor; R is the probability variable, R ∈ [0.6666, 1]; r_i is a random number, r_i ∈ [0, 1]; R and r_i represent the convection parameters of Newton's law of cooling; T_s and T_ms represent the surrounding temperature and the mean temperature of the system, respectively; TCF is the temperature change factor, which balances the exploration and exploitation capabilities of the convection phase; and COF is set to 10. In this phase, all the design variables are updated in each generation during the course of optimization.

In the radiation phase, heat transfer happens through radiation emitted in the form of electromagnetic waves (or photons) owing to the temperature level. Thus, the system interacts with the surrounding temperature (i.e., the best solution) or within the system (i.e., another solution) to reach a state of thermal equilibrium. All bodies at a temperature above absolute zero emit radiation. The maximum rate of radiation heat transfer is determined by the absolute temperature level and is described by the Stefan-Boltzmann law. The updated population is given in Equations 5 and 6.

T'_{j,i} = { T_{j,i} + R · (T_{k,i} − T_{j,i}), if F(T_j) > F(T_k)
          { T_{j,i} + R · (T_{j,i} − T_{k,i}), if F(T_j) < F(T_k)     , if g ≤ g_max/RDF     (5)

T'_{j,i} = { T_{j,i} + r_i · (T_{k,i} − T_{j,i}), if F(T_j) > F(T_k)
          { T_{j,i} + r_i · (T_{j,i} − T_{k,i}), if F(T_j) < F(T_k)     , if g > g_max/RDF     (6)

where T'_{j,i} is the updated solution; j = 1, 2, ..., n; i = 1, 2, ..., m; j ≠ k; k ∈ (1, 2, ..., n); g is the current generation and g_max is the specified maximum number of generations; k is a randomly selected molecule; R is the probability variable, R ∈ [0.3333, 0.6666]; R and r_i represent the radiation parameters of the Stefan-Boltzmann law; r_i is a random number, r_i ∈ [0, 1]; T_j and T_k represent the temperatures of the system and the surrounding, respectively; and RDF is the radiation factor, set to 2, which balances the exploration and exploitation capabilities of the radiation phase. In this phase, all the design variables are updated in each generation during the course of optimization.

The HTS algorithm runs three phases: 'the conduction phase', 'the convection phase', and 'the radiation phase'. Each phase is divided into two sub-phases, which are controlled by the function evaluations and the factors of the heat transfer phases. Thus, the value of a design variable can change by a small or a large amount, as all three modes of heat transfer have equal probability to transfer heat. Large and small changes of the design variables represent exploration and exploitation of the search space, respectively. The procedure for the basic HTS is given in Algorithm 1.

Algorithm 1: basic Heat Transfer Search (HTS)

Initialize population size (n), number of design variables (m), limits on design variables (L, U), stopping criteria (FE_max, g_max), CDF, COF, RDF   /* initialization */
T_{j,i} = L_i + rand · (U_i − L_i), for ∀j ∈ [1, n], ∀i ∈ [1, m]   /* initialize population */
FE = 0
Evaluate the population and arrange it in ascending order → FE = n
while (g ≤ g_max and FE ≤ FE_max)   /* begin the optimization loop */
    R = rand ∈ [0, 1]   /* R decides the probability for the selection of phases */
    for j = 1 : n
        if R ≤ 0.3333   /* update the population in the conduction phase */
            k ∈ [1, n], k ≠ j   /* select any random solution */
            i ∈ [1, m]   /* select any random design variable */
            if F(T_j) > F(T_k)
                if g ≤ g_max/CDF → T'_{j,i} = T_{k,i} + (−R² · T_{k,i}) else → T'_{j,i} = T_{k,i} + (−r_i · T_{k,i})
                Evaluate F(T'_j) → FE = FE + 1
                F(T'_j) < F(T_j) → T_j = T'_j   /* greedy selection */
            else
                if g ≤ g_max/CDF → T'_{k,i} = T_{j,i} + (−R² · T_{j,i}) else → T'_{k,i} = T_{j,i} + (−r_i · T_{j,i})
                Evaluate F(T'_k) → FE = FE + 1
                F(T'_k) < F(T_k) → T_k = T'_k   /* greedy selection */
            end if
        else if 0.3333 < R ≤ 0.6666   /* update the population in the radiation phase */
            k ∈ [1, n], k ≠ j   /* select any random solution */
            if F(T_j) > F(T_k)
                if g ≤ g_max/RDF → ∀i : T'_{j,i} = T_{j,i} + R · (T_{k,i} − T_{j,i}) else → ∀i : T'_{j,i} = T_{j,i} + r_i · (T_{k,i} − T_{j,i})
            else
                if g ≤ g_max/RDF → ∀i : T'_{j,i} = T_{j,i} + R · (T_{j,i} − T_{k,i}) else → ∀i : T'_{j,i} = T_{j,i} + r_i · (T_{j,i} − T_{k,i})
            end if
            Evaluate F(T'_j) → FE = FE + 1
            F(T'_j) < F(T_j) → T_j = T'_j   /* greedy selection */
        else   /* update the population in the convection phase */
            if g ≤ g_max/COF → ∀i : T'_{j,i} = T_{j,i} + R · (T_{s,i} − T_{ms,i} · abs(R − r_i))
            else → ∀i : T'_{j,i} = T_{j,i} + R · (T_{s,i} − T_{ms,i} · round(1 + r_i))
            Evaluate F(T'_j) → FE = FE + 1
            F(T'_j) < F(T_j) → T_j = T'_j   /* greedy selection */
        end if
    end for
    for j = 1 : n − 1   /* remove duplicate solutions */
        if T_j = T_{j+1} → T_{j+1,i} = L_i + rand · (U_i − L_i) for a randomly selected i → FE = FE + 1
    end for
end while
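As an illustration of Algorithm 1, the following is a simplified, runnable single-objective sketch of the three HTS phases following Equations (1)-(6). The sphere test function, the per-phase simplifications, and all names are illustrative assumptions; elitism and duplicate removal from the full algorithm are omitted for brevity:

```python
import numpy as np

def hts(f, lb, ub, n=20, g_max=100, seed=0, CDF=2, COF=10, RDF=2):
    """Minimal single-objective Heat Transfer Search sketch.
    f: objective to minimize; lb/ub: bound arrays of length m."""
    rng = np.random.default_rng(seed)
    m = len(lb)
    pop = lb + rng.random((n, m)) * (ub - lb)   # random initial population
    fit = np.array([f(x) for x in pop])
    for g in range(1, g_max + 1):
        R = rng.random()                        # selects the heat-transfer phase
        for j in range(n):
            trial = pop[j].copy()
            if R <= 1/3:                        # conduction: one variable changes
                k = rng.choice([t for t in range(n) if t != j])
                i = rng.integers(m)
                factor = R**2 if g <= g_max / CDF else rng.random()
                src = k if fit[j] > fit[k] else j
                trial[i] = pop[src][i] - factor * pop[src][i]
            elif R <= 2/3:                      # radiation: move relative to a peer
                k = rng.choice([t for t in range(n) if t != j])
                step = R if g <= g_max / RDF else rng.random()
                if fit[j] > fit[k]:
                    trial = pop[j] + step * (pop[k] - pop[j])
                else:
                    trial = pop[j] + step * (pop[j] - pop[k])
            else:                               # convection: interact with the best
                best = pop[np.argmin(fit)]
                mean = pop.mean(axis=0)
                r = rng.random()
                TCF = abs(R - r) if g <= g_max / COF else round(1 + r)
                trial = pop[j] + R * (best - mean * TCF)
            trial = np.clip(trial, lb, ub)
            ft = f(trial)
            if ft < fit[j]:                     # greedy selection
                pop[j], fit[j] = trial, ft
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

On a 2-D sphere function with bounds [-5, 5], a few hundred generations of this sketch are typically enough to approach the optimum at the origin.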

4. ε-constraint heat transfer search (ε-HTS) method for multi-objective optimization:

The ε-constraint method solves multi-objective optimization problems by converting the multi-objective problem into single-objective sub-problems. It solves the ε-constraint problems E_k(ε_j), which can be stated as:

min f_k(x)

subject to: f_j(x) ≤ ε_j, j ≠ k, j = 1, 2, ..., n

Theorem 1. x' is a Pareto optimal solution if and only if x' solves E_k(ε_j) for ∀k.

Theorem 2. If x' solves E_k(ε_j) for some k and x' is a unique solution, then x' is a Pareto optimal solution.

The above theorems are proved for general multi-objective optimization problems (Miettinen, 2012). They indicate that Pareto optimal solutions can always be found by solving ε-constraint problems, provided ε_j in E_k(ε_j) lies in the feasible range of the PF.

To properly apply the ε-constraint method we must have the range of every objective function, at least for the n − 1 objective functions that will be used as constraints. The calculation of the range of the objective functions over the efficient set is not a trivial task. While the best value is easily attainable as the optimum of the individual optimization, the worst value over the efficient set (the Nadir value) is not. The most common approach is to calculate these ranges from the payoff table (the table with the results from the individual optimization of the n objective functions), where the Nadir value is usually approximated by the worst (maximum) value of the corresponding column (Berube et al., 2009). However, even in this case, we must be sure that the solutions obtained from the individual optimization of the objective functions are indeed Pareto optimal.

Definition 5: An objective vector f^NP = (f_i^NP), i = 1, 2, ..., n, constructed using the worst value of each objective function over the complete Pareto front (PF), is called the Nadir point.

Let f^NP = (f_i^NP) be the Nadir point, where f_i^NP = max f_i(x), x ∈ PS.

Though it looks simple to estimate the Nadir point, it is not that straightforward: the design variable vector must belong to the Pareto set. The estimation of the Nadir point from the payoff table can be easily understood from the following example. Consider a multi-objective optimization problem to minimize two functions f1 and f2. We first minimize f1, which gives the design variable vector x1; suppose the minimized value is f1 = 25. For x1 we calculate the value of f2, say f2 = 100, and for the reverse case (i.e., minimizing f2) the values are f2 = 15 and f1 = 90. The payoff table can be constructed as given in Table 1. So the Nadir point is f^NP = (f1^NP, f2^NP) = (90, 100). The concept can be visualized in Figure 1, where all the points, the Pareto front, and the Nadir point are indicated clearly.
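The payoff-table computation from this worked example can be sketched as follows (the array layout is our own; the numbers come from the example above):

```python
import numpy as np

# Payoff table: row p holds the objective values obtained when
# objective p alone is minimized (values from the worked example).
payoff = np.array([
    [25.0, 100.0],   # minimizing f1: f1 = 25, f2 = 100
    [90.0,  15.0],   # minimizing f2: f1 = 90, f2 = 15
])
ideal = payoff.diagonal()     # best value of each objective: (25, 15)
nadir = payoff.max(axis=0)    # worst value per column: (90, 100)
```

The column-wise maximum reproduces the Nadir point (90, 100) stated in the text; the diagonal gives the ideal point.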

The ε-HTS method is explained in Algorithm 2 as follows:

Algorithm 2: ε-HTS algorithm for multi-objective optimization considering two objectives

Decide the number of Pareto solutions k required in the Pareto front (PF)
Minimize each objective function and find the corresponding design vector:
    find x1 → min f1 = f1⁰ and x2 → min f2 = f2⁰
Calculate the Nadir point (f1^NP, f2^NP) → PF_1 = (f1⁰, f2^NP)
Set j = 1, ε_j = f1⁰ + δ, where δ = (f1^NP − f1⁰)/(k − 1)
while ε_j ≤ f1^NP : solve E_k(ε_j) through HTS → PF_j = (f1^j, f2^j)
    set j = j + 1 and ε_j = ε_{j−1} + δ
if dominated solutions exist → remove them
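A minimal runnable sketch of this ε-sweep, using the two-objective SCH problem (f1 = x², f2 = (x − 2)²); a static-penalty random search stands in for the HTS solver, and the bounds, sample counts, and penalty value are illustrative assumptions:

```python
import numpy as np

def f1(x): return x**2
def f2(x): return (x - 2.0)**2

def solve_sub(eps, rng, iters=2000, penalty=1e6):
    """min f2(x) subject to f1(x) <= eps, via static-penalty random search."""
    best_x, best_v = None, np.inf
    for _ in range(iters):
        x = rng.uniform(-4.0, 4.0)
        v = f2(x) + penalty * max(f1(x) - eps, 0.0)  # penalize violation
        if v < best_v:
            best_x, best_v = x, v
    return best_x

rng = np.random.default_rng(1)
k = 21
f1_min, f1_nadir = 0.0, 4.0          # payoff-table range of f1 on the Pareto set
front = []
for eps in np.linspace(f1_min, f1_nadir, k)[1:]:   # the epsilon sweep
    x = solve_sub(eps, rng)
    front.append((f1(x), f2(x)))
```

Each ε value yields one point of the front; sweeping ε from just above the minimum of f1 up to its Nadir value traces the whole Pareto front.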

If the problem E_k(ε_j) is multimodal, then there are chances of obtaining some dominated solutions due to the presence of many local optima. It can be noted from the algorithm above that the proposed method is algorithmically simple and does not require the special knowledge required by other multi-objective methods such as NSGA, SPEA, PAES, etc. If the designer knows how to optimize a single-objective problem, it is enough to extend it into a multi-objective version by following Algorithm 2. The choice of an effective meta-heuristic overcomes the multimodality problem to a great extent, but due to the stochastic nature of meta-heuristics, global solutions cannot be guaranteed with full confidence. Meta-heuristics can find global or near-global solutions with much higher accuracy than classical techniques, which can only find local solutions and are highly sensitive to the initial solutions. One such effective meta-heuristic, heat transfer search (HTS), is considered here to find the global solutions of E_k(ε_j). To build confidence in the global solutions, it is good practice to obtain the results over different independent runs with different initial populations. An approximate Pareto front can be obtained with fewer function evaluations than are required to generate the true Pareto front; this can only be achieved if the meta-heuristic possesses a good convergence rate. As discussed in (Patel and Savsani, 2015), HTS possesses a good convergence rate compared to other state-of-the-art algorithms.

5. Numerical Examples:

There exist many different test functions for multi-objective optimization (Zhang et al., 2009; Zitzler and Thiele, 1999; Zitzler et al., 2000), but a subset of a few extensively used functions provides a wide range of diverse properties in terms of the Pareto optimal set and the Pareto front. To check the proposed algorithm, we have chosen a subset of these functions with convex, non-convex, and discontinuous Pareto fronts. We also include functions with more complex Pareto sets. In this paper, we have tested the following five functions:

• Schaffer's Min-Min (SCH) (Schaffer, 1985)

f1(x) = x², f2(x) = (x − 2)², −10³ ≤ x ≤ 10³

• ZDT1 function (Zitzler, 2000)

f1(x) = x1, f2(x) = g(1 − √(f1/g))

where g = 1 + 9(Σ_{i=2}^{d} x_i)/(d − 1), d = dimension of the problem, 0 ≤ x_i ≤ 1, i = 1, 2, ..., 30

• ZDT2 function (Zitzler, 2000)

f1(x) = x1, f2(x) = g(1 − (f1/g)²)

• ZDT3 function (Zitzler, 2000)

f1(x) = x1, f2(x) = g(1 − √(f1/g) − (f1/g)·sin(10π f1))

• LZ function (Li and Zhang, 2009)

f1(x) = x1 + (2/|J1|)·Σ_{j∈J1} (x_j − sin(6π x1 + jπ/d))²
f2(x) = 1 − √x1 + (2/|J2|)·Σ_{j∈J2} (x_j − sin(6π x1 + jπ/d))²

where J1 = {j | j is odd}, J2 = {j | j is even}, 2 ≤ j ≤ d.
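For reference, two of the benchmark objectives above can be evaluated as follows (a direct transcription of the SCH and ZDT1 definitions; the function names are our own):

```python
import numpy as np

def zdt1(x):
    """ZDT1 objectives for a vector x in [0, 1]^d (convex Pareto front)."""
    d = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (d - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

def sch(x):
    """Schaffer's SCH objectives for a scalar x."""
    return x**2, (x - 2.0)**2
```

On the Pareto set of ZDT1 (x_2 = ... = x_d = 0, so g = 1), the front reduces to f2 = 1 − √f1.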

Numerical experiments are carried out for different benchmark multi-objective problems. The benchmark functions considered in this work are SCH, ZDT1, ZDT2, ZDT3, and LZ, which possess different characteristics. SCH is a one-dimensional problem with a convex Pareto front. ZDT1 has a convex Pareto front, whereas ZDT2 has a non-convex Pareto front. ZDT3 has a discontinuous Pareto front. The value of g in ZDT2 and ZDT3 is the same as in ZDT1. The Pareto fronts for ZDT1, ZDT2, and ZDT3 occur when the value of g reaches 1. The objective functions in the LZ function are multi-modal, which adds an additional challenge for the algorithm in finding the true Pareto front. The LZ function attains its Pareto front when f2 = 1 − √f1. All the problems taken for the study are unconstrained multi-objective functions, but as ε-HTS works by converting (n − 1) objectives into constraints, the considered problems are indirectly treated as constrained multi-objective problems. There are many methods to handle constrained problems; in this work, all the problems use the static penalty approach (Rao, 2009). A penalty value is added to the objective function for each infeasible solution so that it is penalized for violating the constraints. This method is popular because it is simple to apply; it requires choosing the amount of penalty to add, which varies for different problems. A constrained optimization problem can be converted into an unconstrained one using the static penalty approach as follows:

f_j'(X) = f_j(X) + Σ_{i=1}^{p} P_i · max{g_i(X), 0} + Σ_{i=p+1}^{NC} P_i · max{|h_i(X)| − δ, 0}

where:

f_j(X), j = 1, 2, ..., n, are the objective functions to be optimized (here minimized)

X = {x1, x2, ..., xm} are the design variables

g_i(X) ≤ 0, i = 1, 2, ..., p, are inequality constraints

h_i(X) = 0, i = p + 1, ..., NC, are equality constraints

p is the total number of inequality constraints, (NC − p) is the total number of equality constraints, and NC is the total number of constraints. P_i is a penalty factor, generally assigned a large number, and δ is a tolerance on the equality constraints within which they are considered feasible. Note that for ε-HTS, (n − 1) additional inequality constraints are added, so even for an unconstrained problem at least (n − 1) inequality constraints exist. Hence, the ε-HTS method can handle constrained multi-objective optimization problems as well.
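The static-penalty conversion above can be sketched as a small wrapper (the function names and the closure-based design are our own; the penalty factor P and tolerance δ follow the formula in the text):

```python
def penalized(f, g_list, h_list, P=1e6, delta=1e-4):
    """Static-penalty wrapper: builds an unconstrained objective from f,
    inequality constraints g(x) <= 0, and equality constraints h(x) = 0."""
    def fp(x):
        v = f(x)
        v += sum(P * max(g(x), 0.0) for g in g_list)               # g violation
        v += sum(P * max(abs(h(x)) - delta, 0.0) for h in h_list)  # h violation
        return v
    return fp
```

For example, wrapping f(x) = x with the constraint x − 1 ≤ 0 leaves feasible points unchanged and adds P times the violation elsewhere.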

The performance of the algorithm is assessed using the following criteria (Deb, 2001):

• Generational Distance (GD):

GD is calculated between the true Pareto front (PF^t) and the obtained Pareto front (PF^o). GD indicates the average distance between the obtained Pareto front and the true Pareto front. GD is defined as:

GD = (1/k) · √( Σ_{p=1}^{k} d_p² ), where d_p = √( Σ_{i=1}^{n} (PF_i^{o,p} − PF_i^{t,p})² )

where k is the number of Pareto solutions, n is the number of objective functions, PF_i^{o,p} indicates the pth obtained Pareto solution for the ith objective, and PF_i^{t,p} indicates the nearest point on the true Pareto front from PF_i^{o,p}.
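The GD computation can be sketched as follows (a vectorized version under the definition above; the function name is our own and both fronts are assumed to be k × n and K × n arrays):

```python
import numpy as np

def generational_distance(obtained, true_front):
    """GD: root-sum-square of each obtained point's distance to its
    nearest true-front point, divided by the number of obtained points."""
    diff = obtained[:, None, :] - true_front[None, :, :]
    d = np.sqrt((diff**2).sum(axis=-1))   # pairwise Euclidean distances
    dp = d.min(axis=1)                    # nearest true-front point per solution
    return np.sqrt((dp**2).sum()) / len(obtained)
```

A GD of zero means every obtained point lies exactly on the true front.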

• Spacing (S):

Spacing is defined as:

S = √( (1/(k − 1)) · Σ_{p=2}^{k} (D_p − D̄)² )

where D_p is the absolute difference between two consecutive solutions in the obtained Pareto front (PF^o), defined as:

D_p = Σ_{i=1}^{n} |PF_i^{o,p−1} − PF_i^{o,p}|

and D̄ is the average of all D_p.

Spacing (S) specifies the spread of the obtained Pareto front: it gives the standard deviation of D_p. A small value of S indicates uniform spacing of the obtained Pareto solutions.

• Spread (Δ): Spread is defined as:

Δ = ( Σ_{i=1}^{n} d_i^ex + Σ_{p=2}^{k} |d_p − d̄| ) / ( Σ_{i=1}^{n} d_i^ex + (k − 1) · d̄ )

where d_p is the Euclidean distance between two consecutive points of the obtained Pareto front and d_i^ex is the Euclidean distance between the extreme points of the obtained Pareto front and the true Pareto front. d̄ is the average of d_p. Spread checks whether the obtained Pareto front covers the true Pareto front. A smaller value of Δ indicates a better spread of the obtained Pareto front with uniform distribution.
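The spacing metric can be sketched as follows (one common variant of Schott's metric; the function name, the sort along the first objective, and the use of the sample standard deviation over the k − 1 gaps are our own implementation choices):

```python
import numpy as np

def spacing(front):
    """Spacing metric for a list of objective tuples: standard deviation of
    the L1 gaps between consecutive solutions sorted along f1."""
    front = np.asarray(sorted(front))
    D = np.abs(np.diff(front, axis=0)).sum(axis=1)   # gap between neighbours
    return np.sqrt(((D - D.mean())**2).sum() / (len(D) - 1))
```

A perfectly evenly spaced front gives S = 0.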

The ε-HTS requires specific parameters for its execution, namely CDF, COF, and RDF. These parameters are taken as given in Patel and Savsani (2015), i.e., CDF = RDF = 2 and COF = 10, as these settings have performed better on different benchmark functions. For the experimentation, the population size is set to 20 and the maximum number of function evaluations to 10000 for the ZDT1, ZDT2, ZDT3, and SCH functions. For the LZ function, the population size is set to 50 and the function evaluations to 300000, to maintain uniformity of the experimental setup. For all the problems, the number of Pareto solutions (k) is set to 200.

The Pareto fronts for SCH, ZDT1, ZDT2, ZDT3, and LZ are shown in Figures 2 to 6, respectively. All the figures show the true Pareto front and the obtained Pareto front. It can be noted from the results that for all the functions, ε-HTS is capable of approximating the true Pareto front. The effectiveness of the results is checked by the above-mentioned performance measures. The GD results for all the functions are summarized in Table 2, where they are compared with state-of-the-art multi-objective algorithms such as NSGA-II, SPEA, VEGA, MODE, DEMO, MO-Bees, MOFA, and MOCS. The results other than those of ε-HTS are taken from Yang, 2013 and Yang and Deb, 2013. The results summarized in Table 2 are the averages obtained over 25 independent runs.

It is observed from the results that ε-HTS has produced better GD than all the algorithms, except for ZDT3, for which the results are inferior only to MOCS. It can be summarized that ε-HTS can find true Pareto solutions for problems with convex, non-convex, and discrete Pareto fronts. The results are also compared based on the spread (Δ) with other multi-objective optimization algorithms such as NSGA-II (real coded), NSGA-II (binary coded), SPEA, and PAES. The results for Δ are shown in Table 3. It can be noted that ε-HTS has shown better Δ for ZDT1 than all the algorithms. For ZDT2, ε-HTS is better than SPEA and PAES, but slightly inferior to NSGA-II. For ZDT3, the value of Δ for ε-HTS is inferior to all the algorithms.

6. Engineering design experiments:

In this section, ε-HTS is applied to six different multi-objective engineering design problems that are widely used in the literature to investigate multi-objective optimization algorithms. The considered problems possess different characteristics. The algorithm parameters for ε-HTS are the same as those used for the benchmark problems in the previous section. All the engineering problems are run with a population size of 20 and 20000 function evaluations. For all the problems, 200 Pareto solutions are generated, and the results reported in this work are averages over 30 runs, matching the experimental setup of Sadollah et al. (2015).

• Four bar truss problem:

The aim of this problem is to minimize the volume and the displacement of the joints simultaneously. The cross-sectional area of each link is considered as a design variable. This problem is unconstrained, with all design variables continuous. The system is shown in Figure 7.

The problem can be stated as:

Minimize: f1(x) = L(2x1 + √2 x2 + √x3 + x4)

Minimize: f2(x) = (FL/E)(2/x1 + 2√2/x2 − 2√2/x3 + 2/x4)

where

F = 10, E = 2e5, L = 200, 1 ≤ x1, x4 ≤ 3, √2 ≤ x2, x3 ≤ 3
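The two objectives can be sketched directly from the statement above. This is a minimal evaluation helper, assuming the standard four bar truss formulation (the x3 term of f1 was garbled in the source and is restored from the literature).

```python
import math

# Problem data from the statement above
F, E, L = 10.0, 2.0e5, 200.0

def truss_f1(x1, x2, x3, x4):
    """Structural volume: L(2*x1 + sqrt(2)*x2 + sqrt(x3) + x4)."""
    return L * (2.0 * x1 + math.sqrt(2.0) * x2 + math.sqrt(x3) + x4)

def truss_f2(x1, x2, x3, x4):
    """Joint displacement: (F*L/E)(2/x1 + 2*sqrt(2)/x2 - 2*sqrt(2)/x3 + 2/x4)."""
    return (F * L / E) * (2.0 / x1 + 2.0 * math.sqrt(2.0) / x2
                          - 2.0 * math.sqrt(2.0) / x3 + 2.0 / x4)
```

Note the opposite signs of the x3 terms in f1 and f2: enlarging that member increases volume but also increases the displacement term, which is what produces the trade-off.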

This problem has a convex Pareto front in the objective space. The Nadir point for this problem is obtained as (1727.739, 0.032994), and the extreme points of the Pareto front obtained by ε-HTS are (1180, 0.032994) and (1727.7394, 0.0027614). A mathematical expression for the exact Pareto front is not available in the literature for this problem, so the performance is checked on the basis of the spacing (S), for which values are available in the literature for different algorithms. The values of S for ε-HTS and for other algorithms such as NSGA-II, MOPSO, Micro-GA, PAES, and MOWCA (Sadollah et al., 2015) are given in Table 4. It can be observed from the results that ε-HTS produces the minimum value of spacing (S) with a reasonable standard deviation (SD) among the compared algorithms. Hence, it can be interpreted that ε-HTS produces better spacing than the other comparative algorithms; better spacing indicates that the Pareto solutions are uniformly distributed along the Pareto front. The obtained Pareto front is presented in Figure 8, which supports the results presented in Table 4, and the extreme points of the Pareto front can be seen to coincide with the Nadir point. The obtained Pareto front matches well with the Pareto front available in the literature (Sadollah et al., 2015).

• Gear train problem:

The purpose of this problem is to minimize the maximum size of any one of the gears simultaneously with minimizing the gear ratio error with respect to the reference gear ratio of 1/6.931. All the design variables in this example can take only integer values, as they represent the number of teeth on each gear. Hence, this problem is a min-max problem with discrete design variables, which offers additional challenges to an optimization algorithm. The system is shown in Figure 9.

The problem can be stated as:

Minimize f1(x) = (1/6.931 − x1x2/(x3x4))²

Minimize f2(x) = max(x1, x2, x3, x4)

where 12 ≤ x1, x2, x3, x4 ≤ 60, xi ∈ I
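The two objectives above are cheap to evaluate, which is typical of this benchmark. The sketch below assumes the squared-error form of f1 restored above; the teeth combination (19, 16, 43, 49) used in the check is the classic near-optimal combination reported for the single-objective gear train problem.

```python
def gear_f1(x1, x2, x3, x4):
    """Squared error between the realized gear ratio x1*x2/(x3*x4)
    and the reference ratio 1/6.931."""
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

def gear_f2(x1, x2, x3, x4):
    """Largest number of teeth on any gear (proxy for maximum gear size)."""
    return max(x1, x2, x3, x4)
```

Because all four variables are integers, the attainable ratios form a finite set; the extreme left of the Pareto front in Figure 10 corresponds to the rare teeth combinations whose ratio error is many orders of magnitude below the rest, which is why the log scale is needed.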

Performance measures such as GD, S, and Δ are not available in the literature for the gear train problem, so the results are compared on the basis of the extreme Pareto solutions available in the literature. The results are compared with NSGA-II and MOWCA and are summarized in Table 5.

It can be observed from the results that the extreme Pareto solutions obtained by ε-HTS are (9.92e-10, 47) and (5.01e-1, 13). The value of f1 at the upper left corner of the Pareto front obtained by ε-HTS is 9.92e-10, which is lower than the 1.83e-8 produced by NSGA-II and the 4.5e-9 produced by MOWCA. So, it can be interpreted that the upper extreme of the Pareto front produced by ε-HTS is better than those of both algorithms. The value of f2 at the lower right corner of the Pareto front obtained by ε-HTS is 13, which is the same as that of NSGA-II and slightly higher than that of MOWCA (Sadollah et al., 2015). So, for f2, ε-HTS produces a slightly inferior result compared to MOWCA and an equivalent result to NSGA-II. The Pareto front for this problem is plotted on a log scale in Figure 10 to visualize the results properly. The errors produced on the Pareto front can be easily understood and interpreted on the log scale. Also, several discrete points can be observed on the extreme left part of the Pareto front, which indicates low levels of tooth meshing error; each such point indicates a tooth meshing error and its variation on the log scale. Effective algorithms are required to find such points. This problem is a type of combinatorial problem, and combinations of design variables that produce such extreme results are highly desirable to design engineers.

• Multi-plate disc brake design:

Multi-plate disc brakes find application in airplanes, to apply effective braking while landing. The exploded view of the multi-plate disc brake is shown in Figure 11. The purpose of the problem is to simultaneously minimize the mass of the brake and the stopping time. There are four design variables: the inner radius, the outer radius, the engaging (applied) force, and the number of friction surfaces (friction plates). Of these, the number of friction surfaces can take only discrete integer values, which makes it a mixed-integer problem. This is also a constrained problem, with five restrictions imposed on the distance between the radii of the friction plates, the length of the brake, the pressure sustained by the plates, the maximum temperature generated, and the braking torque. The problem can be stated as follows:

Minimize f1(x) = 4.9e-5 (x2² − x1²)(x4 − 1)

Minimize f2(x) = 9.82e6 (x2² − x1²) / (x3 x4 (x2³ − x1³))

subject to (gi ≤ 0):

g1 = 20 + x1 − x2

g2 = 2.5(x4 + 1) − 30

g3 = x3 / (3.14(x2² − x1²)) − 0.4

g4 = 2.22e-3 x3 (x2³ − x1³) / (x2² − x1²)² − 1

g5 = 900 − 2.66e-2 x3 x4 (x2³ − x1³) / (x2² − x1²)

55 ≤ x1 ≤ 80, 75 ≤ x2 ≤ 110, 1000 ≤ x3 ≤ 3000, 2 ≤ x4 ≤ 20, x4 ∈ I
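The statement above translates directly into code. This is a sketch assuming the standard disc brake benchmark formulation (the constraint expressions were garbled in the source and are restored from the literature), with constraints returned in g ≤ 0 form.

```python
def brake_objectives(x1, x2, x3, x4):
    """Brake mass f1 and stopping time f2 for inner radius x1, outer radius x2,
    engaging force x3, and number of friction surfaces x4."""
    f1 = 4.9e-5 * (x2 ** 2 - x1 ** 2) * (x4 - 1.0)
    f2 = 9.82e6 * (x2 ** 2 - x1 ** 2) / (x3 * x4 * (x2 ** 3 - x1 ** 3))
    return f1, f2

def brake_constraints(x1, x2, x3, x4):
    """Constraint values in g <= 0 form, matching the statement above."""
    a2 = x2 ** 2 - x1 ** 2    # difference of squared radii
    a3 = x2 ** 3 - x1 ** 3    # difference of cubed radii
    return [
        20.0 + x1 - x2,                           # g1: minimum radial gap
        2.5 * (x4 + 1.0) - 30.0,                  # g2: brake length
        x3 / (3.14 * a2) - 0.4,                   # g3: pressure limit
        2.22e-3 * x3 * a3 / a2 ** 2 - 1.0,        # g4: temperature limit
        900.0 - 2.66e-2 * x3 * x4 * a3 / a2,      # g5: minimum braking torque
    ]
```

A candidate is feasible only when every returned value is non-positive; in an ε-constraint scheme these gi would be handled alongside the f2 ≤ ε constraint of each subproblem.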

The Nadir point used by ε-HTS for this problem is (2.793, 16.6909), and the extreme Pareto solutions obtained are (0.1634, 16.6909) and (2.793, 2.071). The results are compared on the basis of the spread (Δ) and are shown in Table 6, where the performance of ε-HTS is compared with NSGA-II, pae-ODEMO, and MOWCA. It can be noted from the results that ε-HTS produces a better Δ than NSGA-II and pae-ODEMO; its performance is slightly inferior to MOWCA. The Pareto front obtained by ε-HTS is shown in Figure 12 and is either the same as or better than the Pareto front given in (Sadollah et al., 2015). It is useful for the designer to note that the Pareto solutions become nearly stagnant for f1 at f1 = 0.2 and for f2 at f2 = 1.75.

• Speed reducer problem:

The aim of this problem is to simultaneously optimize the weight of the gear assembly and the transverse deflection of the shaft. The assembly of the speed reducer is shown in Figure 13. The problem is subject to constraints on the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts. The design variables are the face width, the module of the teeth, the number of teeth on the pinion, the lengths of the first and second shafts between bearings, and the diameters of the first and second shafts. All the variables are indicated in Figure 13. The third variable is an integer and the rest are continuous, so this problem can be classified as a mixed-integer problem.

The problem can be stated as follows:

Minimize f1(x) = 0.7854 x1 x2²(3.3333 x3² + 14.9334 x3 − 43.0934) − 1.508 x1(x6² + x7²) + 7.4777(x6³ + x7³) + 0.7854(x4 x6² + x5 x7²)

Minimize f2(x) = √((745 x4/(x2 x3))² + 1.69e7) / (0.1 x6³)

subject to (∀ gi ≥ 0):

g1 = −(27/(x1 x2² x3) − 1)

g2 = −(397.5/(x1 x2² x3²) − 1)

g3 = −(1.93 x4³/(x2 x3 x6⁴) − 1)

g4 = −(1.93 x5³/(x2 x3 x7⁴) − 1)

g5 = −(√((745 x4/(x2 x3))² + 1.69e7)/(110 x6³) − 1)

g6 = −(√((745 x5/(x2 x3))² + 1.575e8)/(85 x7³) − 1)

g7 = −(x2 x3/40 − 1)

g8 = −(5 x2/x1 − 1)

g9 = −(x1/(12 x2) − 1)

g10 = −((1.5 x6 + 1.9)/x4 − 1)

g11 = −((1.1 x7 + 1.9)/x5 − 1)

2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4, x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5 ≤ x7 ≤ 5.5
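The two objectives above can be sketched as follows, assuming the standard bi-objective speed reducer formulation (gear-assembly weight and stress in the first shaft); the source equations were garbled and have been restored from the literature.

```python
import math

def reducer_f1(x):
    """Weight of the gear assembly for x = (x1, ..., x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def reducer_f2(x):
    """Stress in the first shaft (the quantity bounded by g5 via the 110 factor)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 1.69e7) / (0.1 * x6 ** 3)
```

At the well-known single-objective optimum x ≈ (3.5, 0.7, 17, 7.3, 7.7153, 3.3502, 5.2867), f1 evaluates to roughly 2994 and f2 sits near the 1100 stress bound, consistent with the ε-HTS extreme point (3002.11, 1071.54) reported above.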

The Nadir point used by ε-HTS for this problem is (5969.91, 1071.54). The extreme Pareto solutions obtained by ε-HTS are (3002.11, 1071.54) and (5969.91, 694.71). The results are compared on the basis of the spacing (S) with other algorithms, namely NSGA-II, Micro-GA, PAES, and MOWCA (Sadollah et al., 2015), and are presented in Table 7.

It can be observed from the results that the spacing of NSGA-II is better than that of all the other algorithms, but the extreme points of the Pareto front obtained by NSGA-II are inferior to those of the other algorithms. Also, the value of S for ε-HTS is better than that of the remaining algorithms. The Pareto front is given in Figure 14. It is useful for the designer to note that the Pareto solutions become nearly stagnant for f2 at f2 = 697.

• Welded beam design:

The purpose of this problem is to minimize the cost simultaneously with the end deflection. The problem is subject to constraints on the shear stress, the bending stress, the weld length, and the buckling load. The four design variables are the height and the length of the welded joint and the thickness and the width of the beam; all are continuous. The schematic diagram is shown in Figure 15, where all the variables are shown clearly. The two objective functions differ greatly in magnitude, which makes such problems difficult to solve with the weighted-sum method. The problem can be stated as:

Minimize f1 = 1.10471 x1² x2 + 0.04811 x3 x4(14 + x2)

Minimize f2 = del = 4PL³/(E x3³ x4)

subject to (∀ gi ≥ 0):

g1 = −(tau − taumax)

g2 = −(sig − sigmax)

g3 = −(x1 − x4)

g4 = −(P − Pc)

where

tau = √(tau1² + tau1 tau2 x2/R + tau2²), tau1 = P/(√2 x1 x2), tau2 = MR/J

M = P(L + x2/2), R = √(x2²/4 + ((x1 + x3)/2)²), J = 2{√2 x1 x2(x2²/12 + ((x1 + x3)/2)²)}

sig = 6PL/(x4 x3²), Pc = (4.013 E √(x3² x4⁶/36)/L²)(1 − (x3/(2L))√(E/(4G)))

P = 6000, L = 14, E = 30e6, G = 12e6, taumax = 13600, sigmax = 30000, delmax = 0.25

0.125 ≤ x1, x4 ≤ 5, 0.1 ≤ x2, x3 ≤ 10
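The welded beam evaluation can be sketched as below, assuming the standard bi-objective formulation (only g1 and g2 survived the source extraction; the shear-stress, buckling, and geometry expressions are restored from the literature).

```python
import math

# Problem data from the statement above
P, Lb, E, G = 6000.0, 14.0, 30e6, 12e6
tau_max, sig_max = 13600.0, 30000.0

def beam_objectives(x1, x2, x3, x4):
    """Fabrication cost f1 and end deflection f2."""
    f1 = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    f2 = 4.0 * P * Lb ** 3 / (E * x3 ** 3 * x4)
    return f1, f2

def beam_constraints(x1, x2, x3, x4):
    """Constraints in g <= 0 form: shear stress, bending stress, geometry, buckling."""
    tau1 = P / (math.sqrt(2.0) * x1 * x2)                     # primary shear
    M = P * (Lb + x2 / 2.0)                                   # bending moment at weld
    R = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau2 = M * R / J                                          # torsional shear
    tau = math.sqrt(tau1 ** 2 + tau1 * tau2 * x2 / R + tau2 ** 2)
    sig = 6.0 * P * Lb / (x4 * x3 ** 2)                       # bending stress
    Pc = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / Lb ** 2
          * (1.0 - x3 / (2.0 * Lb) * math.sqrt(E / (4.0 * G))))  # buckling load
    return [tau - tau_max, sig - sig_max, x1 - x4, P - Pc]
```

At the stiffest corner of the design space (x3 = 10, x4 = 5) the deflection evaluates to 4.3904e-4, matching the lower-right extreme (35.3476, 0.00043904) reported below for ε-HTS.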

The Nadir point for this problem is taken as (35.3476, 0.014452). The extreme Pareto solutions obtained by ε-HTS are (1.725, 0.014452) and (35.3476, 0.00043904). This problem is compared with other techniques, namely NSGA-II, pae-ODEMO, and MOWCA (Sadollah et al., 2015). The results are compared on the basis of Δ, and the mean values along with the standard deviations are summarized in Table 8. It can be observed from the results that ε-HTS possesses a better spread than the other algorithms. The Pareto front obtained by ε-HTS is shown in Figure 16, which supports these results.

• Spring design problem:

The purpose of this problem is to minimize the stress and the volume simultaneously. Constraints are imposed on the minimum deflection, the shear stress, the surge frequency, the outside diameter, and the design variables. The design variables are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The schematic view of the spring is shown in Figure 17. This problem is special because the design variables possess different characteristics: the number of turns can take only integer values, the wire diameter is standardized and has to be selected from a set of available diameters, and the mean coil diameter can be considered a continuous variable. So, this is a mixed-integer-discrete problem. The problem can be stated as follows:

With x1 = N, x2 = d, and x3 = D:

Minimize f1(x) = 0.25 (3.14)² x2² x3(x1 + 2)

Minimize f2(x) = 8 K Pmax x3 / (3.14 x2³)

subject to (gi ≤ 0):

g1 = 1.05 x2(x1 + 2) + Pmax/k − lmax

g2 = dmin − x2

g3 = x2 + x3 − Dmax

g4 = 3 − C

g5 = delp − delpm

g6 = delw − (Pmax − P)/k

g7 = 8 K Pmax x3/(3.14 x2³) − S

g8 = 0.25 (3.14)² x2² x3(x1 + 2) − Vmax

where

C = x3/x2, K = (4C − 1)/(4C − 4) + 0.615/C, k = G x2⁴/(8 x1 x3³), delp = P/k

P = 300, Dmax = 3, Pmax = 1000, delw = 1.25, delpm = 6, lmax = 14, dmin = 0.2, Vmax = 30, S = 189000, G = 11500000

1 ≤ x1 ≤ 32, x1 ∈ I, 1 ≤ x3 ≤ 30

x2 ∈ [0.009, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.014, 0.015, 0.0162, 0.0173, 0.018, 0.020, 0.023, 0.025, 0.028, 0.032, 0.035, 0.041, 0.047, 0.054, 0.063, 0.072, 0.080, 0.092, 0.105, 0.120, 0.135, 0.148, 0.162, 0.177, 0.192, 0.207, 0.225, 0.244, 0.263, 0.283, 0.307, 0.331, 0.362, 0.394, 0.4375, 0.5]
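The evaluation of the spring objectives can be sketched as below, assuming the standard formulation restored above (math.pi is used where the source writes 3.14); the stiffness helper k is the quantity that the deflection constraints g1, g5, and g6 all depend on.

```python
import math

# Problem data; x1 = N (active coils), x2 = d (wire dia.), x3 = D (coil dia.)
P, Pmax, S, G = 300.0, 1000.0, 189000.0, 11.5e6

def spring_objectives(x1, x2, x3):
    """Wire volume f1 and corrected shear stress f2."""
    C = x3 / x2                                              # spring index
    K = (4.0 * C - 1.0) / (4.0 * C - 4.0) + 0.615 / C        # Wahl stress factor
    f1 = 0.25 * math.pi ** 2 * x2 ** 2 * x3 * (x1 + 2.0)
    f2 = 8.0 * K * Pmax * x3 / (math.pi * x2 ** 3)
    return f1, f2

def spring_rate(x1, x2, x3):
    """Spring stiffness k = G d^4 / (8 N D^3), used by the deflection constraints."""
    return G * x2 ** 4 / (8.0 * x1 * x3 ** 3)
```

Because x2 is restricted to the discrete diameter set, each allowed value of d contributes its own arc of attainable (f1, f2) pairs; this is exactly the mechanism behind the discontinuous, overlapping front discussed below.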

Values of GD, S, and Δ are not available in the literature for this problem, so the results are compared on the basis of the extreme Pareto solutions, against NSGA-II and MOWCA. The results are presented in Table 9. It can be observed that the left extreme point obtained by ε-HTS is slightly inferior to those of NSGA-II and MOWCA (Sadollah et al., 2015), but ε-HTS obtains better non-dominated solutions on the right extreme. The Pareto front is shown in Figure 18, which supports the results. The Pareto front of this problem is discontinuous and overlapping in nature; this occurs because each discrete value of d covers a certain portion of the Pareto front, as can be observed from the fronts for the different discrete values of d.

7. Conclusions:

A multi-objective version of the heat transfer search (HTS) algorithm, called ε-HTS, is proposed in this work. The proposed algorithm is tested on numerical benchmark problems with different characteristics of the Pareto front and is also applied to different engineering design problems. The performance of the algorithm is evaluated on the basis of generational distance, spacing, and spread, along with its Pareto front. The results of ε-HTS are compared with different multi-objective variants of GA, PSO, DE, Bees, and WCA. The results indicate that the proposed method is better than or on par with the existing algorithms for solving multi-objective problems.

Acknowledgments:

The research of the 2nd author is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The postdoctoral fellowship of the 1st author is supported by NSERC.

Conflict of Interest:

The authors have confirmed that there is no conflict of interest.

References:

Aghaei, J., Amjady, N., & Shayanfar, H. A. (2011). Multi-objective electricity market clearing considering dynamic security by lexicographic optimization and augmented epsilon constraint method. Applied Soft Computing, 11(4), 3846-3858.

Agrawal, S., Dashora, Y., Tiwari, M. K., & Son, Y. J. (2008). Interactive particle swarm: a Pareto-adaptive metaheuristic to multi-objective optimization. IEEE Transactions on systems, man, and cybernetics-Part A: Systems and humans, 38(2), 258-277.

Ahmadi, M. H., & Ahmadi, M. A. (2016c). Multi objective optimization of performance of three-heat-source irreversible refrigerators based algorithm NSGAII. Renewable and Sustainable Energy Reviews, 60, 784-794.

Ahmadi, M. H., Ahmadi, M. A., & Feidt, M. (2015). Thermodynamic analysis and evolutionary algorithm based on multi-objective optimization of performance for irreversible four-temperature-level refrigeration. Mechanics & Industry, 16(2), 207.

Ahmadi, M. H., Ahmadi, M. A., & Pourfayaz, F. (2016b). Thermodynamic analysis and

evolutionary algorithm based on multi-objective optimization performance of actual power generating thermal cycles. Applied Thermal Engineering, 99, 996-1005. Ahmadi, M. H., Ahmadi, M. A., Mellit, A., Pourfayaz, F., & Feidt, M. (2016a). Thermodynamic analysis and multi objective optimization of performance of solar dish Stirling engine by the centrality of entransy and entropy generation. International Journal of Electrical Power & Energy Systems, 78, 88-95. Akbari, R., Hedayatzadeh, R., Ziarati, K., & Hassanizadeh, B. (2012). A multi-objective artificial

bee colony algorithm. Swarm and Evolutionary Computation, 2, 39-52. Andersson, J. (2000). A survey of multiobjective optimization in engineering design. Department of Mechanical Engineering, Linkoping University, Linkoping, Sweden, Technical Report No: LiTH-IKP.

Angus, D., & Woodward, C. (2009). Multiple objective ant colony optimisation. Swarm

intelligence, 3(1), 69-85. Aydin, I., Karakose, M., & Akin, E. (2011). A multi-objective artificial immune algorithm for parameter optimization in support vector machine. Applied Soft Computing, 11(1), 120129.

Bandaru, S., Ng, A. H., & Deb, K. (2017). Data mining methods for knowledge discovery in multi-objective optimization: Part A-Survey. Expert Systems with Applications, 70, 139-159.

Berube, J. F., Gendreau, M., & Potvin, J. Y. (2009). An exact e-constraint method for bi-objective combinatorial optimization problems: Application to the Traveling Salesman Problem with Profits. European Journal of Operational Research, 194(1), 39-50.

Cavaliere, P., Perrone, A., & Silvello, A. (2016). Steel nitriding optimization through multi-objective and FEM analysis. Journal of Computational Design and Engineering, 3(1), 71-90.

Coello, C. A. C., & Lechuga, M. S. (2002). MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the congress on evolutionary computation (CEC'2002), Honolulu, HI, Vol. 1, (pp. 1051-1056). Coello, C. A. C., Van Veldhuizen, D. A., & Lamont, G. B. (2002). Evolutionary algorithms for

solving multi-objective problems (Vol. 242). New York: Kluwer Academic. Coello, C. C., Pulido, G. T., & Montes, E. M. (2005). Current and future research trends in

evolutionary multiobjective optimization. In Information processing with evolutionary algorithms (pp. 213-231). Springer London. Corne, D. W., Jerram, N. R., Knowles, J. D., & Oates, M. J. (2001, July). PESA-II: Region-based selection in evolutionary multiobjective optimization. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (pp. 283-290). Morgan Kaufmann Publishers Inc..

Deb, K. (2001). Multi-objective optimization using evolutionary algorithms (Vol. 16). John Wiley & Sons.

Deb, K., Agrawal, S., Pratab, A., & Meyarivan, T. (2002). A fast and elitist multiobjective

genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, Vol. 6, 182-197.

Ehrgott, M., & Gandibleux, X. (2002). Multiobjective combinatorial optimization—theory, methodology, and applications. In Multiple criteria optimization: State of the art annotated bibliographic surveys (pp. 369-444). Springer US.

Gadhvi, B., Savsani, V., & Patel, V. (2016). Multi-Objective Optimization of Vehicle Passive Suspension System Using NSGA-II, SPEA2 and PESA-II. Procedia Technology, 23, 361-368.

Goicoechea, A., Hansen, D. R., & Duckstein, L. (1982). Multiobjective Decision Analysis with Engineering and Business Applications. Wiley.

Gong, M., Jiao, L., Du, H., & Bo, L. (2008). Multiobjective immune algorithm with

nondominated neighbor-based selection. Evolutionary Computation, 16(2), 225-255.

Horn, J., Nafpliotis, N., & Goldberg, D. E. (1994). A niched Pareto genetic algorithm for

multiobjective optimization. In Proceedings of the IEEE conference on evolutionary computation, IEEE world congress on computational intelligence, Piscataway, USA, (pp. 82-87).

Jamuna, K., & Swarup, K. S. (2012). Multi-objective biogeography based optimization for optimal PMU placement. Applied Soft Computing, 12(5), 1503-1510.

Khalili-Damghani, K., & Amiri, M. (2012). Solving binary-state multi-objective reliability

redundancy allocation series-parallel problem using efficient epsilon-constraint, multistart partial bound enumeration algorithm, and DEA. Reliability Engineering & System Safety, 103, 35-44.

Knowles, J., & Corne, D. (1999). The Pareto archived evolution strategy: A new baseline

algorithm for Pareto multiobjective optimisation. In Evolutionary Computation, 1999. CEC 99. Proceedings of the 1999 Congress on (Vol. 1, pp. 98-105). IEEE.

Krishnanand, K. R., Panigrahi, B. K., Rout, P. K., & Mohapatra, A. (2011). Application of multi-objective teaching-learning-based algorithm to an economic load dispatch problem with incommensurable objectives. In Swarm, Evolutionary, and Memetic Computing (pp. 697705). Springer Berlin Heidelberg.

Laumanns, M., Thiele, L., & Zitzler, E. (2006). An efficient, adaptive parameter variation

scheme for metaheuristics based on the epsilon-constraint method. European Journal of Operational Research, 169(3), 932-942.

Li, H., & Zhang, Q. (2009). Multiobjective optimization problems with complicated Pareto sets, MOEA/D

and NSGA-II. Evolutionary Computation, IEEE Transactions on, 13(2), 284-302.

Luna, F., Durillo, J. J., Nebro, A. J., & Alba, E. (2010). Evolutionary algorithms for solving the

automatic cell planning problem: a survey. Engineering Optimization, 42(7), 671-690.

Marler, R. T., & Arora, J. S. (2004). Survey of multi-objective optimization methods for

engineering. Structural and multidisciplinary optimization, 26(6), 369-395.

Miettinen, K. (2012). Nonlinear multiobjective optimization (Vol. 12). Springer Science & Business Media.

Mondal, S., Bhattacharya, A., & nee Dey, S. H. (2013). Multi-objective economic emission load dispatch solution using gravitational search algorithm and considering wind power penetration. International Journal of Electrical Power & Energy Systems, 44(1), 282-292.

Moslehi, G., & Mahnam, M. (2011). A Pareto approach to multi-objective flexible job-shop scheduling problem using particle swarm optimization and local search. International Journal of Production Economics, 129(1), 14-22.

Mostaghim, S., & Teich, J. (2004, June). Covering pareto-optimal fronts by subswarms in multi-objective particle swarm optimization. In Evolutionary Computation, 2004. CEC2004. Congress on (Vol. 2, pp. 1404-1411). IEEE.

Nikoofard, A. H., Hajimirsadeghi, H., Rahimi-Kian, A., & Lucas, C. (2012). Multiobjective invasive weed optimization: Application to analysis of Pareto improvement models in electricity markets. Applied Soft Computing, 12(1), 100-112.

Omkar, S. N., Senthilnath, J., Khandelwal, R., Naik, G. N., & Gopalakrishnan, S. (2011). Artificial Bee Colony (ABC) for multi-objective design optimization of composite structures. Applied Soft Computing, 11(1), 489-499.

Patel, V. K., & Savsani, V. J. (2014b). A multi-objective improved teaching-learning based optimization algorithm (MO-ITLBO). Information Sciences.

Patel, V. K., & Savsani, V. J. (2015). Heat transfer search (HTS): a novel optimization algorithm. Information Sciences, 324, 217-246.

Patel, V., & Savsani, V. (2014a). Optimization of a plate-fin heat exchanger design through an improved multi-objective teaching-learning based optimization (MO-ITLBO) algorithm. Chemical Engineering Research and Design, 92(11), 2371-2382.

Qu, X., Liu, G., Duan, S., & Yang, J. (2016). Multi-objective robust optimization method for the modified epoxy resin sheet molding compounds of the impeller. Journal of Computational Design and Engineering, 3(3), 179-190.

Rao, S. S. (2009). Engineering optimization: theory and practice. John Wiley & Sons.

Reyes-Sierra, M., & Coello, C. C. (2006). Multi-objective particle swarm optimizers: A survey of the state-of-the-art. International journal of computational intelligence research, 2(3), 287-308.

Sadatsakkak, S. A., Ahmadi, M. H., & Ahmadi, M. A. (2015). Optimization performance and thermodynamic analysis of an irreversible nano scale Brayton cycle operating with Maxwell-Boltzmann gas. Energy Conversion and Management, 101, 592-605.

Sadollah, A., Eskandar, H., & Kim, J. H. (2015). Water cycle algorithm for solving constrained multi-objective optimization problems. Applied Soft Computing, 27, 279-298.

Schaffer, J. D. (1985). Some experiments in machine learning using vector evaluated genetic algorithms. Vanderbilt Univ., Nashville, TN (USA).

Srinivas, N., & Deb, K. (1994). Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3), 221-248.

Tan, K. C., Goh, C. K., Mamun, A. A., & Ei, E. Z. (2008). An evolutionary artificial immune system for multi-objective optimization. European Journal of Operational Research, 187(2), 371-392.

Wang, Y., & Yang, Y. (2009). Particle swarm optimization with preference order ranking for multi-objective optimization. Information Sciences, 179(12), 1944-1959.

Yagmahan, B., & Yenisey, M. M. (2008). Ant colony optimization for multi-objective flow shop scheduling problem. Computers & Industrial Engineering, 54(3), 411-420.

Yang, X. S. (2011). Bat algorithm for multi-objective optimisation. International Journal of Bio-Inspired Computation, 3(5), 267-274.

Yang, X. S. (2013). Multiobjective firefly algorithm for continuous optimization. Engineering with Computers, 29(2), 175-184.

Yang, X. S., & Deb, S. (2013). Multiobjective cuckoo search for design optimization. Computers & Operations Research, 40(6), 1616-1624.

Zhang, H., Zhu, Y., Zou, W., & Yan, X. (2012). A hybrid multi-objective artificial bee colony algorithm for burdening optimization of copper strip production. Applied Mathematical Modelling, 36(6), 2578-2591.

Zhang, Q. F., Zhou A.M., Zhao, S. Z., Suganthan P. N., Liu W., Tiwari S. (2009).

Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical Report CES-487, University of Essex, Nanyang Technological University, and Clemson University.

Zhang, Q., & Li, H. (2007). MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on evolutionary computation, 11(6), 712-731.

Zhou, A., Qu, B. Y., Li, H., Zhao, S. Z., Suganthan, P. N., & Zhang, Q. (2011). Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation, 1(1), 32-49.

Zitzler, E., Deb, K., & Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: empirical results. Evolutionary Computation, 8(2), 173-195.

Zitzler, E. and Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case

study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4): 257-271.

Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. In Proceedings of EUROGEN 2001 - evolutionary methods for design, optimisation and control with applications to industrial problems.

Table 1 Method to construct Payoff-Table

Single objective optimization | f1 | f2 | Nadir point component
f1 → min | 25 | 100 | f2NP = 100
f2 → min | 90 | 15 | f1NP = 90
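The payoff-table construction of Table 1 can be sketched as follows: each objective is minimized individually, the full objective vectors of those minimizers form the table rows, and the Nadir point is taken as the column-wise maximum.

```python
import numpy as np

def payoff_and_nadir(F):
    """F is an (n_solutions, n_objectives) array of objective values.
    Row i of the payoff table holds the objective vector of the solution
    that minimizes objective i; the Nadir point is the column-wise maximum
    of the payoff table (as in Table 1)."""
    F = np.asarray(F, dtype=float)
    payoff = F[np.argmin(F, axis=0)]   # one row per individually minimized objective
    nadir = payoff.max(axis=0)
    return payoff, nadir
```

Applied to the two solutions of Table 1, (25, 100) and (90, 15), this yields the Nadir point (90, 100), which the ε-constraint subproblems then use to bound the ε levels.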

Table 2: Results (GD) for the benchmark multi-objective problems

(Average GD over 25 runs for NSGA-II, SPEA, VEGA, MODE, DEMO, MO-Bees, MOFA, MOCS, and ε-HTS on the SCH, ZDT1, ZDT2, ZDT3, and LZ functions; the individual table entries are not recoverable from the extracted text.)

Table 3 Results (A) for the multi-objective benchmark problems

ZDT1 ZDT2 ZDT3

NSGA-II(RC) 0.3903 0.4307 0.7385

NSGA-II(BC) 0.4632 0.4351 0.5756

SPEA 0.7301 0.6781 0.6657

PAES 1.2297 1.1659 0.78992

ε-HTS 0.3359 0.4539 1.2741

Table 4 Result (S) for the four-bar truss problem

Algorithms S

Mean SD

NSGA-II 2.3635 0.2551

MOPSO 2.5303 0.2275

Micro-GA 8.2742 16.8311

PAES 3.2314 5.9555

MOWCA 2.5816 0.0298

ε-HTS 1.1053 0.2352

Table 5: Result (extreme Pareto solutions) for the gear train problem

Algorithms Objective function

NSGA-II f1 → min 1.83e-8 37

f2 → min 5.01e-1 13

MOWCA f1 → min 4.5e-9 43

f2 → min 7.32e-1 12

ε-HTS f1 → min 9.92e-10 47

f2 → min 5.01e-1 13

Table 6 Result (A) for the disc brake problem

Algorithm A

Mean SD

NSGA-II 0.79717 0.06608

pae-ODEMO 0.8401 0.20085

MOWCA 0.46041 0.10961

ε-HTS 0.6991 0.09257

Table 7 Result (S) for speed reducer problem

Algorithms S

Mean SD

NSGA-II 2.765 3.534

Micro-GA 47.80 32.80

PAES 16.20 4.268

MOWCA 16.68 2.697

ε-HTS 14.38 1.864

Table 8 Results (A) for welded beam problem

Table 9 Result (extreme Pareto solutions) for spring problem

Algorithms f1 f2

NSGA-II f1 → min 2.690 187053

f2 → min 24.189 61949

MOWCA f1 → min 2.668 188448

f2 → min 26.93 58752

ε-HTS f1 → min 2.8241 183679.6

f2 → min 28.0502 56833.06

Nadir point = (f1NP, f2NP) = (90, 100)

Figure 1 Placement of the Nadir point

Figure 2 Pareto front for SCH function

Figure 3 Pareto front for ZDT1 function

Figure 4 Pareto front for ZDT2 function

Figure 5 Pareto front for ZDT3 function

Figure 6 Pareto front for LZ function

Figure 7 Four bar truss problem

Figure 8 Pareto front for the four-bar truss problem

Figure 9 Gear train

Figure 10 Pareto front for gear train problem

Figure 11 Exploded view of the multi-plate disc brake

Figure 12 Pareto front for disc brake problem

Figure 13 Speed reducer

Figure 14 Pareto front for speed reducer problem

Figure 15 Welded beam

Figure 16 Pareto front for welded beam problem

Figure 17 Schematic view of the spring

Figure 18 Pareto front for spring problem

Highlights:

A novel multi-objective optimization (MOO) algorithm is proposed. The proposed algorithm is presented to obtain Pareto-optimal solutions. The multi-objective optimization algorithm is compared with other work in the literature. The performance of the proposed algorithm is tested on MOO benchmark and engineering design problems.