Author's Accepted Manuscript

Data-mining modeling for the prediction of wear on forming-taps in the threading of steel components

JOURNAL OF COMPUTATIONAL DESIGN AND ENGINEERING

Andres Bustillo, Luis N. López de Lacalle, Asier Fernández-Valdivielso, Pedro Santos

www.elsevier.com/locate/jcde

PII: S2288-4300(16)30030-6

DOI: http://dx.doi.org/10.1016/j.jcde.2016.06.002

Reference: JCDE59

To appear in: Journal of Computational Design and Engineering

Received date: 4 March 2016
Revised date: 16 June 2016
Accepted date: 26 June 2016

Cite this article as: Andres Bustillo, Luis N. López de Lacalle, Asier Fernández-Valdivielso and Pedro Santos, Data-mining modeling for the prediction of wear on forming-taps in the threading of steel components, Journal of Computational Design and Engineering, http://dx.doi.org/10.1016/j.jcde.2016.06.002

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Data-mining modeling for the prediction of wear on forming-taps in the threading of steel components

Bustillo, Andres a,*; Lopez de Lacalle, Luis N. b; Fernandez-Valdivielso, Asier b; Santos, Pedro a

a Department of Civil Engineering, University of Burgos, Burgos, Spain.

b Department of Mechanical Engineering, University of the Basque Country UPV/EHU, Bilbao, Spain.

* Corresponding author. Telephone: +34 947258988; fax: +34 947258910; e-mail: abustillo@ubu.es

Abstract

An experimental approach is presented for the measurement of wear that is common in the threading of cold-forged steel. In this work, the first objective is to measure wear on various types of roll taps manufactured for tapping holes in microalloyed HR45 steel. Different geometries and levels of wear are tested and measured. Taking their geometry as the critical factor, the types of forming tap with the least wear and the best performance are identified. Abrasive wear was observed on the forming lobes. A higher number of lobes in the chamfer zone and around the nominal diameter meant a more uniform load distribution and a more gradual forming process. A second objective is to identify the most accurate data-mining technique for the prediction of form-tap wear. Different data-mining techniques are tested to select the most accurate one: from standard versions such as Multilayer Perceptrons, Support Vector Machines and Regression Trees to the most recent ones such as Rotation Forest ensembles and Iterated Bagging ensembles. The best results were obtained with ensembles of Rotation Forest with unpruned Regression Trees as base regressors, which reduced the RMS error of the best-tested baseline technique for the lower length output by 33%, and with Additive Regression with unpruned M5P as base regressors, which reduced the RMS errors of the linear fit for the upper and total lengths by 25% and 39%, respectively. However, the lower length was statistically more difficult to model in Additive Regression than in Rotation Forest. Rotation Forest with unpruned Regression Trees as base regressors therefore appeared to be the most suitable regressor for the modeling of this industrial problem.

Keywords

Roll-tap wear; roll taps; threading; forming taps; ensembles; Rotation Forest; Regression Trees

1. INTRODUCTION

There are two basic technologies for manufacturing internal threads: form tapping (using roll/form taps) and cut tapping (using cut taps). The first process is chipless, because the thread is formed by a cold-working process. Hence, stronger threads, particularly in materials susceptible to strain hardening, good thread calibration and a longer tool life are achieved. Form tapping is studied in the present work, applied in this case to a cold-forged piece, in which the holes were punched in a cold-forging process. In the case of form tapping, the thread is formed by deformation of the raw material in a cold-working process [Henderer et al. 1974]. This process causes an imperfection at the minor diameter of the formed threads (thread peaks), referred to as a claw or a split crest, although these imperfections imply no reduction in strength [de Carvalho et al. 2012], [Ivanov et al. 1997]. Claw shapes depend on the hole diameter before threading [Agapiou et al. 1994]. Form tapping can be performed on ductile steels, non-ferrous alloys [Chandra et al. 1975] and tempered steels [Chowdhary et al. 2003].

Stéphan [Stéphan et al. 2012] maintained an acceptable forming torque and deep enough threads to avoid stripping problems by optimization of the initial hole diameter. G. Fromentin studied the 3D plastic flow in form tapping, measuring material displacement [Fromentin et al. 2005] and F. Stéphan developed a 3D finite element model for form tapping with the ABAQUS 6.5 software program [Stéphan et al. 2009].

The prediction of tap wear involves three degradation phenomena: adhesive, abrasive and erosive wear. Adhesive wear is caused by the transfer of material from one surface to the other. Abrasive wear is caused by material removal from a solid surface, due to the sliding effect of hard particles or roughness peaks against the other contact surface. Finally, erosive wear is material loss from a solid surface, due to the action of a fluid containing solid particles.

Simulations were focused on external thread manufacturing by deformation [Zunkler et al. 1985]. Domblesky worked on the simulation of thread rolling with good accuracy [Domblesky et al. 1999] and then on the optimization of process parameters [Domblesky et al. 2002]. The most direct approach involves a macroscopic description of worn surfaces and empirical modeling of the wear based on the process parameters [Fernández et al., 2015].

Data-mining represents a collection of computational techniques for the analysis of very complex phenomena. The most common data-mining techniques applied to manufacturing problems include Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), k-Nearest Neighbors Regressors, and Regression Trees. A combination of two or more models, known as an ensemble, combines the prediction capabilities of the individual models. Ensembles have demonstrated their superiority over single models in many applications. For instance, Yu [Yu, 2009] used ensembles to identify out-of-control signals in multivariate processes. Liao et al. [Liao, 2008] and Bustillo et al. [Bustillo 2014] used ensembles for grinding wheel and multitooth tool condition monitoring, respectively, while Cho [Cho, 2010] and Binsaeid [Binsaeid, 2009] used ensembles for end-milling condition monitoring and simultaneous detection of transient and gradual abnormalities in end milling. Ensembles have the advantage of circumventing the fine tuning of other artificial intelligence models such as ANNs [Bustillo 2011a]. The most common types of ensemble techniques are Bagging, Boosting and Random Subspaces. Finally, a recent ensemble technique, Rotation Forest [Rodriguez 2006], has demonstrated a capability to model different industrial problems [Santos et al. 2015]. All these techniques will be presented in detail in Section 3. To the best of the authors' knowledge, there are no other investigations that have modeled form-tapping process outputs with data-mining techniques. One novel robust approach for root-cause identification in machining processes, using a hybrid learning algorithm and engineering-driven rules, was developed by [Shichang 2012]. In contrast, [Mazahery, 2016] proposed the use of ANNs for the tribological behavior modeling of composites, adjusting the weights and biases in the network during the training stage to minimize modeling error. In relation to aluminum nanocomposite processing, [Mazahery, 2012] proposed the use of genetic algorithms to predict the mechanical properties and to optimize the process conditions, and [Shabani 2015] used adaptive neuro-fuzzy inference systems combined with the particle swarm optimization method for process optimization.

The novelty of this paper resides in its combination of an experimental analysis and a data-mining model to extract as much information as possible on tool wear in form tapping, an industrial process in high demand. The Multilayer Perceptron, the most widely used standard artificial intelligence technique in the literature, was used as the baseline against which the improvements of this new approach were measured [Bustillo 2011a]. This paper is structured as follows: after this introduction, Section 2 presents the fundamentals of form tapping and the experimental set-up used to obtain real data for this industrial process; Section 3 introduces the data-mining techniques that will be used to model these industrial data; Section 4 presents and discusses the experimental results of the measurements and of the modeling using the data-mining techniques; finally, Section 5 sums up the main conclusions drawn from this research and future lines of work.

2. FORM TAPPING FUNDAMENTALS AND EXPERIMENTAL PROCEDURE

2.1. Form tapping

Tap geometry is the most important parameter for a reliable process. The standard tap characteristics are chamfer length, the number of pitches in the chamfer, tap diameter and the number of lobes around a tap section. Figure 1 shows the geometry and features of a typical forming tap. All pictures showing taps are oriented with the tap tip to the left. As shown in Fig. 1, each rounded corner of a tap section is referred to as a lobe, where deformation or friction occurs against the inner surface of the previous hole. Hence, the tap section is defined by a curved side polygon that may typically have three, five, or six corners, which are referred to as lobes.

Figure 1. Terminology and geometry of roll taps [Fernandez et al., 2015].

Three types of lobes are distinguished in each tap: i) incremental forming lobes situated in the chamfer area; ii) calibration forming lobes around the nominal diameter; and, finally, iii) guiding lobes leading up to the tap shank. The shapes of the calibration and the guiding lobes are not so very different, so the precise frontier between them is a function of tap wear and is not exact, as will be shown later on. With regard to the application of data-mining techniques, the variation in the length on the rake side (upper length) and in the length on the relief side (lower length) was considered for each lobe, as well as the variation in the total length of each lobe.

Two types of wear are observed in forming taps: abrasive wear and adhesive wear. Abrasive wear is mainly observed in the lobes that remain in contact with the material during the forming process. If they are in the chamfer region of the forming tap, the lobes become increasingly concave and spread out on the side at variable depths, as deep as the part of the thread in contact with the tool. In the case of a lobe located in the guiding zone, at the nominal diameter, contact with the material continues up until the moment of elastic recovery, and the height of the worn areas on the flanks is at a maximum, since the thread is totally formed during the moment of contact.

Adhesive wear is defined as follows. Coating wear always begins at the front of the lobe. Initially, delamination is likely and then, as the wear becomes more progressive, it will probably be associated with abrasive friction. Steel may be deposited on the frontal part of the lobe, by adhesion on the area of the coating with abrasive wear. The material is deposited on the front side of the lobe and it extends to cover the lobe in the opposite direction to the abrasive wear of the back lobe. The adhesion is therefore described as runout.

2.2. Experimental set-up

The experimental objective was to test different types of roll taps with different geometries and levels of wear. The tapping conditions included external lubrication with a 15% emulsion at a pressure of 6 bars, applying the following process parameters: M10x1.5 form tap ISO dimensions, 1200 mm/min feed speed, 1.5 mm/rev feed rate, 800 rpm rotation speed, and 30 m/min forming speed.

The taps shared the following characteristics: a) base material, high speed steel with TiN coating, and b) 6GX tolerance. Six taps were tested for each of the 3 types of roll taps (18 forming taps in total). Type 1 had a 5-pitch chamfer with a hexagonal (60°) geometry without oil grooves. Type 2 had a 3-pitch chamfer with a pentagonal (72°) geometry with oil grooves. Type 3 had a 3-pitch chamfer with a pentagonal (72°) geometry without oil grooves. The number of pitches was directly proportional to the length of the chamfer. A high number of pitches in the chamfer area causes problems in blind holes with requirements for low clearance at the bottom of the hole. Each tap presented a different level of wear, due to previous working in a real workshop, after 1000, 2000, 3000, 4000 and 5000 threads, to be compared with a new unworn tap (Figure 2). Wear analysis and experimental results are exhaustively presented in [Fernandez et al., 2015], basically using a field microscope and a special jig to set each tool in the same coordinate measurement system, as is common practice in this kind of experimentation.

Figure 2. Tap profile and wear for a new tap and after 3000 and 5000 threads.

The thread length of a forming tap is usually manufactured to a higher tolerance than its cut-tap equivalent of the same metric size and diameter, due to the elasticity of the material, which means that the work piece will always contract after any plastic forming process. Consequently, the fresh thread is always slightly smaller than the form-tap profile. The operation has to be executed in one stroke, because it is almost impossible to repeat the form-tapping process manually after the first lamination, due to the strain hardening provoked by the first operation, which is not a problem in the case of cut taps. It is therefore necessary to manufacture the roll taps so that their upper tolerance limit is closest to the internal thread. For this reason, all the taps in the study were of 6GX tolerance, to ensure that tolerance was not a factor that could affect the results.

The work piece material was a microalloyed HR45 steel, categorized under the group of microalloyed (HSLA, High Strength Low Alloy) steels. HSLA steel has a fine ferrite-pearlite structure with a small addition (max. 0.15%) of a combination of Al, Ti, Nb, and V. According to data provided by an end-user, it has the following characteristics: yield strength σyp = 359 MPa, ultimate tensile strength σu = 473 MPa and final strain A = 34.3%. The small test parts were cold forged and a punch was used to open two holes with a diameter of 9.3 mm and a hardness of 223 HV, measured inside the holes. Cold forging always induces strain hardening, so this measurement is important for a correct interpretation of the results that are presented later on. The test part was a strut that attaches a motor to a vehicle chassis; cold forging ensures a highly productive process.

The tests were performed on a 3-axis CNC machine with a 14 kW spindle. All the tests were performed under both dry and lubricated conditions, so as to compare the different taps, forming two threads under each of the two conditions. The objective of using coolant was to avoid overheating in the thread/tap contact area, to increase lubrication, and to study tool behavior under industrial conditions. The dry condition, in contrast, was used because monitoring under it is more directly related to the state of tap damage.

Thus, tap wear measurements were taken on all the lobes in the chamfer zone to establish a baseline wear value for the graph: on the lobes that were slightly smaller than the nominal diameter and on successively wider ones. When the nominal diameter lobes started to lose their geometry, the successive ones continued to work and they also registered wear. It should be remembered that the forming lobes induce strain hardening and therefore the last lobe in the chamfer zone deforms the work piece material in its hardest state. Deformation not completed by this lobe must be completed by the successive ones.

In short, the procedure consisted of measuring wear after form tapping, under two different conditions, dry and with drilling fluid, with new taps and with taps after 1000, 2000, 3000, 4000 and 5000 previous threads formed under real working conditions in a company that used the same type of toolholder and emulsion coolant.

2.3. Dataset generation

Having measured tap wear in each experiment, 3 datasets for the data-mining models were generated: one for each output. Among the variables that influenced the wear process, the following were considered: type of forming tap, number of threads and lobe of the tap. Lobes are numbered starting from the first one at the tap tip, as previously explained. Tap wear was evaluated in terms of the variation from nominal values of 3 lengths for each lobe of the tap: lower length, upper length (always starting from the maximum outer diameter of the forming lobe) and total length, as previously presented (see Figure 1 in Section 2). Table 1 summarizes the variables under consideration, their variation range and whether they act as input or output for the data-mining model. Although most of these variables take a limited number of values due to the definition of the experiment, the data-mining model will consider the number of threads and the 3 outputs as continuous variables, because they can take continuous values under industrial conditions. Other variables, such as the type of forming tap and tap lobe, are considered categorical variables: the type of forming tap only takes 3 possible values and the lobe of the tap takes 25, see Table 1. Only the first 25 lobes are considered in the data-mining models, because they are the only ones common to the three forming taps, even though the 3 types of roll taps have a different number of lobes.

Table 1. Variables, units and ranges used to generate the dataset.

As previously explained, the smaller-diameter tap lobes showed no wear until the chamfer diameter was sufficiently wide to engage the material. The data set therefore contains many zeros that reflect no wear on those lobes. Of the 435 instances in each of the 3 data sets, 109, 123 and 131 instances, respectively, recorded a zero value for the output, see Table 1. These zero values mean that around 30% of the experiments show no wear on the measured lobe of the tap. A data-mining technique trained on this data set will therefore tend to prioritize zero values in order to increase its accuracy.

3. DATA-MINING TECHNIQUES

As explained in the introduction, the data-mining discipline provides algorithms to forecast the values of some output variables from a set of input variables. There are two types of prediction techniques: regression, if the studied variable can take continuous values, and classification, when the output can only vary between a limited set of values. In our experimental work, a regression analysis is formed for each output variable, y. In this type of formulation, the forecast of the model is based on a function, f, of the so-called attributes (input variables), x. This general expression for regression is shown in Equation 1, considering a case with m attributes.

y_estimated = f(x_1, x_2, ..., x_m)          (Eq. 1)

The most precise technique for the problem under study was chosen from among the state-of-the-art regression algorithms. Root Mean Squared Error (RMSE) was used to compare the regressors. This metric measures the deviation of the predicted class value from its real value. RMSE is given by the square root of the mean of the squared deviations, as Equation 2 shows, where each of the n instances of the data set is denoted by (x_i, y_i):

RMSE = sqrt( (1/n) * sum_{i=1..n} ( f(x_i) - y_i )^2 )          (Eq. 2)
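As a concrete illustration of Equation 2, the RMSE used to compare the regressors can be computed in a few lines (a generic Python sketch, independent of the software actually used in the study):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Eq. 2: square root of the mean of the squared deviations
    # between the predicted values f(x_i) and the real values y_i.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

A lower RMSE indicates a more accurate regressor; the squaring penalizes large deviations more heavily than the mean absolute error would.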

The most popular regression techniques were used in the tests. These algorithms can be divided into four families:

■ Decision-tree regressors: consideration of this type of technique is necessary to understand the second family, the ensemble regressors.

■ Ensemble techniques [Kuncheva 2001]: the most frequently used algorithms were tested, as they have proved their suitability in several types of industrial tasks [Yu 2009, Liao 2008, Cho 2010, Binsaeid 2009, Bustillo 2011b, Bustillo 2011a, Bustillo 2014, Santos 2015].

■ Function-based regressors: ANNs [Beale1990] and Support Vector Regression (SVR) [Boser1992] were used. The choice of SVR and ANNs was due to their popularity. The success of ANNs in modeling several industrial problems should be noted [Tosun2002, Azadeh2008, Palanisamy2008, Quintana 2012], while SVR is a widely used regression technique [Wu2008].

■ Instance-based regressors: this approach has a completely different formulation from the other families, as there is no analytic model to express the class value as a function of the attributes. Alternatively, the forecast value is obtained from the similarity of an instance to some other instances in the training dataset. The k-Nearest Neighbors Regressor [Vijayakumar 2006] was used in the tests as representative of this type of prediction.

3.1. Decision-tree-based regressors

Following a strategy known as divide and conquer, a decision-tree-based regressor performs recursive divisions of the data on a single attribute. Each division is performed by maximizing the differences observed between the class values [Quinlan 1992], while several statistical procedures may be used to determine the most discriminant division.

There are two types of regressors based on decision trees, depending on what they store in their leaves. While a linear regression model is saved in each leaf of the model trees, an average of the values of a group of instances is used in the regression trees, minimizing the intrasubset variation in the class values [Khoshgoftaar 2002]. The Reduced-Error Pruning Tree (REPTree) [Witten 2005] and M5P [Quinlan1992] were chosen as the main representative techniques for the regression and the model trees, respectively.

Both regressors have two possible configurations: pruned and unpruned trees. For a single tree-based regressor, and for some types of ensembles of regression trees, it is more convenient to use pruned trees to avoid overfitting. However, certain types of regression-tree ensembles can benefit from the use of unpruned trees [Ho 1998]. Ensembles of regression trees are described in the next section.
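The effect of pruning can be sketched on synthetic data. The study used WEKA's REPTree and M5P; scikit-learn has no direct equivalent, so this sketch uses a generic regression tree with cost-complexity pruning as a stand-in for reduced-error pruning (an assumption for illustration, not the same algorithm):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
# Synthetic stand-in for the tap data set: 3 inputs, 1 continuous output.
X = rng.uniform(0, 5, size=(200, 3))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 200)

# Unpruned tree: grows until the leaves are (nearly) pure, so it can overfit.
unpruned = DecisionTreeRegressor(random_state=0).fit(X, y)

# "Pruned" tree: cost-complexity pruning caps the tree size, playing the
# role of REPTree's reduced-error pruning in this sketch.
pruned = DecisionTreeRegressor(ccp_alpha=0.01, random_state=0).fit(X, y)
```

The unpruned tree reproduces the training data almost exactly, while the pruned tree is much smaller and usually generalizes better on noisy data.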

3.2. Ensemble Regressors

An ensemble regressor is a prediction method that combines the forecasts of several models: the base regressors [Dietterich 2002]. In this work, the three most popular types of ensembles were used: Bagging [Breiman 1996], Boosting [Freund 1996] and Random Subspaces [Ho 1998]. In these three regression techniques, the same learning algorithm is used to train each base regressor, but different training sets are formed from the original. In all cases, we used regression trees as base regressors.

In Bagging, the training sets are formed by sampling with replacement (i.e., given a training set, a particular instance may not appear in it or could appear several times) [Breiman 1996]. The base regressors are independent, as the process followed to train each base regressor takes no account of the other regressors. One variant of Bagging is called Iterated Bagging. This technique combines Bagging ensembles: the first one is formed by conventional Bagging, but the differences between the real and the predicted values (called residuals) are used in the training process of the others [Breiman 2001].
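A minimal Bagging sketch follows (plain Bagging only; Iterated Bagging has no off-the-shelf scikit-learn implementation, so it is omitted here, and the data are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.RandomState(1)
X = rng.uniform(size=(300, 3))
y = X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.05, 300)

# Each base regression tree is trained on a bootstrap sample of the data
# (sampling with replacement); the ensemble averages their predictions.
bag = BaggingRegressor(n_estimators=25, random_state=1).fit(X, y)
pred = bag.predict(X[:5])
```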

On the contrary, the base regressors in the case of Boosting are sequentially trained, each one influenced by the previous one [Freund 1995]. A weight is added to the instances, in order to change the training of the base regressors. The errors of previous regressors are used to reweight the instances, so that the next regressor is trained with the instances that were previously wrongly forecast. The final prediction of Boosting also takes the accuracy of each base regressor into account. One common variation of Boosting for regression is AdaBoost.R2 [Drucker 1997]. In this case, the error of each base regressor is calculated using a loss function, L(i). Three types of loss functions were used in the experimentation.

If l(i) denotes the error on instance i and Den its maximum value over the training set, then the linear loss, L_l, the square loss, L_s, and the exponential loss, L_e, are given by the expressions in Equation 3:

L_l(i) = l(i) / Den
L_s(i) = [ l(i) / Den ]^2          (Eq. 3)
L_e(i) = 1 - exp( -l(i) / Den )
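scikit-learn's AdaBoostRegressor implements Drucker's AdaBoost.R2, and its `loss` parameter selects among the three loss functions of Equation 3; a brief sketch on placeholder data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(2)
X = rng.uniform(size=(300, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(0, 0.05, 300)

# One AdaBoost.R2 ensemble per loss function of Eq. 3.
models = {
    loss: AdaBoostRegressor(loss=loss, n_estimators=20, random_state=2).fit(X, y)
    for loss in ("linear", "square", "exponential")
}
```

The choice of loss changes how strongly large residuals influence the instance reweighting between rounds.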

Random Subspaces uses subsets of fewer dimensions than the original data set to train the base regressors. This methodology has two goals. On the one hand, to avoid the well-known problem of the curse of dimensionality (many regressors decrease their performance when the data sets have a large number of attributes), and, on the other, to improve prediction accuracy, by choosing low correlated base regressors.
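The Random Subspace method can be reproduced with a Bagging-style ensemble that samples attributes instead of instances (a sketch; the 6 attributes are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.RandomState(3)
X = rng.uniform(size=(200, 6))
y = X[:, 0] - X[:, 3] + rng.normal(0, 0.05, 200)

# Every base tree sees all instances (bootstrap=False) but only a random
# half of the attributes (max_features=0.5): the Random Subspace method.
subspaces = BaggingRegressor(n_estimators=15, max_features=0.5,
                             bootstrap=False, random_state=3).fit(X, y)
```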

Rotation Forest [Rodriguez 2006] is the most recent ensemble tested in this research. It trains each base regressor, in this case REPTrees or M5P Model Trees, by grouping their attributes into subsets (e.g., subsets of three attributes are usually taken). Then Principal Component Analysis is computed for each group using a subsample from the training set. The whole dataset is transformed according to these projections and can then be used to train a base regressor.
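Rotation Forest is not part of the mainstream Python libraries, so the steps above are easiest to see in a from-scratch sketch: attributes are grouped, PCA is fitted per group on a subsample, the whole data set is rotated, and a tree is trained on the rotated data (a simplified illustration, not the authors' WEKA configuration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

def fit_rotation_forest(X, y, n_trees=10, group_size=3, seed=0):
    # Minimal Rotation Forest: per tree, shuffle and group the attributes,
    # fit a PCA per group on a 75% subsample, rotate the whole data set,
    # then train one regression tree on the rotated attributes.
    rng = np.random.RandomState(seed)
    n, m = X.shape
    forest = []
    for _ in range(n_trees):
        cols = rng.permutation(m)
        groups = [cols[i:i + group_size] for i in range(0, m, group_size)]
        sub = rng.choice(n, size=int(0.75 * n), replace=False)
        rotations = [(g, PCA().fit(X[np.ix_(sub, g)])) for g in groups]
        Xrot = np.hstack([p.transform(X[:, g]) for g, p in rotations])
        tree = DecisionTreeRegressor(random_state=seed).fit(Xrot, y)
        forest.append((rotations, tree))
    return forest

def predict_rotation_forest(forest, X):
    # Apply each tree's stored rotation, then average the tree predictions.
    preds = [tree.predict(np.hstack([p.transform(X[:, g]) for g, p in rots]))
             for rots, tree in forest]
    return np.mean(preds, axis=0)

# Tiny demo on placeholder data.
rng = np.random.RandomState(0)
X_demo = rng.uniform(size=(120, 6))
y_demo = X_demo[:, 0] + X_demo[:, 1]
forest = fit_rotation_forest(X_demo, y_demo, n_trees=5)
pred = predict_rotation_forest(forest, X_demo)
```

The per-group PCA rotation diversifies the base trees while keeping all the information of the original attributes, which is the key idea behind the ensemble.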

Five ensemble regressors were used in the experimentation: Bagging, Iterated Bagging, Random Subspaces, Additive Regression (a hybrid between the Bagging and Boosting ensembles [Friedman 2002]) and Rotation Forest.

3.3. Artificial Neural Networks

Inspired by biological processes, ANNs are based on the information transmission system used by neurons in the brain [Dayhoff 2001]. In this research, the Multilayer Perceptron (MLP) [Dayhoff 2001] was used, because of its popularity as an ANN variant that can approximate functions of great complexity [Hornik 1989]. The structure of this model is formed of three layers [Delashmit 2005]: the first one holds the inputs (attributes), the second one is called the hidden layer, and the third one is the output layer. Equation 4 shows how the prediction, y_output, is obtained. The output of the hidden layer, y_hide, is used as the input of the output layer. W_1 and W_2 denote the weight matrices of the hidden and output layers, respectively, B_1 and B_2 their biases, and f_net and f_output their activation functions:

y_hide = f_net( W_1 * x + B_1 )
y_output = f_output( W_2 * y_hide + B_2 )          (Eq. 4)

These activation functions vary with the structure. In our experimental work, the most common structure was used, where the activation function for the hidden layer is the identity and the activation function for the output layer is tansig [Delashmit 2005].
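Equation 4 with the activation functions just described reduces to a few lines of linear algebra (a sketch with arbitrary weights; a real MLP learns W_1, W_2, B_1 and B_2 during training):

```python
import numpy as np

def mlp_forward(x, W1, B1, W2, B2):
    # Eq. 4: the hidden activation f_net is the identity and the output
    # activation f_output is tansig (tanh), as stated in the text.
    y_hide = W1 @ x + B1
    y_output = np.tanh(W2 @ y_hide + B2)
    return y_output
```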

3.4. Support Vector Regressor

This technique makes its prediction using a function, f(x), with a set of parameters that are calculated during the training stage, in order to minimize the RMSE [Smola 2004]. The formulation of this function for a linear case is given by Equation 5. The prediction, f(x), from the input attributes, x, is obtained by using the terms w and b, where <w, x> is the inner product in the input space, X:

f(x) = <w, x> + b,  with w ∈ X and b ∈ R          (Eq. 5)

The desired function should fit the training examples as closely as possible, but without overfitting the data, in order to define a formulation that may be generalized to other instances. In other words, the desired function should show low prediction deviations, but be as flat as possible [Smola 2004]. Expressing these two objectives as an optimization problem, the target is for f(x) to deviate at most ε from the forecast outputs, y_i, for all the training data, and at the same time to minimize the norm of w. A certain degree of error will therefore be allowed in the forecasts, but it is limited by ε. In real data, solving the previously described optimization problem may be unfeasible or may lead to overfitting, if there are some instances with large deviations from the general trend. For that reason, Boser et al. [Boser 1992] redefined the optimization problem, allowing deviations larger than ε for some instances. Consequently, an extra term, C, was added to the formula, as shown in Equation 6. This parameter is a trade-off between the flatness and the deviations of the errors larger than ε.

minimize    (1/2) ||w||^2 + C * sum_{i=1..n} ( ξ_i + ξ_i* )

subject to  y_i - <w, x_i> - b <= ε + ξ_i
            <w, x_i> + b - y_i <= ε + ξ_i*          (Eq. 6)
            ξ_i, ξ_i* >= 0

The optimization problem presented in Equation 6 belongs to the convex type, and can be solved using the Lagrange method. Equation 7 shows the so-called dual problem that is associated with it:

maximize    -(1/2) * sum_{i,j=1..n} (α_i - α_i*)(α_j - α_j*) <x_i, x_j>
            - ε * sum_{i=1..n} (α_i + α_i*) + sum_{i=1..n} y_i (α_i - α_i*)

subject to  sum_{i=1..n} (α_i - α_i*) = 0          (Eq. 7)
            α_i, α_i* ∈ [0, C]

The formulation given in Equation 7 refers only to the linear case, but a generalization using a non-linear function is possible. In the general definition of the SVR, the inner products are not directly calculated in the original input feature space; the so-called kernel function, k(·,·), is used instead. This function fulfils a set of conditions known as Mercer's conditions [Cortes 1995]. Linear and radial basis functions, the two most frequently referenced kernels in the literature, were used in the experimentation.
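The roles of C, ε and the two kernels can be sketched with a standard SVR implementation (synthetic data; the parameter values are illustrative, not those used in the study):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + 0.5 * X[:, 1]

# C trades off flatness against deviations larger than epsilon (Eq. 6);
# the kernel replaces the inner product of Eq. 7 (Mercer's conditions).
linear_svr = SVR(kernel="linear", C=1.0, epsilon=0.01).fit(X, y)
rbf_svr = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X, y)
```

On this linear target the linear kernel recovers the trend almost exactly, while the radial basis kernel is the usual choice when the relationship is non-linear.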

3.5. K-Nearest Neighbor Regressor

In this data-mining technique, the predicted class value is the mean of the k most similar training instances, which were previously stored [Aha 1991]. Euclidean distance is the most commonly used function to measure the similarity between instances. In our experimental work, the number of neighbors was optimized using cross validation.
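Optimizing k by cross-validation, as described, can be sketched with a grid search over candidate neighbor counts (placeholder data and grid):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(5)
X = rng.uniform(size=(150, 3))
y = 2 * X[:, 0] + rng.normal(0, 0.05, 150)

# Cross-validated search for the number of neighbors k, scored by RMSE
# (negated, since scikit-learn maximizes the score).
search = GridSearchCV(KNeighborsRegressor(),
                      param_grid={"n_neighbors": [1, 3, 5, 7, 9]},
                      cv=5, scoring="neg_root_mean_squared_error").fit(X, y)
best_k = search.best_params_["n_neighbors"]
```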

4. RESULTS AND DISCUSSION

4.1. Experimental results

In this section, the results of the experiments on wear measurements are presented. First, a visual inspection was undertaken to determine whether the taps had suffered any damage such as lobe ruptures or excessive material adhesion. The lower length and the upper length of each lobe edge were measured, commencing in each case from the maximum outer diameter of the forming lobe, as shown in Figure 3. A magnified worn area of a lobe is shown on the right of Figure 3, measured with an optical microscope. All measurements were taken by positioning the taps on the same fixture, so as to avoid projection errors and to ensure sufficient repeatability. In this figure, the edge that engages the material is the rake edge, while the other length of the edge is the relief edge (both standard industrial terms in cutting operations).

Figure 3. Forming lobe wear: Left) tap rotation; Right) Wear on the relief (lower) and rake (upper) flanks.

As shown in Figure 3, the first part of the edge in the direction of rotation that engages the material is the rake edge, which is the part of the edge that suffers abrasion wear. Following the flank actions of the rake, the relief flank that is in contact with the material undergoes abrasion and adhesion wear, as its edge removes material. A typical "wave on sand" pattern caused by adhesion is shown in Figure 3. Total lobe wear is given by the sum of the wear on both flank lengths.

The wear on each lobe is illustrated in Figures 4 to 6. Here, the lobes are numbered starting from the first one at tap tip. One hypothesis would suggest that worsening wear on each lobe depends on the general wear of the roll tap, because wear on previous lobes would imply a greater strain on the successive ones, as is clearly explained below. All the wear patterns corresponded to abrasion on the rake flank and abrasion and slight adhesion on the relief flank. The limits of the chamfering zone are indicated on the figures.

Figure 4. Evolution of "Type 1" forming tap wear.

"Type 1" roll taps with previous wear of 1000, 3000 and 4000 threads showed a rising trend line, from non-existent wear on the first lobes to levels of about 0.1 mm. Furthermore, the higher the number of threads, the higher the tap wear. Wear was almost zero for the first 4 lobes of an almost new tap with "only" 1000 previous threads. It then increased to reach a constant value after the 10th lobe. A provisional conclusion was therefore reached that the first lobe of nominal diameter will not be the most highly damaged.

However, the trend lines of tap wear with 2000 and 5000 threads showed a non-linear evolution; wear was not uniform across neighboring lobes. The lobes with greater wear interacted more with the material, while the lobes with less wear may have been working less due to the eccentricity of the tap, which could explain this apparently random behavior.

Figure 5. Evolution of "Type 2" forming tap wear.

In contrast to the previous case, the results for "Type 2" roll taps (Figure 5) pointed to strong wear on the first lobes and a rapid decrease over the following ones to a constant value between the 7th and the 16th lobe. As shown in Figure 5, wear decreased to zero in all 5 cases (taps with 1000 to 5000 previous threads); it was highest for the highest level of previous threads (5000) and lowest for the first level (1000). The wear evolution was completely different from that of the "Type 1" taps, a difference explained by the increase in diameter between the pitches in the chamfer zone, which was greater than the increase over the subsequent threads; the first lobe that reached the nominal diameter therefore worked very intensely.

Figure 6. Evolution of "Type 3" forming tap wear.

For "Type 3" roll taps (Figure 6), the evolution of wear started from zero for the first two lobes, increased to a maximum and then decreased again. As shown in Figure 6, wear increased as the tap diameter increased. Once the final thread diameter had been reached, wear on the subsequent lobes decreased, due to reduced contact with the material, as the thread had just been formed. The lobes engaging with the material underwent lighter deformation that resulted in less wear (15th to 25th lobes).

The different tap wear behaviors can be attributed to the chamfer geometry of the taps (5-pitch chamfer for "Type 1" forming taps, 3-pitch chamfer for "Type 2" and "Type 3" forming taps) and the number of lobes in each section (hexagonal in "Type 1", and pentagonal in "Types 2 and 3"). The geometry of "Type 1" led to a more progressive deformation of the material and therefore implied less wear.

In the three roll taps under study, higher wear appeared in the chamfer lobes. When form tapping was at the nominal diameter, the guiding edges were in contact with previously deformed material. For this reason, wear decreased gradually at these edges until it reached zero wear values. Ten guiding lobes in the three roll taps were necessary to reach zero wear level.

4.2. Data-mining prediction results

Tap forming is a complex process, in which a direct relationship between the process inputs and the output (tap wear) is not easily established. This conclusion was outlined in the previous analysis of the experimental tests and may also be drawn from a data-mining perspective, if scatter plots of the inputs and outputs are drawn. Figure 7 presents the scatter plots of each input (type of forming tap, lobe of the tap and number of threads) against one of the outputs (upper length). These plots show that there is no obvious relation between the input variables and the output. Each symbol (blue square, red circle and yellow cross) represents a different type of forming tap (Types 1 to 3, respectively). The same conclusion can be drawn for the other two outputs: lower length and total length. A suitable method is therefore needed to model form-tap wear from the process input variables.

Figure 7. Scatter plots for each predictive variable and the upper length.

However, before the most suitable model of tap wear can be identified, it is necessary to establish a baseline or threshold that either confirms the adequate performance of the data-mining models or justifies rejecting their predictions due to low quality. The Root Mean Square Error (RMSE) of the predicted values with respect to the measured values was taken as the quality indicator. Two baseline approaches were considered: the first is a naive approach, which takes the mean value of each output as the most probable value under any condition; the second fits a linear model of the 3 inputs to each output. Table 2 shows the mean, maximum and minimum values of each of the 3 outputs in the dataset and the RMSE values calculated for the naive and the linear approach over the whole dataset. From an industrial point of view, any prediction model that reduces the RMSE by more than 20% will provide useful information on tap wear to the process engineer.
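The two baselines can be sketched in a few lines. The following is an illustrative sketch using scikit-learn rather than the Weka setup used in the paper, with synthetic data standing in for the experimental dataset (285 instances, 3 inputs); the coefficients and noise level are arbitrary assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(285, 3))                      # stand-in inputs: tap type, lobe, threads
y = X @ np.array([0.2, 0.5, 0.1]) + rng.normal(0, 0.05, 285)  # synthetic wear output

# Baseline 1: naive approach, predict the mean output everywhere
rmse_naive = np.sqrt(mean_squared_error(y, np.full_like(y, y.mean())))

# Baseline 2: linear fit of the 3 inputs to the output
lin = LinearRegression().fit(X, y)
rmse_linear = np.sqrt(mean_squared_error(y, lin.predict(X)))
```

By construction, the linear fit can never do worse than the naive mean on the data it was fitted to, which is why the paper treats it as the stricter of the two baselines.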

Table 2. Dataset variation of the outputs and RMSE for the 2 proposed baselines techniques.

As explained in Section 3, the main state-of-the-art regression techniques were tested, including regression trees (REPTrees and M5P trees), ensembles (Bagging, Iterated Bagging, Random Subspaces, Adaboost, Additive Regression and Rotation Forest), SVRs, ANNs (MLPs), and nearest-neighbor regression. Table 3 summarizes the abbreviations used to denote these techniques. Two further considerations on notation should be taken into account. Firstly, in the case of Adaboost.R2, the type of loss function is indicated by the suffixes "L", "S" and "E" (Linear, Square and Exponential, respectively). Secondly, whether the trees are pruned or unpruned is indicated in brackets (P or U).
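As an illustration of the L/S/E notation, scikit-learn's AdaBoostRegressor implements Drucker's Adaboost.R2 and exposes the same three loss functions. The sketch below uses synthetic stand-in data and unpruned regression trees as base regressors; it mirrors the notation of Table 3 but is not the exact Weka configuration used in the paper:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(285, 3))
y = X.sum(axis=1) + rng.normal(0, 0.05, 285)

# Adaboost.R2 with the three loss functions of Table 3 ("L", "S", "E"),
# each boosting 100 unpruned trees (the "(U)" variants)
models = {suffix: AdaBoostRegressor(DecisionTreeRegressor(),
                                    n_estimators=100, loss=loss,
                                    random_state=0).fit(X, y)
          for suffix, loss in [("L", "linear"),
                               ("S", "square"),
                               ("E", "exponential")]}
```

A pruned variant ("(P)") would simply constrain the base tree, e.g. via a maximum depth.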

A 10 x 10-fold cross-validation procedure [Cho, 2010] was followed to generalize the results of the data-mining techniques. In 10-fold cross-validation, the data are divided into 10 folds; nine folds are used to build a model and the remaining fold is used to evaluate it. The estimation obtained with cross-validation is the average of the 10 values obtained, one per fold. Cross-validation was repeated 10 times (that is, 10 x 10-fold cross-validation), averaging the results of each cross-validation in order to reduce the variance of the final estimation. In this way, the prediction of the model is the average of the predictions of 100 models. Weka software [Witten, 2005] was used for modeling and validation.
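The same validation scheme can be sketched with scikit-learn's RepeatedKFold as a stand-in for the Weka procedure; the data and the base regressor below are placeholders, not the paper's dataset or models:

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(285, 3))       # synthetic stand-in for the 285-instance dataset
y = rng.uniform(size=285)

# 10 repeats of 10-fold cross-validation: 100 train/test splits in total
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=1)
scores = cross_val_score(DecisionTreeRegressor(random_state=1), X, y,
                         cv=cv, scoring="neg_root_mean_squared_error")
rmse = -scores.mean()                # final estimate: average over the 100 models
```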

Table 3. Methods Notation

The parameters of the techniques were chosen as follows:

■ The number of base regressors in the ensembles was set to 100

■ In the SVR with linear kernel, the trade-off parameter was optimized in the range 2-8, while in the radial-basis case, the optimization ranges were 1-16 for C and 10⁻⁵ to 10⁻² for γ

■ The training parameters of the neural networks (momentum, learning rate and number of neurons) were optimized in the ranges 0.1-0.4, 0.1-0.6 and 5-15, respectively

■ In the k-nearest neighbor regressor, the optimal number of neighbors was chosen from 1 to 11
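The hyperparameter searches listed above can be reproduced with a grid search. The sketch below shows two of them (kNN and radial-basis SVR) in scikit-learn on synthetic stand-in data; the discrete grid values are assumptions based on the quoted ranges, not the paper's exact settings:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(size=(285, 3))
y = X.sum(axis=1) + rng.normal(0, 0.1, 285)

# kNN: number of neighbors searched from 1 to 11
knn = GridSearchCV(KNeighborsRegressor(),
                   {"n_neighbors": list(range(1, 12))},
                   scoring="neg_root_mean_squared_error", cv=10).fit(X, y)

# Radial-basis SVR: C in 1-16, gamma in 10^-5 to 10^-2 (assumed grid points)
svr = GridSearchCV(SVR(kernel="rbf"),
                   {"C": [1, 2, 4, 8, 16],
                    "gamma": [1e-5, 1e-4, 1e-3, 1e-2]},
                   scoring="neg_root_mean_squared_error", cv=10).fit(X, y)
```

Each fitted search object exposes the winning configuration via `best_params_` and its cross-validated score via `best_score_`.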

Table 4 shows the detailed RMSE values for the three outputs under study; an asterisk indicates the regressors that were statistically worse than the best one for each output (the reference for the test on each output is the regressor with the lowest RMSE, shown in bold). Table 2 also collects the RMSE of the two approaches considered as baselines, for a final discussion of the results. Two main considerations can be extracted from the results in Table 4:

■ Two methods present the lowest RMSE for the three outputs: Rotation Forest with unpruned REPTrees as base regressors for the lower length, and Additive Regression with unpruned M5P as base regressors for the upper and total lengths. However, the second method is statistically worse than the first at modeling the lower length.

■ Only two methods do not lose statistically on any of the three outputs: Rotation Forest with unpruned REPTrees as base regressors (the winner for the lower length) and Iterated Bagging with unpruned M5P as base regressors.
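Scikit-learn has no built-in Rotation Forest, so the following is a simplified sketch of the idea behind the winning method: each unpruned regression tree is trained on a PCA rotation of the feature space, with the rotation fitted on a bootstrap sample. This is a reduced version of the Rodriguez et al. algorithm (which rotates random feature subgroups separately), on synthetic stand-in data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor


class SimpleRotationForest:
    """Simplified rotation-forest-style regressor (sketch, not the full
    Rodriguez et al. algorithm): one PCA rotation per unpruned tree."""

    def __init__(self, n_estimators=100, random_state=0):
        self.n_estimators = n_estimators
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        self.models_ = []
        for _ in range(self.n_estimators):
            idx = rng.integers(0, len(X), len(X))   # bootstrap sample
            rotation = PCA().fit(X[idx])            # rotation fitted on the sample
            tree = DecisionTreeRegressor()          # unpruned regression tree
            tree.fit(rotation.transform(X), y)      # tree trained on rotated data
            self.models_.append((rotation, tree))
        return self

    def predict(self, X):
        # Ensemble prediction: average over all rotated trees
        return np.mean([tree.predict(rot.transform(X))
                        for rot, tree in self.models_], axis=0)


# Demo on synthetic stand-in data
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 3))
y = X @ np.array([0.3, 0.5, 0.2])
pred = SimpleRotationForest(n_estimators=25).fit(X, y).predict(X)
```

The rotations diversify the trees' split directions, which is the mechanism credited for Rotation Forest's accuracy gains over plain Bagging.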

Finally, if we compare the results of these prediction models in Table 4 with the two baseline approaches, further conclusions may be extracted. First, all the data-mining techniques under consideration performed better than the naive approach (e.g., for the lower length, the RMSE of the tested techniques was in the range 0.09-0.12 mm, against 0.14 mm for the naive approach). Second, not all of the data-mining techniques under consideration performed better than the linear fit (e.g., the RMSE of the tested techniques for the upper length was within the range 0.12-0.18 mm, while the linear fit achieved 0.16 mm). Two reasons can explain this result: first, the reduction in training-set size due to cross-validation does not apply to the linear fit, which is significant given the small size of the dataset; second, training and validation are performed on the same data in the linear-fit case, while the cross-validation applied to the data-mining models uses new data for validation that were not presented in the training step. The most important conclusion, however, is that even though the data-mining models lose some accuracy due to cross-validation (because only part of the experimental dataset is used for training, keeping part of the instances for validation), some of them still clearly improve on the precision of the two baseline techniques (which use the whole experimental dataset in the training stage). For example, Rotation Forest with unpruned REPTrees as base regressors reduced the RMSE of the linear fit by 33% for the lower length, and Additive Regression with unpruned M5P as base regressors reduced the RMSE of the linear fit by 25% and 39% for the upper and total lengths, respectively.
Taking all these considerations into account, Rotation Forest with unpruned REPTree as its base regressors appeared to be the most suitable regressor with which to model this industrial problem.
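The quoted percentage improvements follow directly from the RMSE values in Tables 2 and 4; a one-line check:

```python
def rmse_reduction(baseline_rmse, model_rmse):
    """Percentage reduction of RMSE relative to a baseline."""
    return round(100 * (baseline_rmse - model_rmse) / baseline_rmse)

# Linear-fit baseline vs. best models (values from Tables 2 and 4)
lower = rmse_reduction(0.12, 0.08)   # RF REPTree (U), lower length
upper = rmse_reduction(0.16, 0.12)   # AR M5P (U), upper length
total = rmse_reduction(0.23, 0.14)   # AR M5P (U), total length
```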

Table 4. RMSE values for all the tested data-mining techniques


To summarize, the most accurate data-mining technique is Rotation Forest with unpruned REPTrees as base regressors. It predicts the lower, upper and total wear lengths with RMSEs of 0.08, 0.13 and 0.16 mm, respectively. Under similar training conditions, the naive approach presents RMSEs of 0.14, 0.20 and 0.30 mm for the same outputs, and the linear approach RMSEs of 0.12, 0.16 and 0.23 mm, clearly higher than those of the Rotation Forest in all cases.

5. CONCLUSIONS

This paper has presented a study that analyzed the process of wear during tap forming for the threading of cold-forged steel parts from an experimental and a data-mining perspective. Wear observed on the forming lobes was of the abrasive type. This type of wear implies the loss of lobe diameter over successive lobes. The "Type 1" forming tap had the least wear and showed the best performance. Forming-tap geometry appeared to be a very important factor: the "Type 1" hexagonal-section tap produced superior results to the "Type 2" and "Type 3" pentagonal-section taps. A higher number of lobes in the chamfer zone (5 pitches) and around the nominal diameter (6 lobes) resulted in a more uniform load distribution and a more gradual forming process.

The second objective of this study was to identify the most accurate data-mining-based model to solve this real-life industrial problem. The dataset consisted of 285 instances with 3 inputs and 3 outputs. Several methods were investigated, including all the main state-of-the-art regression techniques: regression trees, ensembles, SVRs, ANNs and k-nearest neighbor regression. A 10 x 10-fold cross-validation was performed to generalize the prediction results of these models.

Two baseline approaches were considered to analyze the performance of the data-mining models: the first took the mean value of each output as the most probable value under any condition; the second fitted a linear model of the 3 inputs to each output. The most accurate models were Rotation Forest with unpruned REPTrees as its base regressors, which reduced the RMSE of the linear fit by 33% for the lower length, and Additive Regression with unpruned M5P as base regressors, which reduced the RMSE of the linear fit for the upper and total lengths by 25% and 39%, respectively. However, Additive Regression was statistically worse than Rotation Forest at modeling the lower length; Rotation Forest with unpruned REPTrees as its base regressors therefore appeared to be the most suitable regressor for the modeling of this industrial problem.

Future work will consider other ensemble methods, using ensembles of other methods instead of regression trees and will study the use of non-homogeneous ensemble models, ensembles built by combining different methods (e.g., SVM and RBF), that might improve the final accuracy of the model.

ACKNOWLEDGEMENTS

This investigation was partially supported by Projects TIN2011-24046, IPT-2011-1265-020000 and DPI2009-06124-E/DPI of the Spanish Ministry of Economy and Competitiveness. We thank the UFI in Mechanical Engineering of the UPV/EHU for its support. In addition, we gratefully acknowledge the advice of I. Azkona and J. Fernández from Metal Estalki and of Dr. J. Maudes from the University of Burgos. Finally, our thanks also to J.M. Pérez for his invaluable advice.

REFERENCES

[Agapiou et al. 1994] J.S. Agapiou, 1994. Evaluation of the effect of high speed machining on tapping, Journal of Manufacturing Science & Engineering Technology, ASME, Vol. 116, pp 457-462.

[Aha 1991] D. W. Aha, D. Kibler, and M. K. Albert, "Instance-based learning algorithms," Machine learning, vol. 6, no. 1, pp. 37-66, 1991.

[Akaike 1974] H. Akaike, "A new look at the statistical model identification," Automatic Control, IEEE Transactions on, vol. 19, no. 6, pp. 716-723, 1974.

[Azadeh 2008] A. Azadeh, S. Ghaderi, and S. Sohrabkhani, "Annual electricity consumption forecasting by neural network in high energy consuming industrial sectors," Energy Conversion and Management, vol. 49, no. 8, pp. 2272-2278, 2008.

[Beale 1990] R. Beale and T. Jackson, Neural computing: an introduction. Bristol, UK, UK: IOP Publishing Ltd., 1990

[Binsaeid, 2009] Binsaeid, S., Asfour, S., Cho, S. and Onar, A. 2009, "Machine ensemble approach for simultaneous detection of transient and gradual abnormalities in end milling using multisensor fusion", Journal of Materials Processing Technology, vol. 209(10), pp. 4728-4738.

[Boser 1992] B. Boser, I. Guyon, and V. Vapnik, "A training algorithm for optimal margin classifiers," in Proceedings of the fifth annual workshop on Computational learning theory. ACM, 1992, pp. 144-152.

[Breiman, 1996] Breiman, L. 1996, "Bagging predictors", Machine Learning, vol. 24(2), pp. 123-140.

[Breiman 2001] L. Breiman, "Using iterated bagging to debias regressions," Machine Learning, vol. 45, no. 3, pp. 261-277, 2001.

[Brown, 2006] Brown, G., Wyatt, J. and Tino, P. 2006, "Managing Diversity in Regression Ensembles", Journal of Machine Learning Research, vol. 6, pp. 1621-1650.

[Bustillo 2011 a] A. Bustillo, J.F. Díez-Pastor, G. Quintana and C. García-Osorio, "Avoiding neural network fine tuning by using ensemble learning: application to ball-end milling operations", The International Journal of Advanced Manufacturing Technology, 57(5), 2011, 521-532.

[Bustillo 2011 b] A. Bustillo, E. Ukar, J. J. Rodriguez, A. Lamikiz "Modelling of process parameters in laser polishing of steel components using ensembles of regression trees", International Journal of Computer Integrated Manufacturing, 24(8), 2011, 735-747.

[Bustillo 2014] A. Bustillo, J. J. Rodriguez, "Online breakage detection of multitooth tools using classifier ensembles for imbalanced data", International Journal of Systems Science, 2014, 45(12), 2590-2602.

[Chandra et al. 1975] R. Chandra, S.C.Das, 1975. Roll taps and their influence on production, Journal of India Engineering, Vol. 55, pp 244-249

[Cho, 2010] Cho, S., Binsaeid, S. and Asfour, S. 2010, "Design of multisensor fusion-based tool condition monitoring system in end milling", International Journal of Advanced Manufacturing Technology, vol. 46, pp. 681-694.

[Chowdhary et al. 2003] S. Chowdhary, S.G. Kapoor, R.E.DeVor, 2003. "Modelling forces including elastic recovery for internal thread forming", Journal of Manufacturing Science & Engineering, ASME, Vol. 125, pp 681-688.

[Cortes 1995] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.

[Dayhoff 2001] J. E. Dayhoff and J. M. De Leo, "Artificial neural networks," Cancer, vol. 91, no. S8, pp. 1615-1635, 2001.

[de Carvalho et al. 2012] A. Olinda de Carvalho, L. C. Brandao, T. H. Panzera, C.H. Lauro, 2012. Analysis of form threads using fluteless taps in cast magnesium alloy (AM60), Journal of Materials Processing Technology, Vol. 212, Issue 8, pp. 1753-1760.

[Delashmit 2005] W. H. Delashmit and M. T. Manry, "Recent developments in multilayer perceptron neural networks," in Proceedings of the seventh Annual Memphis Area Engineering and Science Conference, MAESC, 2005.

[Dietterichl 2002] T. G. Dietterichl, "Ensemble learning," The handbook of brain theory and neural networks, pp. 405-408, 2002.

[Domblesky et al. 1999] J.P. Domblesky, 1999. Computer simulation of thread rolling processes, Fastener Technology International, n. 8, pp. 38-40.

[Domblesky et al. 2002] J.P. Domblesky, F. Feng, 2002. A parametric study of process parameters in external thread rolling, Journal of Materials Processing Technology 121, pp. 341-349.

[Drucker 1997] H. Drucker, "Improving regressors using boosting techniques," in ICML, vol. 97, 1997, pp. 107-115.

[Du, 2010] Du, S., Lv, J. and Xi, L. 2010, "An integrated system for on-line intelligent monitoring and identifying process variability and its application", International Journal of Computer Integrated Manufacturing, vol. 23, pp. 529-542.

[Fernandez et al. 2015] J. Fernández Landeta, A. Fernández Valdivielso, L.N. López de Lacalle, F. Girot, J.M. Pérez Pérez, 2015. Wear of form taps in threading of steel cold forged parts, Journal of Manufacturing Science and Engineering, Transactions of the ASME, vol. 137.

[Freund 1995] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Computational learning theory. Springer, 1995, pp. 23-37.

[Freund 1996] Y. Freund, R. E. Schapire et al., "Experiments with a new boosting algorithm," in ICML, vol. 96, 1996, pp. 148-156.

[Friedman 2002] J. H. Friedman, "Stochastic gradient boosting," Computational Statistics & Data Analysis, vol. 38, no. 4, pp. 367-378, 2002.

[Fromentin et al. 2005] G. Fromentin, G. Poulachon, A. Moisan, 2005. Precision and surface integrity of threads obtained by form tapping. CIRP Annals - Manufacturing Technology, Vol. 54, n. 1, pp 519-522.

[Henderer et al. 1974] W.E Henderer, B.F. von Turkovich, 1974. Theory of the cold forming taps, Annals of the CIRP, Vol. 23, pp. 51-52.

[Ho 1998] T. K. Ho, "The random subspace method for constructing decision forests," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 8, pp. 832-844, 1998.

[Hornik 1989] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural networks, vol. 2, no. 5, pp. 359-366, 1989.

[Ivanov et al. 1997] V. Ivanov, V. Kirov, 1997. Rolling of internal threads: Part 1, Journal of Materials Processing technology, Vol. 72, pp. 214-220.

[Khoshgoftaar 2002] T. M. Khoshgoftaar, E. B. Allen, and J. Deng, "Using regression trees to classify fault-prone software modules," Reliability, IEEE Transactions on, vol. 51, no. 4, pp. 455-462, 2002.

[Kuncheva 2001] L. Kuncheva, "Combining classifiers: Soft computing solutions," Pattern Recognition: From Classical to Modern Approaches, pp. 427-451, 2001.

[Liao, 2008] Liao, T., Tang, F., Qu, J. and Blau, P. 2008, "Grinding wheel condition monitoring with boosted minimum distance classifiers", Mechanical Systems and Signal Processing, vol. 22, pp. 217-232.

[Lin, 2008] Lin, H.-T. and Li, L. 2008, "Support Vector Machinery for Infinite Ensemble Learning", Journal of Machine Learning Research, vol. 9, pp. 285-312.

[Mazahery, 2016] Ali Mazahery, Mohsen Ostad Shabani, The accuracy of various training algorithms in tribological behavior modeling of A356-B4C composites, Russian Metallurgy (Metally) 2011 (7), 699-707

[Mazahery, 2012] Ali Mazahery, Mohsen Ostad Shabani, Assistance of novel artificial intelligence in optimization of aluminum matrix nanocomposite by genetic algorithm, Metallurgical and Materials Transactions A 43 (13), 5279-5285, 2012.

[Palanisamy 2008] P. Palanisamy, I. Rajendran, and S. Shanmugasundaram, "Prediction of tool wear using regression and ANN models in end-milling operation," The International Journal of Advanced Manufacturing Technology, vol. 37, no. 1-2, pp. 29-41, 2008.

[Quintana 2012] G. Quintana, A. Bustillo, J. Ciurana, "Prediction, monitoring and control of surface roughness in high-torque milling machine operations", International Journal of Computer Integrated Manufacturing, 25(12), 2012, 1129-1138.

[Quinlan 1992] J. R. Quinlan, "Learning with continuous classes," in Proceedings of the 5th Australian joint Conference on Artificial Intelligence, vol. 92. Singapore, 1992, pp. 343-348.

[Rodriguez 2006] J. Rodriguez, L. Kuncheva, and C. Alonso, "Rotation forest: A new classifier ensemble method", Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 10, pp. 1619-1630, 2006.

[Santos 2015] P. Santos, J. Maudes, A. Bustillo, "Identifying maximum imbalance in datasets for fault diagnosis of Gearboxes", Journal of Intelligent Manufacturing, 2015, DOI 10.1007/s10845-015-1110-0

[Shabani 2015] Mohsen Ostad Shabani, Mohammad Reza Rahimipour, Ali Asghar Tofigh, Parviz Davami, Refined microstructure of compo cast nanocomposites: the performance of combined neuro-computing, fuzzy logic and particle swarm techniques, Neural Computing and Applications, 2015

[Shichang 2012] Shichang Du , Jun Lv, Lifeng Xi, 2012, A robust approach for root causes identification in machining processes using hybrid learning algorithm and engineering knowledge, Journal of Intelligent Manufacturing, vol. 23, 5, pp 1833-1847

[Smola 2004] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and computing, vol. 14, no. 3, pp. 199-222, 2004.

[Stephan et al. 2009] F. Stephan, J. Guillot, P. Stephan, A. Daidie, August 2009. 3D Finite Element Modelling of an Assembly Process With Thread Forming Screw, Journal of Manufacturing Science and Engineering, Vol. 131, 041015, pp. 1-8.

[Stephan et al. 2012] P. Stephan, F. Mathurin, J. Guillot, 2012. Experimental study of forming and tightening processes with thread forming screws. Journal of Materials Processing Technology, Vol. 212, n. 4, pp. 766-775.

[Tosun 2002] N. Tosun and L. Ozler, "A study of tool life in hot machining using artificial neural networks and regression analysis method", Journal of Materials Processing Technology, vol. 124, no. 1, pp. 99-104, 2002.

[Vijayakumar 2006] S. Vijayakumar and S. Schaal, "Approximate nearest neighbor regression in very high dimensions," Nearest-Neighbor Methods in Learning and Vision: Theory and Practice, MIT Press, Cambridge, MA, USA, pp. 103-142, 2006.

[Witten, 2005] Witten, I.H. and Frank, E. 2005, "Data Mining: Practical Machine Learning Tools and Techniques", Morgan Kaufmann, 2nd ed. http://www.cs.waikato.ac.nz/ml/weka

[Wu 2008] X. Wu, V. Kumar, J. R. Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan, A. Ng, B. Liu, S. Y. Philip et al., "Top 10 algorithms in data mining," Knowledge and Information Systems, vol. 14, no. 1, pp. 1-37, 2008.

[Yu, 2009] Yu, J., Xi, L. and Zhou, X. 2009, "Identifying source(s) of out-of-control signals in multivariate manufacturing processes using selective neural network ensemble", Engineering Applications of Artificial Intelligence, vol. 22, pp. 141-152.

[Zunkler et al. 1985] B. Zunkler, 1985. Thread rolling torques with through rolling die heads and their relationship to the thread geometry and the properties of the workpiece material, Wire, Vol. 35, pp. 114-117.

Figures and tables


Figure 2. Tap profile and wear on a new tap and after 3000 and 5000 threads.

Figure 3. Forming lobe wear: Left) tap rotation; Right) Wear on the relief (lower) and rake (upper) flanks.

Figure 4. Evolution of "Type 1" forming tap wear.


Figure 5. Evolution of "Type 2" forming tap wear.


Figure 6. Evolution of "Type 3" forming tap wear.


Figure 7. Scatter plots for each predictive variable and the upper length.

Table 1. Variables, units and ranges used to generate the dataset.

Variable [Units] Input / Output Range [number of zeros]

Type of forming taps Input 1, 2, 3 (Type 1, 2 and 3)

Lobe of the tap Input 1 - 25

Number of threads Input 1,000 - 5,000

Upper length [mm] (on rake side) Output 0 - 1.17 [131]

Lower length [mm] (on relief side) Output 0 - 0.76 [123]

Total length [mm] Output 0 - 1.78 [109]


Table 2. Dataset variation of the outputs and RMSE for the 2 proposed baselines techniques.

Lower length [mm] Upper length [mm] Total length [mm]

Mean value 0.12 0.15 0.27

Maximum 0.77 1.17 1.78

Minimum 0 0 0

RMSE Naïve approach 0.14 0.20 0.30

RMSE Linear approach 0.12 0.16 0.23

Table 3. Methods Notation

Bagging BG

Iterated Bagging IB

Random Subspaces RS

AdaboostR2 R2

Additive Regression AR

Rotation Forest RF

REPTree RP

M5P Model Tree M5P

Support Vector Regressor SVR

Multi-Layer Perceptron MLP

k-Nearest Neighbor Regressor kNN

Table 4. RMSE values for all the tested data-mining techniques

Method Lower length RMSE Upper length RMSE Total length RMSE

Naive approach 0.14 0.20 0.30

Linear approach 0.12 0.16 0.23

SVR Linear 0.09 * 0.17 * 0.22 *

SVR Radial 0.09 0.13 * 0.16

R2 L M5P (P) 0.10 * 0.13 * 0.15

R2 L M5P (U) 0.10 * 0.13 * 0.15

R2S M5P (U) 0.09 * 0.13 * 0.15

R2 S M5P (P) 0.09 * 0.13 * 0.15

R2 E M5P (P) 0.10 * 0.13 * 0.15

R2E M5P (U) 0.10 * 0.13 * 0.15

R2 E REPTree (P) 0.12 * 0.17 * 0.23 *

R2 E REPTree (U) 0.10 * 0.17 * 0.20 *

R2 S REPTree (U) 0.10 * 0.14 * 0.18 *

R2 S REPTree (P) 0.11 * 0.17 * 0.21 *

R2 L REPTree (P) … … 0.21 *

R2 L REPTree (U) 0.10 * 0.16 * 0.20 *

MLP 0.10 * 0.13 * 0.16

kNN 0.09 * 0.14 * 0.18 *

M5P (P) 0.09 * 0.13 0.16 *

M5P (U) 0.09 * 0.13 0.16 *

REPTree (P) 0.10 * 0.18 * 0.21 *

REPTree (U) 0.10 * 0.17 * 0.21 *

BG REPTree (P) 0.09 * 0.17 * 0.19 *

BG REPTree (U) 0.09 * 0.15 * 0.17 *

BG M5P (P) 0.09 0.13 * 0.16 *

BG M5P (U) 0.09 0.13 * 0.16 *

RF REPTree (P) 0.09 * 0.15 * 0.19 *

RF REPTree (U) 0.08 0.13 0.16

RF M5P (P) 0.09 0.13 * 0.16 *

RF M5P (U) 0.09 0.13 * 0.16

IB REPTree (P) 0.10 * 0.16 * 0.18 *

IB REPTree (U) 0.10 * 0.15 * 0.18 *

IB M5P (P) 0.09 0.13 * 0.16 *

IB M5P (U) 0.09 0.13 0.16

AR M5P (P) 0.09 * 0.13 0.15

AR M5P (U) 0.09 * 0.12 0.14

AR REPTree (P) 0.10 * 0.17 * 0.20 *

AR REPTree (U) 0.12 * 0.16 * 0.20 *

RS 50% M5P (P) 0.10 * 0.16 * 0.20 *

RS 50% M5P (U) 0.10 * 0.15 * 0.20 *

RS 50% REPTree (P) 0.10 * 0.16 * 0.21 *

RS 50% REPTree (U) 0.10 * 0.16 * 0.21 *

RS 75% REPTree (U) 0.10 * 0.16 * 0.21 *

RS 75% REPTree (P) 0.10 * 0.16 * 0.21 *

RS 75% M5P (P) 0.10 * 0.16 * 0.20 *

RS 75% M5P (U) 0.10 * 0.15 * 0.20 *

Highlights

• Analysis of the shape and geometry of the best roll taps for cold-forged steel, identifying useful features.

• Study of the influence of metal forming in the area close to the thread made by roll tapping.

• Careful study of worn areas on forming edges.

• A new study of a little-known threading process.

• A data-mining approach for accurate modeling of the experimental results.