SETUP GENERATION USING NEURAL NETWORKS

The article presents an unsupervised learning algorithm that groups technological features into setups for a machining process. Setup generation is one of the most important tasks in automated process planning and in fixture configuration. A setup is created based on the approach directions of the features. The algorithm proposed in this work generates a neural network that determines the setup each feature belongs to, and the number of setups generated is minimal. This algorithm, unlike others, is not influenced by the order of the input sequence. Parallel implementation of the algorithm is straightforward and can significantly increase the computational performance.
UDC Classification: 001.8; DOI: http://dx.doi.org/10.12955/cbup.v5.1090


Introduction
The last 20 years have seen many works on automated process planning. Particular attention has been paid to fixture design and the setup and ordering of features (Joneja et al., 1999). Setup planning deals with grouping features into setups in proper sequences, sequencing the setups, and choosing machines, tools, and fixtures. Setups are used to group similar features that are processed at each step of the process. Setup generation done during the fixture design phase is aimed at determining the axial orientation and position of the grouped features as well as the cutting tools required to manufacture them (Rong, 2007). The setup generation phase takes into account the requirements of both process planning and fixture design (Stampfer, 2009). Setup generation in Sakurai (1992) uses the location tolerances of the workpieces. Other CAPP systems (Delbressine et al., 1993) employ tolerances of the workpieces as criteria for setup generation and sequencing.

Neural network generation
In recent years many researchers have employed the neural network approach to solve setup generation problems (Pao et al., 1993; Westhoven et al., 1992). The standard approach is to group features either by the relationships between their approach directions alone, or by the approach direction combined with another factor such as tool commonality. Such algorithms do not always give optimal results for real technological processes, and the number of generated setups strongly depends on the input feature sequence. The algorithm proposed in this work generates a neural network that determines the setup each feature belongs to, and the number of setups generated is minimal. The proposed unsupervised neural net can be viewed as a modification of the one in Chen (1993) and solves the problem of the input feature sequence. An approach direction is a straight path that gives a tool unobstructed access to the feature in the workpiece. Some features may have more than one approach direction. A neural network consists of clusters, and every cluster represents a set of features that have at least one approach direction in common (Chen, 1993). We say that two clusters have a common approach direction if all features of the two clusters combined have a common approach direction. The net under consideration has a leveled structure. Each level of the net consists of a number of clusters, and the clusters of this level may or may not share a common approach direction. One can merge clusters with common approach directions to form a new cluster on the next level of the net. The idea behind the algorithm is to create the next level by merging clusters and thus producing fewer clusters with fewer common approach directions. Eventually the net reaches a level where no two clusters share a common direction. This last level determines the number of setups and their layout.
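As a small illustration of this common-direction test (a Python sketch with hypothetical three-direction masks; it is not part of the original system), a group of features, and hence the union of two clusters, shares an approach direction exactly when the bitwise AND of all their bit masks is non-zero:

def common_directions(feature_masks):
    """Mask of the approach directions shared by every feature in the group."""
    shared = ~0                 # start with all directions allowed
    for m in feature_masks:
        shared &= m
    return shared

# Hypothetical 3-direction masks of two clusters taken together:
print(common_directions([0b110, 0b011, 0b010]) != 0)   # True: one direction is shared by all three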

Self-Organized net generation algorithm
Let k be the number of different approach directions. With each feature F_i we associate a vector B_i = (b_i1, b_i2, ..., b_ik), where b_ij = 1 if approach direction j is allowed for F_i and b_ij = 0 otherwise. In other words, B_i represents the bit mask of the allowed approach directions. It can also be treated as a binary number or as a k-dimensional vector of zeros and ones.
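For illustration, B_i can be stored as a plain integer; the sketch below (not from the paper) assumes six named approach directions and a hypothetical feature:

DIRECTIONS = ["+x", "-x", "+y", "-y", "+z", "-z"]      # k = 6, illustrative only

def bitmask(allowed_directions):
    """Build B_i: the j-th bit is 1 exactly when approach direction j is allowed."""
    b = 0
    for j, d in enumerate(DIRECTIONS):
        if d in allowed_directions:
            b |= 1 << j
    return b

B = bitmask({"-x", "+z"})       # hypothetical feature reachable from -x and +z
print(format(B, "06b"))         # '010010', a k-dimensional vector of zeros and ones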
The neural network consists of k levels numbered 0, 1, ..., k-1. Clusters are denoted A_i^j, the j-th cluster on the i-th level of the net. Each cluster in the net possesses internal memory in the form of a single integer value, which we denote a_i^j. If this cluster represents the features F_s1, F_s2, ..., F_sm, then a_i^j = B_s1 & B_s2 & ... & B_sm, where & is the binary AND operator. In this way the 1s in the binary representation of a_i^j indicate the approach directions that are common to all the features in this cluster, and the 0s represent approach directions that are not allowed for at least one feature in the cluster. Clearly, two clusters A_i^p and A_i^q on the same level have a common approach direction exactly when a_i^p & a_i^q is non-zero (Chen, 1993). The weight functions are determined as each cluster sends its internal memory across the newly formed connection: the weight of the link from a cluster A_p^i to a cluster A_{p+1}^j assumes a non-zero value equal to the value sent from A_p^i when A_p^i is merged with some other clusters from level p to produce A_{p+1}^j, and is equal to zero otherwise. After the construction of the net is complete, the internal memory of the clusters on the last level gives the number of setups and the features that belong to each one of them.
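A minimal sketch in Python of the internal memory, building on the integer bit masks above (the two member masks are hypothetical and serve only as an illustration):

from functools import reduce

def cluster_memory(member_masks):
    """a_i^j: bitwise AND of the member features' masks; its 1-bits are the
    approach directions common to every feature in the cluster."""
    return reduce(lambda x, y: x & y, member_masks)

a = cluster_memory([0b010010, 0b011010])   # hypothetical two-feature cluster
print(format(a, "06b"))                    # '010010': the shared directions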
At any time the value a_i^j, in binary form, gives the allowed approach directions for this cluster. A new cluster is created at level i+1 when a cluster A_i^j is determined to have approach directions incompatible with every existing cluster at level i+1; the internal memory of this newly created cluster is initially set to a_i^j. Clusters are grouped based on the similarity of the approach directions they represent. This "similarity" is expressed by a function g(·, ·) that takes two k-digit binary numbers B_i and B_j as arguments and is defined as g(B_i, B_j) = (b_i1 ⊕ b_j1) + (b_i2 ⊕ b_j2) + ... + (b_ik ⊕ b_jk) if B_i & B_j ≠ 0, and g(B_i, B_j) = ∞ otherwise, where ⊕ is the XOR (exclusive or) operation shown in Figure 1. When the two features (or clusters) F_i and F_j have at least one common direction, g(B_i, B_j) returns the number of directions that are acceptable for F_i and not acceptable for F_j plus those acceptable for F_j and unacceptable for F_i. In this case the distance between F_i and F_j is simply the square of the usual Euclidean distance between the vectors B_i and B_j, that is, g(B_i, B_j) = ||B_i - B_j||^2. If, however, F_i and F_j do not share a common direction, then g(B_i, B_j) = ∞ and, as will be shown below, this does not allow the two clusters to be grouped (in practical implementations one simply sets g(B_i, B_j) to a sufficiently big integer). The distance between two features (or clusters) therefore depends not only on the shared approach directions, but also on the ones that are incompatible: the higher the value of g, the more incompatible approach directions the two clusters have.
The structure of the net is shown in Figure 2. The description of the algorithm is inductive with respect to the number of levels. The order in which new clusters are created on any given level is not important. We suppose that level p has been created and show how to create the next level p+1. For the sake of simplicity we assume that the clusters from level p are read in the order of their creation; by reading an input cluster we mean that its internal memory is sent to the next level.
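The similarity function can be sketched directly from this description; a large constant stands in for infinity, as suggested above for practical implementations (the sketch is illustrative, not the authors' code):

INF = 10**9    # stands in for "infinity" when no approach direction is shared

def g(b_i, b_j):
    """Number of directions allowed for exactly one of the two masks, or INF
    when the masks have no common approach direction."""
    if b_i & b_j == 0:
        return INF
    return bin(b_i ^ b_j).count("1")   # popcount of XOR = squared Euclidean distance

print(g(0b110, 0b011))   # 2: one shared direction, two incompatible ones
print(g(0b100, 0b011))   # 1000000000: no common direction, grouping forbidden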
1. The generation of the net begins with the creation of level 0: every input feature F_i gives rise to a cluster A_0^i whose internal memory a_0^i is set to B_i.
2. The clusters of level p are read one at a time and grouped into the clusters of level p+1 according to steps 2 through 5 of the algorithm (a code sketch of this loop is given after the list); after a cluster has been processed, go to step 6.
Step 6. If there is no further input from level p, then level p+1 is constructed. STOP. Otherwise go to step 2.
3. The generation of the net is completed when the last level, k-1, is constructed (remember that k is the number of different approach directions and the net has k levels). The clusters of this last level determine the number of setups, l, as well as the approach directions for each setup, given by the bit mask of the internal memory. It may happen that on some level s no two clusters share a common approach direction; then any subsequent level s+1, ..., k-1 will have the same clusters as level s, and it is not necessary to build any further levels. However, each time we generate a new level we would have to make these comparisons, in addition to all other operations involved, in order to check for this condition, while building all levels up to number k-1 eliminates these additional computations.
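Below is one plausible reading of steps 2 through 5 and of the overall construction, sketched in Python on top of g and INF from the previous sketch. The text above abbreviates the individual steps, so the merge rule used here (attach each level-p cluster to the nearest compatible level-(p+1) cluster, or open a new one when none is compatible) is an assumption based on the surrounding prose rather than the authors' exact procedure.

def build_next_level(level_p):
    """level_p: list of (memory, feature_list) pairs; returns level p+1."""
    next_level = []
    for mem, feats in level_p:                     # "reading" a level-p cluster
        best, best_dist = None, INF
        for idx, (mem2, _) in enumerate(next_level):
            d = g(mem, mem2)
            if d < best_dist:                      # nearest compatible cluster so far
                best, best_dist = idx, d
        if best is None:                           # incompatible with every cluster
            next_level.append((mem, list(feats)))  # create a new cluster
        else:                                      # merge: AND the memories
            mem2, feats2 = next_level[best]
            next_level[best] = (mem2 & mem, feats2 + list(feats))
    return next_level

def build_net(feature_masks, k):
    """Level 0 holds one cluster per feature; k-1 further levels are then built,
    which avoids the extra comparisons needed to detect early stabilization."""
    levels = [[(m, [i]) for i, m in enumerate(feature_masks)]]
    for _ in range(k - 1):
        levels.append(build_next_level(levels[-1]))
    return levels       # the last level lists the setups and their features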
To determine the setup to which a feature F_i belongs, one proceeds in the following way: starting from cluster A_0^i, one follows the single link with a non-zero weight and reaches level 1. Then one follows the only non-zero weighted link that leads to level 2. The procedure is repeated until the last level, k-1, is reached. The cluster that we reach on that last level is the required setup.
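With the data layout of the sketches above this traversal collapses to a lookup on the last level, which mirrors following the single non-zero weighted link level by level. The short demo uses hypothetical six-direction masks chosen only for illustration; they are not the data of the benchmark parts discussed next.

def setup_of(levels, i):
    """Return (setup number, allowed-direction mask) for the feature with 0-based index i."""
    for setup_no, (mem, feats) in enumerate(levels[-1], start=1):
        if i in feats:
            return setup_no, mem
    raise ValueError("unknown feature index")

masks = [0b010010, 0b011010, 0b010110, 0b001001]   # hypothetical masks for four features
levels = build_net(masks, k=6)
print(setup_of(levels, 0))   # (1, 18): setup 1, shared directions '010010'
print(setup_of(levels, 3))   # (2, 9):  setup 2, shared directions '001001'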

Testing of the neural network
In order to test the neural network, two benchmark parts were designed. The first part to be tested is shown in Figure 3. It has four different features, and the approach directions are shown in Table 1.
The structure of the net with the appropriate weight functions is shown in Figure 4. It is constructed in the following way:
1. Level 0 of the net consists of four clusters, one for each feature.
2. Level 1 is built according to steps 2 through 5 of the algorithm. Here is an outline of the execution path (we have p = 0):
e. The input from the previous level (level 0) is not empty, so we repeat steps 2 through 5 again.
f. Level 1 is completed when A_0^4 is created.
3. Finally, a network of five levels is generated. The last level consists of two clusters: A_5^1 with internal memory a_5^1 = [010010], representing features F1, F2 and F3, and A_5^2 with a_5^2 = [001001], representing only F4. Although levels 3, 4 and 5 have the same number of clusters, we generate all levels up to 5 (the number of different approach directions is 6). A simple inspection of Table 2 reveals that the minimal number of setups for this part is 2, and this was the result of the algorithm.
The second part that was tested is shown in Figure 5 and has 12 features. The algorithm was tested for various orderings of the input features. In all cases the number of clusters at the final level is two: A_12^1 (representing features F1, F2, F3, F4 and F12) and A_12^2 (representing features F5, F6, F7, F8, F9, F10 and F11). Again, two is the minimal number of setups possible for the part.
It is clear that each cluster generated on level 0 and level 1 is initialized once and its internal memory is then never changed. Indeed, level 0 does not involve any grouping, and clusters on level 1 are grouped only if the distance between the clusters from level 0 is 0, that is, the bit masks of the clusters being grouped are the same. This means that the clusters on levels 1 and 2 can be formed as soon as the clusters from the previous level become available. Another possibility for parallelism arises when a given a_i^j (on any level i) assumes a bit mask that contains a single 1. In this case any cluster that is grouped into A_i^j must have at least one approach direction in common with it, and since A_i^j has only one allowed direction (the bit mask contains a single 1) its internal memory will not change. Under such circumstances the value of a_i^j can be used to begin the construction of the next level, i+1, before level i is entirely generated. Other situations where parallelism is possible can also be identified, but they require additional comparison operations between clusters and are not presented here.

Conclusions
Setup design is an important part of automated process planning and fixture configuration. The proposed algorithm for setup generation is based on an unsupervised neural network. It provides a minimal number of setups when the approach directions are used as a criterion. The number of setups that the network generates, unlike that of Chen (1993), is independent of the input ordering of the features. A parallel implementation of the algorithm is straightforward and can significantly increase the computational performance. This is important for parts with a large number of features.

Figure 1: Binary operation XOR (exclusive or)

Figure 3: First part.

Figure 4: The structure of the net.
Figure 5: Second part.

Table 1: Features and approach directions of a sample part.