Research Article

Minimum-Cost Ordering for Selective Assembly

Received 20 Feb 2023, Accepted 18 Apr 2024, Accepted author version posted online: 26 Apr 2024

Abstract

We consider the minimization of input cost for a selective assembly system which features two random inputs and a finite number of matching classes. This setup frequently arises in high-precision manufacturing when input tolerances are not tight enough for the required output precision. We determine optimality conditions for the cost-optimal input portfolio given an expected-output target, first using a normal approximation of the multinomial binning distribution, and second employing a simple upper envelope of the output objective. We show that the relative error tends to zero as the production scale becomes sufficiently large. The envelope optimization problem also yields a closed-form solution for the cost-minimizing inputs as well as total costs, which is easy for managers to understand. A numerical study tests the practicality of the envelope solution, which can serve both as a seed for a numerical solution of the exact problem and as the basis of a closed-form approximate solution that allows for an analysis of structural properties.

Disclaimer

As a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also.

1 Introduction

By sorting low-precision inputs into predefined tolerance classes, so as to obtain a grade-specific matching for components of different types, it is possible to produce relatively high-precision output. This process of “selective assembly” has been known for more than 100 years—as part of interchangeable manufacturing. While much of the literature thus far has been concerned with optimizing the design of the matching bins, the available work on cost-effectively procuring the inputs for a given system of matching bins with arbitrary distributions of part characteristics is rather sparse. This paper provides such an analysis, characterizing the optimal order quantities for two different part types, together with a closed-form approximation of the optimal input quantities. The latter allows for structural insights and suggests a straightforward generalization to an arbitrary number of part types.

In the batch production of movements for mechanical watches, the oscillator—assembled from a hairspring and a balance wheel—must exhibit a well-defined resonance frequency within a small tolerance. 1 The accuracy of this output characteristic is key for the quality of the watch, which is why many high-end horology companies have adopted a system of selective assembly, sorting hairsprings and balance wheels into different grades, so as to guarantee a sufficiently high clocking precision after joining any two components with matching grades. Naturally, when using selective assembly the probability that, say, not all of 100 hairsprings can be matched with 100 balance wheels is generally positive, and this mismatch likelihood increases in the number of matching classes. That is, a greater output precision comes at the price of a bigger loss of input components. Finding the cost-optimal input quantities requires trading off the cost of the inputs against the benefits of the outputs, which may vary across the different assembly grades. Given two multinomial distributions for the input parts being in the various matching classes, we characterize the cost-minimizing input quantities so as to achieve a given expected output quantity. We also show that a good approximation of the output objective (with arbitrarily small relative error for sufficiently large production lots) leads to a constant ratio between inputs and outputs, taking the production process to be a weighted sum of perfectly complementary systems. The latter may then allow for the separation of input parts into “principal components” and “buffer components.” In mechanical watchmaking, for example, hairsprings turn out to be buffer components for the significantly more expensive balance wheels; cf. Sec. 4.

1.1 Literature

The idea of sorting inputs of a given type into different bins so as to be able to interchangeably match components across corresponding bins for different part types, also termed “selective assembly,” was discussed more than a century ago by Buckingham (1921), who notes: “[t]he chief purpose of manufacturing, by selective assembly (…), is the production of large quantities of duplicate parts as economically as possible, within such limits that they may be assembled without further machining” (p. 224). Our concern here is related to the requirement of “as economically as possible,” by specifically considering the tradeoff of input costs against output benefits. The latter may naturally depend on the specific “grade” of the output, that is, on the matching class from which inputs were taken to assemble a given unit of a finished product. Mansoor (1961) points out that assembly processes with more than 20 matching classes can be observed in practice, but that one of the main problems of selective assembly is evidently the possibility of mismatch. In contrast to our input-output view, which is driven by the input-cost minimization problem and an interest in the dependence of input cost on required output precision and the number of matching classes, Caputo and Di Salvo (2019) are interested in different manufacturing methods. For these methods they disaggregate production-specific cost parameters, including processing times, sorting costs, and so forth, which are essentially invariant once the production process has been fixed, leaving open the question about the optimal inputs.

There are numerous practical applications for selective assembly systems, such as hole-and-shaft combinations (Asha et al., 2008; Kannan and Pandian, 2021), sheet metal assembly (Rezaei Aderiani et al., 2019), flat-panel display manufacturing (Duenyas et al., 1997), camshafts in the automobile industry (Mease et al., 2004), as well as bimetal thermostats, Fortini’s clutch, and knuckle-joint assembly (Tan and Wu, 2012). Mease et al. (2004) note that cost imbalances may put into question an otherwise yield-optimized selective assembly system. In the case where input prices differ substantially, as for hairsprings versus the at least 10-times more expensive balance wheels, the less costly parts should be ordered in excess so that the expensive parts are paired more often.

One main concern of the literature thus far has been the design of these matching classes, in the form of bins in the space of part characteristics, which are often assumed to be normally distributed. For example, Pugh (1986) considers an equidistant partition of scalar part characteristics. Kwon et al. (1999) improve on the equidistant partition by determining matching classes so as to minimize a square deviation of the difference of the (assumed to be) commensurate parts from some target. While Matsuura (2011) allows for more general convex loss functions, the general problem with the loss-function approach is that it aggregates penalizations (by taking the expectation) irrespective of whether parts fit or not. In other words, under a set of matching classes which was designed using the penalization approach it is generally possible that parts do not match, even though they were taken from corresponding bins and therefore should match. We therefore follow here the robust binning approach by Weber (2021), which guarantees that parts across corresponding bins always match. This allows us to concentrate on the design of the minimum-cost input portfolio. We also mention in passing that in addition to the fixed-bin selective assembly discussed thus far there is the notion of a “direct” selective assembly, which requires keeping track of every single part so as to be able to match parts one-to-one by solving a matching problem on a bipartite graph (Coullard et al., 1998). Since tracking very similar individual parts would often prove impractical and very costly in production settings with significant output volumes, we restrict attention to a selective assembly where bins are given, so that for each part type it is possible to specify a discrete probability distribution of parts being associated with a given bin. As noted earlier, we also assume that parts of different types can always be matched when taken from corresponding bins.

In Economics, the perfect complementarity of inputs for production, with transformation to outputs in constant proportions, goes back to Leontief (1941), who used it as a simplifying assumption to analyze a large economy. In assembly operations, constant proportions in the conversion of inputs to outputs arise because the required amount of input for a unit of output is typically fixed. Without any significant loss of generality, the different part types are assumed to be matched one-to-one, i.e., at equal proportions. For example, if each finished ball bearing requires 8 balls and 1 shaft, then we can think of the balls as being provided in lots of 8, so as to maintain equal proportions of inputs and outputs. Given that, for any given matching class, the number of components of each type is ex-ante uncertain, the deterministic production function mapping inputs to expected outputs no longer exhibits perfect complementarity but rather a certain input substitutability, since additional quantities of type-1 parts (e.g., ‘balls’) provide insurance against being long with type-2 inputs (e.g., ‘shafts’) in a given matching class. Thus, increasing the quantity of one input type must always lead to a nonzero increase in the expected output, despite the perfect complementarity applied to any realization of the “sorting product” (i.e., the vector of measured parts in the different matching classes). Our paper extends the work by Weber (2022), which focuses on the special case of binary random part matching, while we here allow for a selective assembly system with an arbitrary number of matching classes and also examine the scaling behavior.

2 Model

2.1 Matching Classes and Sorting Products

The selective assembly problem considered here involves components of two types (“1” and “2”) and $M\ge 2$ matching classes. A component of type $i\in\mathcal I=\{1,2\}$ features a (random) characteristic $S_i$, with realizations $s_i$ in the (measurable) space $\mathcal S_i$, which has at least $M$ elements; see Sec. 3.7. The “matching classes” $C_{i,m}$, for $m\in\mathcal M=\{1,\dots,M\}$, are nonempty subsets which together form a partition of $\mathcal S_i$, in the sense that
$$\big(m,\hat m\in\mathcal M,\ m\neq\hat m\ \Rightarrow\ C_{i,m}\cap C_{i,\hat m}=\varnothing\big)\qquad\text{and}\qquad\bigcup_{m\in\mathcal M}C_{i,m}=\mathcal S_i,$$

for all $i\in\mathcal I$. Let
$$p_{i,m}=\mathbb P(S_i\in C_{i,m})\in(0,1)\tag{1}$$

be the given probability that a component of type $i$ has a characteristic lying in matching class $m$. To avoid trivialities, we exclude the case where $p_{i,m}\in\{0,1\}$. Indeed, for $p_{i,m}=0$ no successful grade-$m$ matching is possible and the class should be suppressed. Conversely, if $p_{i,m}=1$, then no other matching class can have successful matches, contradicting the assumption of at least two functional matching classes, the minimum number for a meaningful selective assembly.

Two parts with characteristics $s_1$ and $s_2$ are referred to as “matched” (across types) if the vector $s=(s_1,s_2)\in\mathcal S=\mathcal S_1\times\mathcal S_2$ is an element of the joint matching class $C_m=C_{1,m}\times C_{2,m}$, for some $m\in\mathcal M$. Since by assumption the set of matching classes $\{C_{i,1},\dots,C_{i,M}\}$ forms a partition of the space of characteristics $\mathcal S_i$, all incoming parts can be sorted into matching classes; thus, taking two parts from corresponding matching classes $C_{1,m}$ and $C_{2,m}$ always produces a “matched pair;” see Fig. 1. In practice, the characteristics of different types of parts (e.g., hairsprings and balance wheels for mechanical watches) can vary substantially and be measured in different units (cf. footnote 1). When matching parts of corresponding grades, the number of feasible assemblies for any given matching class (referred to as “grade”) corresponds to the minimum number of parts available from either type. To clarify this strict complementarity, Fig. 1 depicts a situation where the smallest matching class (for each component type) produces zero output: indeed, the two type-1 components in $C_{1,1}$ cannot be matched with any type-2 components, as $C_{2,1}$ is empty.

Let $x_i\ge 0$ be the chosen input quantity for type-$i$ components. Sorting them into matching classes yields the vector $\mathbf x_i=(x_{i,1},\dots,x_{i,M})$ (henceforth referred to as “sorting product”), where $x_{i,m}\ge 0$ denotes the number of components of type $i$ in matching class $m$, so that
$$x_i=\sum_{m\in\mathcal M}x_{i,m},\qquad i\in\mathcal I.$$

The multinomial probability that a size-$x_i$ ordering of type-$i$ components into $M$ matching classes produces a particular sorting product $\mathbf x_i$ is
$$\mathbb P(\mathbf X_i=\mathbf x_i\,|\,p_i,x_i)=\begin{cases}x_i!\,\prod_{m\in\mathcal M}\dfrac{p_{i,m}^{x_{i,m}}}{x_{i,m}!}, & \text{if } x_i=\sum_{m\in\mathcal M}x_{i,m},\\ 0, & \text{otherwise},\end{cases}$$

where $p_i=(p_{i,1},\dots,p_{i,M})\in(0,1)^M$ is a given vector of positive likelihoods $p_{i,m}$ such that
$$\sum_{m\in\mathcal M}p_{i,m}=1,\qquad i\in\mathcal I.\tag{2}$$

The preceding normalization constraint may be relaxed to include the possibility of wasteful off-spec components (cf. Remark 5).
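As a minimal numerical sketch (with hypothetical bin probabilities, not taken from the paper), the multinomial distribution of the sorting product can be evaluated directly; summing the probability mass over all sorting products consistent with a given total input recovers one:

```python
import math
from itertools import product

def sorting_product_pmf(x_counts, p):
    """Multinomial probability of the sorting product (x_{i,1}, ..., x_{i,M}),
    given bin probabilities p and total input x_i = sum(x_counts)."""
    prob = math.factorial(sum(x_counts))
    for x_m, p_m in zip(x_counts, p):
        prob *= p_m ** x_m / math.factorial(x_m)
    return prob

# Hypothetical bin probabilities for one part type with M = 3 classes.
p = (0.5, 0.3, 0.2)
x_i = 4  # total number of ordered type-i parts

# The pmf sums to one over all sorting products consistent with the total x_i.
total = sum(sorting_product_pmf(x, p)
            for x in product(range(x_i + 1), repeat=len(p)) if sum(x) == x_i)
print(round(total, 10))  # → 1.0
```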

2.2 Normal Approximation

We use the standard normal approximation of the multinomial distribution of the random vector $\mathbf X_i=(X_{i,1},\dots,X_{i,M})$ with the mean $\mu_i=(\mu_{i,1},\dots,\mu_{i,M})$, where $\mu_{i,m}=p_{i,m}x_i$, and the symmetric positive-definite covariance matrix $\Sigma_i\in\mathbb R^{M\times M}$. The latter features the diagonal elements $\sigma_{i,m}^2=\operatorname{var}(X_{i,m})=x_ip_{i,m}(1-p_{i,m})$ and the off-diagonal elements $\operatorname{cov}(X_{i,m},X_{i,\hat m})=-x_ip_{i,m}p_{i,\hat m}$ for $m,\hat m\in\mathcal M$ with $m\neq\hat m$. In what follows, we usually suppress the dependence on $p_i$ for simplicity, as it is not a decision variable. Under the proposed normal approximation,
$$\mathbb P(X_{i,m}\le\xi\,|\,x_i)\approx G_{i,m}(\xi\,|\,x_i)=\Phi\!\left(\frac{\xi-\mu_{i,m}}{\sigma_{i,m}}\right),\qquad\xi\in[0,x_i],$$

denotes the cumulative distribution function (cdf) for the random number $X_{i,m}$ of type-$i$ components in the $m$-th matching class, where the cdf of the standard normal distribution is
$$\Phi(\xi)=\int_{-\infty}^{\xi}\phi(\zeta)\,d\zeta=\frac12\left[1+\operatorname{erf}\!\left(\xi/\sqrt2\right)\right],\qquad\xi\in\mathbb R,$$

with probability density function (pdf)
$$\phi(\xi)=\frac{\exp(-\xi^2/2)}{\sqrt{2\pi}},\qquad\xi\in\mathbb R.$$

Clearly, the functions $G_{i,m}(\cdot\,|\,x_i)$, for $m\in\mathcal M$, constitute marginal distributions of the joint cdf that describes the random sorting product of type-$i$ components under the normal approximation, but it turns out that they are all that is required for our purposes.

Remark 1 (Approximation Error).

By the Berry-Esseen theorem (Berry, 1941; Esseen, 1942) the absolute deviation of the marginal distribution $G_{i,m}$ from the underlying binomial distribution can be bounded as follows:
$$\sup_{\xi\in[0,x_i]}\big|\mathbb P(X_{i,m}\le\xi\,|\,x_i)-G_{i,m}(\xi\,|\,x_i)\big|\le\left(\frac{p_{i,m}^2+(1-p_{i,m})^2}{\sigma_{i,m}}\right)\kappa,$$

for all $(i,m)\in\mathcal I\times\mathcal M$, with $\kappa=0.4748$ (Shevtsova, 2011), which lies above the theoretical lower bound for $\kappa$ of $(\sqrt{10}+3)/(6\sqrt{2\pi})\approx0.4097$ (Esseen, 1956). Hence, as long as
$$x_i>\left(\frac{\max\{p_{i,m},1-p_{i,m}\}}{\min\{p_{i,m},1-p_{i,m}\}}\right)\nu^2,\qquad\nu\ge 0,$$

the approximation error corresponds to the probability of ‘extreme events,’ further than $\nu$ standard deviations away from the mean.
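Since the marginal of a multinomial count $X_{i,m}$ is binomial with parameters $(x_i,p_{i,m})$, the bound of Remark 1 can be checked numerically; the following sketch (with illustrative values $x_i=100$ and $p_{i,m}=0.3$, not taken from the paper) estimates the sup-deviation at the jump points of the exact cdf:

```python
import math

def binom_cdf(k, n, p):
    """Exact cdf of the binomial marginal X_{i,m} ~ Bin(n, p)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def normal_cdf(xi, mu, sigma):
    """Normal-approximation cdf G_{i,m}(xi | x_i)."""
    return 0.5 * (1 + math.erf((xi - mu) / (sigma * math.sqrt(2))))

# Illustrative values (not from the paper): x_i = 100 parts, p_{i,m} = 0.3.
n, p = 100, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

# Estimate the sup-deviation at the jump points of the exact (step) cdf.
dev = max(abs(binom_cdf(k, n, p) - normal_cdf(k, mu, sigma)) for k in range(n + 1))

# Berry-Esseen bound with kappa = 0.4748 (Shevtsova, 2011).
bound = (p**2 + (1 - p)**2) / sigma * 0.4748
print(dev <= bound)  # → True
```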

Remark 2 (Continuous Input Values).

The normal approximation allows for nonnegative real-valued input quantities $x_i$, lifting the burden of the integer constraint for the purposes of optimization. A practically implementable input vector can then usually be found ex post by rounding to the nearest integer or the nearest feasible batch size.

2.3 Matching Output

Given its type-1 and type-2 inputs, the firm produces the (sorted) output vector $Y=(Y_1,\dots,Y_M)$. The $m$-th random matching output $Y_m$, generated by using components with joint characteristics in matching class $C_m$, is
$$Y_m=\min\{X_{1,m},X_{2,m}\},\qquad m\in\mathcal M.\tag{3}$$

It is evident that the intrinsic value of the firm’s output may not be the same for all matching classes. For example, the matching-class index $m$ can denote different quality grades, in which case high-quality output would yield a higher price than low-quality output, without any difference in the unit cost for each random input component. To capture the differentiation of benefits across matching classes, let
$$Q=\eta\cdot Y=\sum_{m\in\mathcal M}\eta_mY_m\tag{4}$$

denote the firm’s total (weighted) output, where $\eta=(\eta_1,\dots,\eta_M)$ is a vector which contains the net unit value $\eta_m>0$ for the $m$-th matching output; see Fig. 2. The net unit value can be viewed as an absolute or a relative measure of benefits. When absolute, it may incorporate the revenues from selling the assembled grade-$m$ product minus production costs, whereas when relative it serves to quantify a normalized net benefit. In the latter case, the normalizations $\eta_1+\dots+\eta_M\in\{1,M\}$ are commonly used; for instance, by setting $\eta_m\equiv1$ one obtains a simple physical output count, $Q=Y_1+\dots+Y_M$, relevant for many (if not most) manufacturing applications. The assumption that $\eta_m$ is strictly positive reflects the firm’s concern for all units and its option of “free disposal.” If a certain matching class (e.g., $m=1$) represents “junk” components, then its weight can be reduced to reflect any possible scrap value.
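The weighted matching output of Eqs. (3)-(4) is straightforward to simulate; the following sketch (with a hypothetical symmetric two-class system, not taken from the paper) estimates the expected weighted output by Monte Carlo and illustrates the matching loss relative to the naive count:

```python
import random

def sample_sorting_product(x_total, p):
    """Draw one multinomial sorting product by binning x_total parts one at a time."""
    counts = [0] * len(p)
    for _ in range(x_total):
        u, acc = random.random(), 0.0
        for m, p_m in enumerate(p):
            acc += p_m
            if u < acc:
                counts[m] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point round-off
    return counts

# Hypothetical system: M = 2 classes, inputs x = (100, 100), uniform weights.
p1, p2, eta = (0.5, 0.5), (0.5, 0.5), (1.0, 1.0)
random.seed(0)
n_runs = 20000
total = 0.0
for _ in range(n_runs):
    x1m = sample_sorting_product(100, p1)
    x2m = sample_sorting_product(100, p2)
    total += sum(e * min(a, b) for e, a, b in zip(eta, x1m, x2m))
est = total / n_runs
print(round(est, 2))  # noticeably below the naive count of 100 (matching loss)
```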

2.4 Expected Output

The need for grade-specific component matching renders the firm’s production stochastic, generically allowing for the possibility of even zero production output (with positive probability), irrespective of the chosen input quantities. While this may be true in principle, the likelihood of such poor output realizations becomes negligible in any realistic setting. For example, the likelihood of observing a type-$i$ quantity more than two standard deviations below the expected quantity in any given matching class is below 5%; for three standard deviations it is below 0.3%.

Since the firm is risk-neutral, its focus lies on its expected total (weighted) output,
$$F(x)=\mathbb E[Q\,|\,x]=\sum_{m\in\mathcal M}\eta_m\mathbb E[Y_m\,|\,x]=\sum_{m\in\mathcal M}\eta_mF_m(x),\qquad x\in\mathbb R_+^2,\tag{5}$$

which is a deterministic function of the corresponding input $x=(x_1,x_2)$, where the total output $F(\cdot)$ is obtained as the weighted sum of the expected grade-$m$ outputs $F_m(\cdot)$. For any given $m$, the expected grade-$m$ output depends on the statistical properties of the difference, $\Delta_m=X_{2,m}-X_{1,m}$, referred to as the (grade-$m$) “parts defect,” which has the mean
$$\delta_m=\mu_{2,m}-\mu_{1,m},$$

and the standard deviation
$$\sigma_m=\sqrt{\sigma_{1,m}^2+\sigma_{2,m}^2}.$$

The expected output in any given matching class can be expressed as a convex combination of the expected type-1 and type-2 quantities, minus a positive loss term that increases superlinearly in the aforementioned standard deviation of the parts defect.

Lemma 1 (Grade-m Output).

Given any $m\in\mathcal M$, the expected grade-$m$ output is given by
$$F_m(x)=\Phi_m\,\mu_{1,m}+(1-\Phi_m)\,\mu_{2,m}-\phi_m\,\sigma_m,\tag{6}$$

for all $x\in\mathbb R_+^2$, where $\Phi_m=\Phi(\delta_m/\sigma_m)$ denotes the probability that the grade-$m$ parts defect is nonnegative, and $\phi_m=\phi(\delta_m/\sigma_m)$ is the standard normal density evaluated at $\delta_m/\sigma_m$.

When the expected grade-$m$ parts defect $\delta_m$ is positive (resp., negative), the weight $\Phi_m\in[0,1]$ placed on the expected number $\mu_{1,m}$ of type-1 parts decreases (resp., increases) in $\sigma_m$, since with an increased coefficient of variation for parts defects the expected number of type-2 parts increases (resp., decreases) in relative importance. In other words, perhaps counterintuitively, the nominally expected matches (as measured by the convex combination of the expected type-specific yields $\mu_{1,m}$ and $\mu_{2,m}$) go up in the dispersion of the grade-$m$ parts defect. However, this effect is offset by an increased “matching loss” $\sigma_m\phi_m$: independent of the sign of $\delta_m$, the latter grows superlinearly in $\sigma_m$, for $\phi_m>0$ itself also grows in $\sigma_m$.
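Eq. (6) is easy to implement; as a sanity check (with hypothetical inputs, not taken from the paper), when $\delta_m=0$ the formula reduces to $\mu_{1,m}-\sigma_m/\sqrt{2\pi}$:

```python
import math

def norm_pdf(z): return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
def norm_cdf(z): return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def grade_output(x1, x2, p1m, p2m):
    """Expected grade-m output F_m(x) per Eq. (6)."""
    mu1, mu2 = p1m * x1, p2m * x2
    sigma = math.sqrt(x1 * p1m * (1 - p1m) + x2 * p2m * (1 - p2m))
    z = (mu2 - mu1) / sigma  # standardized expected parts defect delta_m / sigma_m
    return norm_cdf(z) * mu1 + (1 - norm_cdf(z)) * mu2 - norm_pdf(z) * sigma

# Hypothetical numbers: x = (100, 200), p_{1,m} = 0.4, p_{2,m} = 0.2, so that
# mu_{1,m} = mu_{2,m} = 40 and Eq. (6) reduces to mu - sigma / sqrt(2*pi).
val = grade_output(100, 200, 0.4, 0.2)
ref = 40 - math.sqrt(56) / math.sqrt(2 * math.pi)
print(round(val, 4), round(ref, 4))  # both ≈ 37.0146
```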

Lemma 2 (Monotonicity).

Given any $m\in\mathcal M$, the expected grade-$m$ output is increasing, so
$$\big(x\le\hat x\ \text{and}\ x\neq\hat x\big)\ \Rightarrow\ F_m(x)<F_m(\hat x),$$

for any two input vectors $x,\hat x\in\mathbb R_+^2$.

By increasing the amount of any input, the expected output increases over all matching classes. By increasing the type-$i$ input from $x_i$ to $\hat x_i>x_i$, the grade-$m$ distribution $G_{i,m}(\cdot\,|\,\hat x_i)$ first-order stochastically dominates the grade-$m$ distribution $G_{i,m}(\cdot\,|\,x_i)$. Since first-order stochastic dominance is preserved by taking the minimum across the component types, the grade-$m$ output $\hat Y_m$ (with $\hat x_i$ in $\hat x$) first-order stochastically dominates the output $Y_m$ (with $x_i$ in $x$), all else equal, which in turn implies the (strict) monotonicity of the firm’s total output in its inputs.

3 Minimum-Cost Selective Matching

3.1 Problem Statement

Each unit of a type-$i$ input is procured at the cost $c_i>0$, which implies that the firm is interested in minimizing the cost of generating a desired (expected) total output $q>0$. Thus, given the cost vector $c=(c_1,c_2)$, the firm’s (minimum-cost) selective matching problem is to determine the optimal input $x^*(q)=(x_1^*(q),x_2^*(q))$ (also referred to as its “output-driven demand”) which achieves its minimum cost,
$$C(q)=\min_{x\in\mathbb R_+^2}\ c\cdot x,\qquad\text{subject to: }F(x)\ge q,\tag{$*$}$$

for any output $q>0$. Clearly, the solution to the selective matching problem does not depend on the individual input costs, but only on their ratio $\gamma=c_1/c_2$.

3.2 Optimal Input

The firm’s output-driven demand (or optimal input) $x^*(q)$ minimizes the cost of its inputs to achieve a given output objective $q>0$. It will come as no surprise that at the optimum the expected total output must exactly equal the output objective. This “output efficiency” is a consequence of the strict monotonicity of expected output in its inputs and the fact that inputs are costly at the margin. In addition, the technical rate of substitution between the two inputs must, at the optimum, be equal to the cost ratio $\gamma$. This provides optimality conditions for the otherwise nonconvex optimization problem ($*$), summarized by the following result.

Theorem 1 .

For any expected total output $q>0$, a solution $x^*(q)$ to the minimum-cost selective matching problem ($*$) satisfies output efficiency,
$$F(x^*(q))=q;\tag{7}$$

in addition, at $x=x^*(q)$ the marginal rate of technical substitution between type-1 and type-2 components is equal to the cost ratio $\gamma=c_1/c_2$, so
$$\left.\frac{\sum_{m\in\mathcal M}\eta_m\,p_{1,m}\left(\Phi_m-\frac{1-p_{1,m}}{2\sigma_m}\,\phi_m\right)}{\sum_{m\in\mathcal M}\eta_m\,p_{2,m}\left(1-\Phi_m-\frac{1-p_{2,m}}{2\sigma_m}\,\phi_m\right)}\right|_{x=x^*(q)}=\gamma,\tag{8}$$

where $\Phi_m$ and $\phi_m$, for all $m\in\mathcal M$, are specified in La. 1.

Remark 3 (Nonconvexity).

Generally, the output objective $F(x)$ can be nonconcave in the vector of inputs $x$. Indeed, in the simple case of two grades where the firm only cares about grade-1 output, we have $M=2$ and $\eta=(\eta_1,\eta_2)=(1,0)$. The corresponding Hessian of $F$ satisfies
$$\det D^2F=\det\begin{bmatrix}\partial^2_{x_1x_1}F & \partial^2_{x_1x_2}F\\[2pt] \partial^2_{x_1x_2}F & \partial^2_{x_2x_2}F\end{bmatrix}=-\frac{p_{1,1}^2\,p_{2,1}^2\,(2-p_{1,1}-p_{2,1})^2}{8\,\sigma_1^4(x)}\,\phi^2\!\left(\frac{p_{2,1}x_2-p_{1,1}x_1}{\sigma_1(x)}\right)<0,$$

for all $x=(x_1,x_2)\in\mathbb R_{++}^2$, where $\sigma_1(x)=\sqrt{p_{1,1}(1-p_{1,1})x_1+p_{2,1}(1-p_{2,1})x_2}$ denotes the standard deviation of the grade-1 parts defect. The preceding determinant of the Hessian is equal to the product of its eigenvalues, which must therefore have different signs. As a consequence, the output objective is nonconvex in this case. Thus, the solution to the optimality conditions (7)–(8) may not be unique. As shown in Sec. 3.4, a simplified approximate output objective may yield a reasonable approximate solution (in closed form).

3.3 Envelope Approximation

Instead of the normal approximation of the expected output $F$ we now consider the envelope output,
$$\bar F(x)=\sum_{m\in\mathcal M}\eta_m\min\{\mu_{i,m}:i\in\mathcal I\}=\sum_{m\in\mathcal M}\eta_m\min\{p_{i,m}x_i:i\in\mathcal I\},\tag{9}$$

which is well-defined for all $x\in\mathbb R_+^2$. The envelope output $\bar F$ is an upper bound for $F$.

Lemma 3 .

$F(x)\le\bar F(x)$, for all $x\in\mathbb R_+^2$.

As the positive linear combination of minimum functions is concave, Jensen’s inequality implies that the expectation of the minimum is weakly smaller than the minimum of the expectations. An important question is how far, for a given input $x$, the envelope output $\bar F(x)$ may deviate from the reference output $F(x)$. We thus consider the absolute deviation,
$$R(x)\equiv|F(x)-\bar F(x)|=\bar F(x)-F(x)\ge 0,\qquad x\in\mathbb R_+^2,$$

referred to as the envelope approximation error. The next result shows that this error increases only sublinearly with the inputs.
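Lemma 3 can be illustrated numerically; the following sketch (hypothetical data with three matching classes and uniform weights, not taken from the paper) evaluates both the expected output of La. 1 and the envelope output of Eq. (9):

```python
import math

def norm_pdf(z): return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
def norm_cdf(z): return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_output(x, p1, p2, eta):
    """F(x): expected weighted output under the normal approximation (La. 1)."""
    total = 0.0
    for p1m, p2m, em in zip(p1, p2, eta):
        mu1, mu2 = p1m * x[0], p2m * x[1]
        sigma = math.sqrt(x[0] * p1m * (1 - p1m) + x[1] * p2m * (1 - p2m))
        z = (mu2 - mu1) / sigma
        total += em * (norm_cdf(z) * mu1 + (1 - norm_cdf(z)) * mu2 - norm_pdf(z) * sigma)
    return total

def envelope_output(x, p1, p2, eta):
    """The envelope output of Eq. (9)."""
    return sum(em * min(p1m * x[0], p2m * x[1])
               for p1m, p2m, em in zip(p1, p2, eta))

# Hypothetical data: M = 3 matching classes, uniform output weights.
p1, p2, eta = (0.5, 0.3, 0.2), (0.2, 0.3, 0.5), (1, 1, 1)
x = (120, 150)
F_val, Fbar_val = expected_output(x, p1, p2, eta), envelope_output(x, p1, p2, eta)
print(round(F_val, 2), round(Fbar_val, 2), F_val <= Fbar_val)  # envelope dominates
```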

Lemma 4 (Approximation Error).

For any $x\in\mathbb R_{++}^2$ it is
$$R(x)\le\frac{\sigma(x)}{\sqrt{2\pi}},$$

where $\sigma(x)=\sum_{m\in\mathcal M}\eta_m\sigma_m(x)$.

In other words, the envelope approximation error is at most proportional to the weighted sum of the grade-specific standard deviations $\sigma_m(x)=\sqrt{p_{1,m}(1-p_{1,m})x_1+p_{2,m}(1-p_{2,m})x_2}$, using the given output weight $\eta_m$, for each grade $m\in\mathcal M$. Note also that the preceding error bound is tight when $\eta_m\equiv\text{const.}$, $p_{i,m}\approx1/2$, and $x_{1,m}\approx x_{2,m}$. In a heterogeneous setting, the derived bound may be conservative, as it presumes that $p_{1,m}x_1=p_{2,m}x_2$ holds for all $m\in\mathcal M$; cf. Weber (2022). While the absolute deviation between $F$ and $\bar F$ may grow with the size of the inputs, in terms of the relative error,
$$r(x)=\frac{R(x)}{F(x)},\qquad x\in\mathbb R_{++}^2,$$

we obtain the following strong global approximation result.

Lemma 5 (Relative Approximation).

For any $\varepsilon\in(0,1)$, there exists a finite minimal envelope output $\bar q(\varepsilon)$, which is such that: 2
$$\bar F(x)\ge\bar q(\varepsilon)\ \Rightarrow\ r(x)\le\varepsilon.$$

For small $\varepsilon>0$, the minimal envelope output $\bar q(\varepsilon)$ is $O(\varepsilon^{-2})$.

Instead of a minimal envelope output $\bar q(\varepsilon)$ it is possible to specify a minimal input $\bar x(\varepsilon)$, which is also $O(\varepsilon^{-2})$, since by the homogeneity of $\bar F$ it is proportional to $\bar q(\varepsilon)$; cf. footnote 2. Then $\min\{x_1,x_2\}\ge\bar x(\varepsilon)$ also implies $r(x)\le\varepsilon$. A key driver of the relative approximation result in La. 5 is that, as a consequence of La. 4, the given upper bound of the relative error decreases with the inverse square root of the minimal input (i.e., with $(\min\{x_1,x_2\})^{-1/2}$). The given lower bounds for input or output are fairly conservative. In practice, it is preferable to perform an error correction based on the relative error at the solution to an envelope optimization problem.

3.4 Envelope Optimization

Based on the upper bound $\bar F$ for the production function introduced in Sec. 3.3, we now consider the envelope optimization problem,
$$\bar C(q)=\min_{x\in\mathbb R_+^2}\ c\cdot x,\qquad\text{subject to: }\bar F(x)\ge q,\tag{$**$}$$

for any given target output $q>0$. By construction, this optimization problem is convex, and in order to find a solution to ($**$), which will be referred to as an optimal envelope input $\bar x^*$, it is useful to combine all grades which have the same ratio of likelihoods (across component types),
$$\{\rho_\ell:\ell\in\{1,\dots,L\},\ \rho_1<\dots<\rho_L\}=\left\{\frac{p_{1,m}}{p_{2,m}}:m\in\mathcal M\right\},\tag{10}$$

to obtain $L\le M$ combined grades, where each such combined grade is characterized by a distinct likelihood ratio $\rho_\ell$. We can then rewrite the envelope output in Eq. (9) as follows:
$$\bar F(x)=\sum_{\ell=1}^{L}\hat\eta_\ell\min\{\rho_\ell x_1,x_2\},\tag{11}$$

where
$$\hat\eta_\ell=\sum_{m\in\mathcal M}\mathbf 1\{\rho_\ell=p_{1,m}/p_{2,m}\}\,\eta_m\,p_{2,m},\qquad\ell\in\{1,\dots,L\},$$

are the corresponding combined weights. The alternative representation of the envelope output in Eq. (11) allows for a closed-form representation of the solution to the envelope optimization problem ($**$).

Theorem 2 .

For any $q>0$, a solution to the envelope optimization problem ($**$) is
$$\bar x^*(q)=\bar\xi^{\ell^*}q,\tag{12}$$

where each $\bar\xi^{\ell}=(\bar\xi^{\ell}_1,\bar\xi^{\ell}_2)$, for $\ell\in\{1,\dots,L\}$, denotes a vertex of the iso-output contour for a unit output, with
$$\bar\xi^{\ell}_1=\left(\sum_{k=1}^{L}\hat\eta_k\min\{\rho_k,\rho_\ell\}\right)^{-1}\qquad\text{and}\qquad\bar\xi^{\ell}_2=\rho_\ell\,\bar\xi^{\ell}_1.$$

The optimal combined grade, $\ell^*=\max\{\ell\in\{1,\dots,L\}:s_\ell<\gamma\}$, determines the critical vertex $\bar\xi^{\ell^*}$, where the iso-output contour slopes $s_1,\dots,s_L$ are such that
$$0=s_1<s_\ell=\frac{\bar\xi^{\ell}_2-\bar\xi^{\ell-1}_2}{\bar\xi^{\ell-1}_1-\bar\xi^{\ell}_1},$$

for all $\ell\in\{2,\dots,L\}$.

The optimal envelope solution $\bar x^*$ is such that the slope of the iso-cost curves lies in the subgradient, i.e., $\gamma\in\partial\bar F(\bar x^*)$, and at the same time envelope output efficiency holds, in the sense that $\bar F(\bar x^*)=q$. The following example illustrates the result.

Example 1 .

Consider a selective-assembly system with $M=5$ matching classes featuring the matching-probability vectors $p_1=(0.4,0.2,0.1,0.1,0.2)$ for parts of type 1 and $p_2=(0.2,0.1,0.1,0.2,0.4)$ for parts of type 2. The unit-cost vector is $c=(3,1)$, so $\gamma=3$. Since the set of likelihood ratios, $\{p_{1,m}/p_{2,m}:m\in\mathcal M\}=\{1/2,1,2\}$, features only $L=3$ distinct elements, by Eq. (10) the relevant likelihood ratios (in increasing order) are $\rho_1=1/2$, $\rho_2=1$, and $\rho_3=2$. Given a uniform output-weight vector $\eta=(1,1,1,1,1)$, the envelope output can be written as in Eq. (11), with the combined weights $\hat\eta_1=0.6$ (for $\rho_1=p_{1,4}/p_{2,4}=p_{1,5}/p_{2,5}$), $\hat\eta_2=0.1$ (for $\rho_2=p_{1,3}/p_{2,3}$), and $\hat\eta_3=0.3$ (for $\rho_3=p_{1,1}/p_{2,1}=p_{1,2}/p_{2,2}$):
$$\bar F(x_1,x_2)=0.6\min\{x_1/2,x_2\}+0.1\min\{x_1,x_2\}+0.3\min\{2x_1,x_2\}.$$

The corresponding unit-output vertices are $\bar\xi^1=(2,1)$, $\bar\xi^2=(10/7,10/7)$, and $\bar\xi^3=(1,2)$, resulting in the iso-output contour slopes $s_1=0<s_2=3/4<s_3=4/3$. Comparing the latter with the cost ratio $\gamma=3$, we find $\ell^*=\max\{\ell\in\{1,2,3\}:s_\ell<\gamma\}=3$, so that $\bar\xi^3$ becomes the critical vertex. By Thm. 2, the solution to the envelope optimization problem ($**$) is therefore $\bar x^*(q)=(1,2)\,q=(q,2q)$, for any $q>0$. Fig. 3 depicts the corresponding situation comparing, for $q=100$, the iso-$F$ contour with the iso-$\bar F$ contour. We also note that to $\ell^*=3$ there correspond two critical grades, $m^*\in\{1,2\}$. The solution to the envelope optimization problem is consistent with minimizing input cost only with respect to the critical grades.
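The construction in Thm. 2 can be retraced numerically for the data of Example 1; the sketch below computes the combined weights, the unit-output vertices, the contour slopes, and the critical vertex:

```python
# Retracing Theorem 2 on the data of Example 1.
p1 = (0.4, 0.2, 0.1, 0.1, 0.2)
p2 = (0.2, 0.1, 0.1, 0.2, 0.4)
eta = (1, 1, 1, 1, 1)
gamma = 3.0  # cost ratio c1 / c2

# Combined grades: distinct likelihood ratios (Eq. (10)) and combined weights.
rhos = sorted({p1m / p2m for p1m, p2m in zip(p1, p2)})
eta_hat = [sum(em * p2m for p1m, p2m, em in zip(p1, p2, eta) if p1m / p2m == r)
           for r in rhos]

# Unit-output vertices of the iso-output contour.
xi = []
for r in rhos:
    xi1 = 1.0 / sum(ek * min(rk, r) for ek, rk in zip(eta_hat, rhos))
    xi.append((xi1, r * xi1))

# Iso-output contour slopes and the critical vertex.
slopes = [0.0] + [(xi[l][1] - xi[l - 1][1]) / (xi[l - 1][0] - xi[l][0])
                  for l in range(1, len(rhos))]
l_star = max(l for l, s in enumerate(slopes) if s < gamma)
print(xi[l_star])  # → critical vertex, approximately (1.0, 2.0)
```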

Remark 4 (Alternative Representation).

From a computational viewpoint, instead of determining the set $\{\rho_1,\dots,\rho_L\}$ from the likelihood ratios $p_{1,m}/p_{2,m}$, it is usually more convenient to compute $\bar F(\bar x^m(q))$ for all $M$ candidate solutions $\bar x^m(q)=(\bar x^m_1(q),\bar x^m_2(q))$, with
$$\bar x^m_i(q)=\left(\sum_{k\in\mathcal M}\eta_k\min\left\{\frac{p_{j,k}}{p_{j,m}}:j\in\mathcal I\right\}\right)^{-1}\frac{q}{p_{i,m}},\qquad(i,m)\in\mathcal I\times\mathcal M,\tag{13}$$

instead of using Eq. (12), and to then select the optimal envelope input as the cheapest of these, so $\bar x^*(q)=\bar x^{m^*}(q)$, with
$$m^*\in\arg\min_{m\in\mathcal M}\{c\cdot\bar x^m(1)\}.\tag{14}$$

The critical grade $m^*$ thereby does not depend on the magnitude of the output, since both the cost and the envelope output function are homogeneous (of degree 1).

Example 2 .

In the setting of Ex. 1, we find $\bar x^1(1)=\bar x^2(1)=(1,2)$, $\bar x^3(1)=(10/7,10/7)$, and $\bar x^4(1)=\bar x^5(1)=(2,1)$. Since $c\cdot\bar x^1(1)=5<c\cdot\bar x^3(1)=40/7<c\cdot\bar x^5(1)=7$, this implies the critical grades $m^*\in\arg\min_{m\in\mathcal M}\{c\cdot\bar x^m(1)\}=\{1,2\}$, and thus the solution to the envelope optimization problem, $\bar x^*(q)=q\,\bar x^{m^*}(1)=(q,2q)$, for all $q>0$, as before.
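Remark 4’s candidate-based recipe, Eqs. (13)-(14), reproduces Example 2 in a few lines:

```python
# Remark 4's candidate-based computation, reproducing Example 2.
p = [(0.4, 0.2, 0.1, 0.1, 0.2),   # p_{1,m}
     (0.2, 0.1, 0.1, 0.2, 0.4)]   # p_{2,m}
eta = (1, 1, 1, 1, 1)
c = (3.0, 1.0)

def candidate(m, q=1.0):
    """Candidate envelope input per Eq. (13), for critical-grade candidate m."""
    scale = 1.0 / sum(ek * min(p[0][k] / p[0][m], p[1][k] / p[1][m])
                      for k, ek in enumerate(eta))
    return tuple(scale * q / p[i][m] for i in range(2))

costs = [c[0] * candidate(m)[0] + c[1] * candidate(m)[1] for m in range(len(eta))]
m_star = min(range(len(eta)), key=costs.__getitem__)
print([round(v, 4) for v in costs])  # → [5.0, 5.0, 5.7143, 7.0, 7.0]
print(candidate(m_star))             # → (1.0, 2.0), the envelope input per unit output
```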

3.5 Approximate Solution

By La. 3 the envelope output $\bar F$ exceeds the expected output $F$. For any target output $q>0$, output efficiency (i.e., the fact that $\bar F(\bar x^*(q))=q$) means the solution $\bar x^*(q)$ to the envelope optimization problem ($**$) is not a feasible point of the original selective matching problem ($*$). However, since the envelope objective is homogeneous (of degree 1), it is enough to scale the optimal envelope input to achieve feasibility, leading to an approximate solution to the selective matching problem, 3
$$\hat x^*(q)=\left[\frac{q}{F(\bar x^*(q))}\right]\bar x^*(q)=\bar x^*(1)\,\frac{q^2}{F(\bar x^*(q))},\tag{15}$$

which in itself is sublinear in $q$ (as $F$ is convex in $q$ due to a risk-pooling effect). Consequently, the approximate cost function,
$$\hat C(q)=\left(\sum_{k\in\mathcal M}\eta_k\min_{i\in\mathcal I}\left\{\frac{p_{i,k}}{p_{i,m^*}}\right\}\right)^{-1}\left(\sum_{i\in\mathcal I}\frac{c_i}{p_{i,m^*}}\right)\frac{q^2}{F(\bar x^*(q))},\tag{16}$$

is an upper bound for $C(q)$ in ($*$), where the critical grade $m^*$ is determined by Eq. (14). On the other hand, the optimal envelope cost, $\bar C(q)=c\cdot\bar x^*(q)$, must constitute a lower bound for the optimal cost, so
$$\bar C(q)=c\cdot\bar x^*(q)\ \le\ C(q)=c\cdot x^*(q)\ \le\ \hat C(q)=c\cdot\hat x^*(q),\tag{17}$$

for all $q>0$. At the approximate solution $\hat x^*(q)$, the relative output error becomes
$$\varepsilon(q)=\frac{\bar F(\hat x^*(q))-F(\hat x^*(q))}{F(\hat x^*(q))}=\frac{q^2}{F(\bar x^*(q))\cdot F(\hat x^*(q))}-1\ \le\ \frac{q}{F(\bar x^*(q))}-1,\tag{18}$$

since, by construction, $F(\hat x^*(q))\ge q$. We note that the term on the right-hand side of the preceding inequality (18) also bounds the relative cost overage,
$$\frac{\hat C(q)-C(q)}{C(q)}=\frac{c\cdot\hat x^*(q)}{c\cdot x^*(q)}-1\ \le\ \frac{q}{F(\bar x^*(q))}-1,\tag{19}$$

where we have used Eqs. (15) and (17). Based on the preceding inequality, it is now possible to provide an a priori performance guarantee for the relative cost overage which depends only on the desired level of production output.

Lemma 6 (Relative Cost Overage).

For any target output $q>0$, the relative cost overage is such that
$$\frac{\hat C(q)-C(q)}{C(q)}\le\left(\frac{2\sqrt{1-p_{\min}}}{\sqrt{\pi\,p_{\min}}}\right)\frac{1}{\sqrt q},\tag{20}$$

where $p_{\min}=\min_{(i,m)\in\mathcal I\times\mathcal M}\{p_{i,m}\}>0$.

In particular, the relative cost overage is $O(q^{-1/2})$, in the sense that it is inversely proportional to the square root of the target output. For example, if $p_{\min}=10\%$, the upper bound for the relative cost overage (corresponding to the right-hand side in Eq. (20)) becomes $(6/\sqrt\pi)/\sqrt q\approx3.3851/\sqrt q$; this conservative bound drops below 10% for target outputs of 1,146 units and beyond. The latter value corresponds to the input bound implied by La. 5; cf. footnote 2.
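The numbers in this paragraph follow directly from Eq. (20) of Lemma 6:

```python
import math

# A priori guarantee of Lemma 6 (Eq. (20)) for p_min = 10%.
p_min = 0.10
coeff = 2 * math.sqrt(1 - p_min) / math.sqrt(math.pi * p_min)  # equals 6/sqrt(pi)
print(round(coeff, 4))  # → 3.3851

# Smallest target output for which the bound coeff / sqrt(q) drops below 10%.
q_threshold = math.ceil((coeff / 0.10) ** 2)
print(q_threshold)  # → 1146
```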

Example 3 .

In the setting of Exs. 1 and 2, the solution to the envelope optimization problem, $\bar x^*(q)=(q,2q)$, produces the expected output $F(\bar x^*(q))$ of about 94.6345 for $q=100$ (resp., an expected output of about 983.2032 for $q=1000$). Accordingly, the approximate solution of the selective matching problem is $\hat x^*(100)=[100/94.6345]\,(100,200)\approx(105.67,211.34)$ with relative output error $\varepsilon(100)\approx5.499\%\le5.67\%$ (resp., $\hat x^*(1000)\approx(1017.1,2034.2)$ with $\varepsilon(1000)\approx1.692\%\le1.71\%$). By Eq. (19) the upper bound of 5.67% (resp., 1.71%) also majorizes the relative cost overage incurred by the approximate solution $\hat x^*(q)$ to the selective matching problem compared to the optimal solution $x^*(q)$, for $q=100$ (resp., for $q=1000$).
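The figures of Example 3 can be reproduced by combining La. 1 (for the expected output) with the re-scaling of Eq. (15); the relative-error bound is the term $q/F(\bar x^*(q))-1$ from Eq. (18):

```python
import math

def norm_pdf(z): return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
def norm_cdf(z): return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p1 = (0.4, 0.2, 0.1, 0.1, 0.2)
p2 = (0.2, 0.1, 0.1, 0.2, 0.4)

def F(x1, x2):
    """Expected output for Exs. 1-3 (uniform weights), via Eq. (6) of La. 1."""
    total = 0.0
    for p1m, p2m in zip(p1, p2):
        mu1, mu2 = p1m * x1, p2m * x2
        sigma = math.sqrt(x1 * p1m * (1 - p1m) + x2 * p2m * (1 - p2m))
        z = (mu2 - mu1) / sigma
        total += norm_cdf(z) * mu1 + (1 - norm_cdf(z)) * mu2 - norm_pdf(z) * sigma
    return total

for q in (100, 1000):
    Fq = F(q, 2 * q)                      # expected output at the envelope solution
    x_hat = (q / Fq * q, q / Fq * 2 * q)  # re-scaled approximate solution, Eq. (15)
    bound = q / Fq - 1                    # relative-error bound from Eq. (18)
    print(q, round(Fq, 4), tuple(round(v, 1) for v in x_hat), round(bound, 4))
```

The printed values agree with Example 3’s expected outputs (about 94.6345 and 983.2032) and error bounds (about 5.67% and 1.71%).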

3.6 Cost of Selective Matching

To gauge the cost of selective matching as a function of the grade cardinality M = |M|, we consider a symmetric setting with maximum entropy, where all grades are attained with equal probability (i.e., pi = (1/M,…,1/M), for i ∈ I), and where the output weights and the unit costs are all equal (i.e., ηm ≡ 1 and γ = 1). Assuming a symmetric input x = (N,N) for some N ≥ M, La. 1 yields the expected grade-m output, Fm(N,N) = N/M − √((N/(πM))·(1 − 1/M)), m ∈ M.

Summing over all matching grades m ∈ M, with F(N,N) = M·Fm(N,N), yields the (symmetric) relative output loss for selective matching, L(N,M) = (N − F(N,N))/N = √((M−1)/(πN)) ≤ 1/√π,

which is increasing (sublinearly) in M ≥ 2 and decreasing in N ≥ M. Since by symmetry Eq. (8) in Thm. 1 is automatically satisfied, the cost-minimizing input quantity N*(q) for both type-i inputs, given a desired output q ≥ M, is entirely determined by the output-efficiency condition (7), N*(q) = q + ((M−1)/(2π))·(1 + √(1 + 4πq/(M−1))), q ≥ M.

This output-driven demand leads to a relative input overage, O(q) = (N*(q) − q)/q = ((M−1)/(2πq))·(1 + √(1 + 4πq/(M−1))),

which is decreasing in the target output quantity q, due to risk pooling across matching classes. The value of O(q) is also equal to the relative cost overage (i.e., O(q) = (C(q) − 2cq)/(2cq), for any c = (c,c) with c > 0) compared with perfect matching. 4 However, this relative overage increases almost linearly in the number of excess matching classes M − 1.

Example 4.

When transitioning from M = 20 to M = 40 matching classes, the relative overage in input or cost increases by a factor of at least 1.9279 (for q ≤ 1), all else equal. For q → 0, the factor approaches (40−1)/(20−1) ≈ 2.0526, whereas for q → ∞ it tends to √(39/19) ≈ 1.4327. That is, for small target outputs q the relative overage increases almost linearly in the number of excess matching classes, while the increase is only of square-root order when target outputs are relatively large.
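The closed forms for N*(q) and O(q) from Sec. 3.6 make the figures in Ex. 4 easy to reproduce; the following sketch verifies the factor 1.9279 at q = 1.

```python
import math

# Symmetric cost-minimizing input (output-efficiency condition (7)) and
# relative input overage O(q), as derived in Sec. 3.6.
def N_star(q, M):
    a = (M - 1) / (2.0 * math.pi)
    return q + a * (1.0 + math.sqrt(1.0 + 4.0 * math.pi * q / (M - 1)))

def overage(q, M):
    return (N_star(q, M) - q) / q

factor_at_1 = overage(1, 40) / overage(1, 20)   # ~ 1.9279
```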

3.7 Multi-Type Matching

When there are more than two part types (i.e., for I = {1,…,I} with I > 2), an explicit formula for the expected output under the normal approximation is no longer available. This in turn makes it more difficult to derive solid approximation results. The corresponding I-type envelope optimization problem can still be solved explicitly. Eqs. (13) and (14) remain valid, and they determine an optimal solution x̄*(q) of the I-type envelope optimization problem (∗∗). The re-scaling in Eq. (15) still applies, and so does Eq. (16) for the approximate cost. To illustrate the behavior of the approximation error r(x̄*(q)) for I = 4 compared to I = 2, we can “double” Ex. 1 by setting p3 = p1 and p4 = p2 for the additional component types i = 3 and i = 4. This ensures a fair comparison when the number of component types increases by a factor of 2. Accordingly, the optimal solution to the envelope optimization problem for I = 4 becomes x̄*(q) = (q, 2q, q, 2q) (compared to x̄*(q) = (q, 2q) for I = 2; cf. Ex. 2). Fig. 4(a) shows the behavior of the relative approximation error, r(x̄*(q)) (defined after La. 4), as a function of q ∈ [500, 10000]. The (true) expected output is obtained (approximately) as the sample average using the corresponding normal distribution (for N = 10000 joint samples) for each matching class, according to Eq. (9). Indeed, the law of large numbers guarantees a consistent approximation of the population mean by the sample mean.

The relative error for I = 4, while larger on average by a factor of about 2.94 than for I = 2 on the given range (and decreasing in q; see Fig. 4(b)), drops below 5% shortly after passing q = 1000, so that for industrial-size problems the provided approximation continues to be reasonable. A more precise treatment of multi-type matching remains an interesting topic for further research.
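The sample-average estimation of the expected output described above can be sketched as follows; the instance (two types, two grades, and the probabilities used) is hypothetical, and the grade yields are drawn from the normal approximation of the multinomial sorting counts, as in the text.

```python
import math, random

rng = random.Random(0)

def expected_output_mc(x, p, n_samples=20_000):
    # Sample-average estimate of the expected output for I part types: per
    # grade m, the yield is min_i X_{i,m}, where X_{i,m} is approximately
    # Normal(x_i*p_{i,m}, x_i*p_{i,m}*(1 - p_{i,m})); unit output weights.
    I, M = len(x), len(p[0])
    total = 0.0
    for _ in range(n_samples):
        for m in range(M):
            total += min(rng.gauss(x[i] * p[i][m],
                                   math.sqrt(x[i] * p[i][m] * (1 - p[i][m])))
                         for i in range(I))
    return total / n_samples

# hypothetical two-type, two-grade instance
p = [[0.5, 0.5], [0.4, 0.6]]
F = expected_output_mc([1000, 1200], p)
```

The estimate F must lie strictly below the envelope output Σm min_i{x_i p_{i,m}} = 980 for this instance, consistent with La. 3.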

4 Application

Consider the production of the oscillator for a mechanical watch movement, where hairsprings (i.e., type-1 components) and balance wheels (i.e., type-2 components) are mated across M ≥ 2 matching classes, with M an even number. Modern hairsprings are often made of silicon for its antimagnetic properties and are delivered on silicon wafers, with substantial variation of their stiffness across any sample. We assume that the spring coefficient S1 [in Nm/rad] is random with realizations in the interval [a1,b1]. Balance wheels are usually made from metal alloys (e.g., containing beryllium or gold among other constituents) and are characterized by the moment of inertia S2 [in kg·m²]. The latter can be considered a random variable with realizations in the interval [a2,b2]. The prespecified constants ai and bi are such that 0 < ai < bi < ∞. As already pointed out in footnote 1, the key characteristic of the assembled oscillator is the oscillation period, T = 2π·√(S2/S1),

which, itself a random variable, should have realizations close to the target period t0 for the particular movement under consideration. For example, 1/t0 ∈ {2.5, 3, 3.5, 4, 5} Hz corresponds to today's most common beat rates, BR = 2·(3600 s)/t0 ∈ {18000, 21600, 25200, 28800, 36000}, which are measured in half-periods (“ticks”) per hour.

4.1 Model Identification

The four sets of primitives for the selective-assembly model, (i)–(iv), are now identified in turn, first the matching classes (i), then the parameters of the multinomial distribution for the sorting products (ii), the output values (iii), and finally the unit input costs (iv).

(i) Matching Classes.

The M type-i matching classes Ci,m = [si,m−1, si,m), for m ∈ M∖{M}, and Ci,M = [si,M−1, si,M] for m = M, feature M+1 breakpoints si,0,…,si,M such that ai = si,0 < si,1 < ⋯ < si,M−1 < si,M = bi,

for i ∈ I. According to Weber (2021), the optimal set of breakpoints, which minimize the maximum absolute deviation of T from t0 in this setup, are such that the maximum matching error e is equal over all matching classes, so e = max(s1,s2)∈Cm |2π·√(s2/s1) − t0|, m ∈ M,

where Cm = C1,m × C2,m denotes the m-th joint matching class. This approach requires a consistency condition, namely that the ratio of the endpoints β = bi/ai is constant across the component types i ∈ I, so e/t0 = (β^(1/M) − 1)/(β^(1/M) + 1).

The latter can always be achieved by increasing (resp., decreasing) one of the four endpoints, by extending the range of one characteristic (or by adding a zero-value matching class). The optimal set of breakpoints is then given by si,2k = ((t0+e)/(t0−e))^(2k)·ai, and si,2k+1 = { s2,2k·[(2π)/(t0−e)]², if i = 1; s1,2k·[(t0+e)/(2π)]², if i = 2 },

for k ∈ {0,…,M/2} and i ∈ I, where we limit attention to the case where M is even.
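The breakpoint construction above can be sketched numerically; the endpoints and t0 below anticipate the values used in Sec. 4.3, while the class count M = 58 is one admissible even choice. By construction, the last breakpoint must recover the upper endpoint bi, since ((t0+e)/(t0−e))^M = β.

```python
import math

a = {1: 2.94e-7, 2: 4.655e-10}   # lower endpoints (stiffness, inertia)
b = {1: 3.06e-7, 2: 4.845e-10}   # upper endpoints
t0, M = 0.25, 58                 # target period [s], even number of classes

beta = b[1] / a[1]               # consistency: b[2]/a[2] equals b[1]/a[1]
r = (beta ** (1.0 / M) - 1.0) / (beta ** (1.0 / M) + 1.0)
e = r * t0                       # maximum matching error [s]

s = {1: [0.0] * (M + 1), 2: [0.0] * (M + 1)}
for k in range(M // 2 + 1):
    for i in (1, 2):
        s[i][2 * k] = ((t0 + e) / (t0 - e)) ** (2 * k) * a[i]
    if 2 * k + 1 <= M:
        s[1][2 * k + 1] = s[2][2 * k] * ((2.0 * math.pi) / (t0 - e)) ** 2
        s[2][2 * k + 1] = s[1][2 * k] * ((t0 + e) / (2.0 * math.pi)) ** 2
```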

(ii) Distributional Parameters.

Given a distribution of the random part characteristics, the multinomial probabilities pi,m, for (i,m) ∈ I×M, follow directly from Eq. (1) in Sec. 2.1. Alternatively, it is possible to avoid assumptions about the underlying distribution of characteristics and proceed in a fully data-driven way, based on repeated observation of realizations xi(k) = (xi,1(k),…,xi,M(k)) for the random sorting product Xi. This yields the maximum-likelihood estimates p̂i = (p̂i,1,…,p̂i,M) of the unknown parameters pi = (pi,1,…,pi,M) of the multinomial distribution,(21) p̂i,m = (Σk=1K xi,m(k))/(Σm′∈M Σk=1K xi,m′(k)),

where k ∈ {1,…,K} is the batch number, xi,m(k) denotes the number of grade-m components in batch k, and K the number of observed batches. That is, the maximum-likelihood estimates for the component likelihoods in their respective type-grade combinations correspond simply to their respective frequencies over all observed samples.
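The frequency estimator in Eq. (21) can be sketched in a few lines; the batch counts below are hypothetical (one part type, K = 3 batches, M = 4 grades).

```python
# Empirical grade frequencies as maximum-likelihood estimates (Eq. (21)).
batches = [
    [12, 30, 41, 17],
    [ 9, 28, 45, 18],
    [11, 33, 38, 18],
]
M = len(batches[0])
totals = [sum(batch[m] for batch in batches) for m in range(M)]
p_hat = [t / sum(totals) for t in totals]   # sums to one by construction
```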

Remark 5 (Off-Spec Components).

Because of the empirical regularity that a received batch of input parts may well contain items that do not fit into any of the matching classes, the following question arises: What happens when relaxing the normalization in Eq. (2) to account for “off-spec” components? Specifically, if xi(k) denotes the size of batch k, then the expression for p̂i,m in Eq. (21) implicitly allows for off-spec components (which do not fit in any Ci,m), since generally Σm∈M Σk=1K xi,m(k) ≤ Σk=1K xi(k), while at the same time p̂i,1 + ⋯ + p̂i,M = 1, satisfying Eq. (2). Yet, to explicitly account for the possibility of “off-spec” components let p̂i,0 = 1 − (Σm∈M Σk=1K xi,m(k))/(Σk=1K xi(k)), i ∈ I,

denote the observed frequency of type-i reject components, as the maximum-likelihood estimate of the true underlying reject probability pi,0. Then, by adding a matching class 0 with output weight η0 = 0, by Eq. (13) one can simply rescale the approximate solution to the selective matching problem in Eq. (15) to account for the reject probability:(22) x̂i*(q) = (1/(1 − pi,0))·[q/F(x̄*(q))]·x̄i*(q), i ∈ I,

where q > 0 is any given output target and x̄i*(q) is given in Thm. 2 (with pi,m replaced by p̂i,m). Hence, we can decouple the consideration of off-spec components from the main analysis (which is purely “on-spec”) and then rescale the approximate solution ex post.
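The ex-post rescaling of Eq. (22) then amounts to one additional division per part type; in the sketch below, the Ex.-3 values q = 100, F(x̄*(q)) = 94.6345, and x̄*(q) = (100, 200) are reused, while the reject probabilities (2% and 5%) are hypothetical.

```python
# Rescale the approximate solution for off-spec ("reject") components,
# as in Eq. (22): x_hat_i = [1/(1 - p_{i,0})] * [q/F(x_bar*(q))] * x_bar_i*(q).
def rescale_for_rejects(q, F_at_envelope, x_bar_star, p0):
    scale = q / F_at_envelope
    return tuple(scale * xi / (1.0 - p0_i) for xi, p0_i in zip(x_bar_star, p0))

x_hat = rescale_for_rejects(100, 94.6345, (100, 200), (0.02, 0.05))
```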

(iii) Net Unit Values.

The weighted total output in Eq. (4) requires a vector of positive weights η = (η1,…,ηM). Since the quality of the assembled mechanical oscillators does not depend on the index m of the matching grade, it is most natural for the firm to set all ηm = 1 and consider standard output, i.e., Q = Y1 + ⋯ + YM. In standard selective-manufacturing applications, no grade is privileged over others (except possibly for a designated “reject” grade of negligible net unit value); in that case, the grade sorting serves the sole purpose of sharpening the precision of the assembly output given the quality imprecisions of the component inputs.

(iv) Unit Input Costs.

The cost function in the selective matching problem (∗) depends on the vector c = (c1,c2) of the unit costs for the two component types. However, as evident from Thm. 1, for any given output quantity q > 0 the optimal input vector x*(q) depends only on the ratio γ = c1/c2 of the unit costs, so that there is no need to know their absolute values to determine the composition of the firm's optimal input portfolio. We assume that the unit cost of a balance wheel is about 10 to 20 times the unit cost of a silicon hairspring, so γ ∈ [0.05, 0.1]. In the numerical example, we set c = (1, 10), so γ = 0.1; the results for γ = 0.05 are qualitatively very similar.

4.2 Data Generation

To generate synthetic samples representing the observed stochastic supply characteristics, assume that the normalized type-i component characteristic, Zi = (Si − μSi)/σSi,

follows a standard normal distribution with cdf Φ(·), where μSi ∈ (ai,bi) is the mean and σSi > 0 the standard deviation of the (stationary) component population. Given the distributional assumption, the multinomial probabilities (conditional on parts being on-spec) can be obtained by adjusting Eq. (1) as follows: pi,m = (1/(1 − pi,0))·(Φ((si,m − μSi)/σSi) − Φ((si,m−1 − μSi)/σSi)), (i,m) ∈ I×M.

Note that with positive probability some realizations si of the random component characteristic Si may be “off-spec,” in which case we use input scaling as noted in Sec. 4.1 (ii). The off-spec probability in the component population is pi,0 = 1 − (Φ((bi − μSi)/σSi) − Φ((ai − μSi)/σSi)). As explained in Remark 5 on off-spec components, the renormalization of the multinomial probabilities has no effect on the likelihood ratios in Eq. (13), so that it is enough to rescale the approximate solution as in Eq. (22), an approach which is implemented here. Finally, we assume that both distributions are centered, so μSi = (ai + bi)/2 for i ∈ I, but that the coefficient of variation is twice as large for hairsprings as for balance wheels, with σS1/μS1 = 2% and σS2/μS2 = 1%.
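The renormalized class probabilities can be computed from the normal cdf as sketched below; the breakpoints, mean, and standard deviation in the example call are hypothetical placeholders, not the paper's calibrated values.

```python
from math import erf, sqrt

def Phi(z):
    # standard normal cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def on_spec_probabilities(breakpoints, mu, sigma):
    # Multinomial probabilities conditional on parts being on-spec,
    # renormalized by 1 - p_{i,0} as in Sec. 4.2.
    z = [(s - mu) / sigma for s in breakpoints]
    raw = [Phi(z[m + 1]) - Phi(z[m]) for m in range(len(z) - 1)]
    p0 = 1.0 - (Phi(z[-1]) - Phi(z[0]))   # off-spec probability
    return [pm / (1.0 - p0) for pm in raw], p0

# hypothetical breakpoints for a characteristic centered at 3.0 with 2% cv
p, p0 = on_spec_probabilities([2.94, 2.98, 3.02, 3.06], mu=3.0, sigma=0.06)
```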

4.3 Performance Analysis

Let (a1,b1) = (2.94, 3.06)×10−7 [in Nm/rad] be the boundaries of the hairspring stiffness and (a2,b2) = (4.655, 4.845)×10−10 [in kg·m²] the boundaries of the balance-wheel inertia (in either case corresponding to a spread of ±2% around a central value), so β = b1/a1 = b2/a2, as required in Sec. 4.1 (i), where we also provide the optimal set of breakpoints for the movement frequency of 1/t0 = 4 Hz, and thus the optimal matching classes Ci,m. Clearly, the maximum matching error e decreases in the number of matching classes M ≥ 2, which is a choice variable. For practical purposes it is useful to interpret e/t0 in terms of “deviation per day” (dpd) [in seconds/day, i.e., s/d]: since 4×t0 = 1 s, a relative period error of e/t0 accumulates to dpd = 86,400×(e/t0) = (345,600×e)/(1 s) seconds over one day, so(23) M(dpd) = ⌈log(β)/log((1 + dpd/86,400)/(1 − dpd/86,400))⌉,

where β ≈ 1.040816327. That is, Eq. (23) provides the number of matching classes necessary to guarantee a given dpd, as shown in Fig. 5. For example, it is M(60) = 29, M(30) = 58, and M(15) = 116. Finally, for a rather fine partition of part characteristics into M(5) = 346 matching classes it would be possible (at least theoretically) to achieve the COSC standard of ±5 s/d without any watchmaker intervention after assembly. Indeed, the Contrôle Officiel Suisse des Chronomètres (COSC) requires mechanical watch movements to run within −4 s and +6 s per day to issue a “chronometer” certification, which is equivalent to a dpd of 5. For practical purposes, and to stay exactly in the framework of Weber (2021) used in Sec. 4.1 (i), we round up to the smallest even number of matching classes, thus considering 2⌈M(dpd)/2⌉ instead of M(dpd) in Eq. (23), as indicated in Fig. 5.
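Eq. (23) and the even-rounding rule can be sketched directly; the values below reproduce M(60) = 29, M(30) = 58, M(15) = 116, and M(5) = 346 as quoted in the text.

```python
import math

def M_of_dpd(dpd, beta=3.06 / 2.94):
    # Eq. (23): smallest class count guaranteeing a daily deviation dpd [s/d];
    # r = dpd/86400 is the corresponding relative period error e/t0.
    r = dpd / 86_400.0
    return math.ceil(math.log(beta) / math.log((1 + r) / (1 - r)))

M_values = [M_of_dpd(d) for d in (60, 30, 15, 5)]          # [29, 58, 116, 346]
M_even = [2 * math.ceil(m / 2) for m in M_values]          # [30, 58, 116, 346]
```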

To guarantee practical implementability and feasibility, the approximate solution x̂*(q) in Eq. (15) is rounded up to the integer input vector ⌈x̂*(q)⌉, with the next-highest integers as components (i.e., ⌈x̂i*(q)⌉, for i ∈ I). An integer discretization (determined by optimality) is also applied to the solution x*(q) of the original selective matching problem (∗). This solution discretization accounts for some of the fluctuations in Fig. 6, which depicts the optimal costs (in terms of C and Ĉ; cf. Eq. (17)) and optimal inputs. To ensure comparability, the costs are normalized to the input cost C0(q) = (c1+c2)·q without matching classes (where M = 1), and the input solutions are normalized to the “overhead-free” input q for all types i ∈ I in that case. As shown in Fig. 6(a), the normalized cost decreases in the target quantity q, which reflects a risk-pooling effect implied by the sublinear increase of the standard deviation of the part characteristics in the amount ordered. It is also apparent that the deviation between the optimal cost C (for optimal integer inputs determined via grid search) and the cost approximation Ĉ is quite negligible for realistic output targets. Moreover, as the dpd-precision requirement becomes more demanding (i.e., smaller), the normalized cost increases very fast, well exceeding 4 for COSC chronometer specifications. Fig. 6(b) illustrates (for γ = 0.1) that these cost increases are driven by a massive over-ordering of the ten-times cheaper hairsprings as “buffer components.” The over-ordering increases both in the production target and in the precision requirement.

5 Conclusion

The envelope approximation of the firm's output objective proved effective in determining a simple closed-form solution for the cost-optimal input portfolio in Eqs. (13) and (22), which includes the possibility of scrap parts. In that solution, somewhat intentionally, the notation of the index set I for the different part types (e.g., hairsprings and balance wheels in the watchmaking application) has been used for displaying the results. Indeed, an extension to I = {1,…,I} with I > 2 part types (e.g., balls, inner/outer rings, and cages, in the case of a ball-bearing assembly) is quite straightforward and requires no adjustment of these formulas. While this naturally applies to the envelope output in Eq. (9), the same cannot be said for the expected output in Eq. (6), for which a simple closed-form expression may not be within easy reach for more than two input part types.

From a practical viewpoint, a doubling of the number of matching classes tends to roughly double the precision of output parts. 5 As noted in Sec. 3.6, it also leads to almost a doubling of the relative input overage, which is the number of extra units required compared with perfect matching, where the latter corresponds to random matching with equal inputs without quality concerns (equivalent to using a single matching class). That is, all else equal, improving the output tolerance by a certain factor basically multiplies the overage cost by that factor. In addition, one should not overlook the fact that unused parts remain in matching classes for which the counterpart of the other type is stocked out and on backorder. Yet, the residual inventory vector serves as a buffer for the next order and thus increases expected output relative to an empty stock. A formulation of the corresponding dynamic reordering policy, taking into account parts residuals, is an interesting topic for future research.

Appendix: Proofs

Proof of Lemma 1. Let m ∈ M be a given matching grade and let x ∈ ℝ₊² be an arbitrary input vector. By Eq. (3) the firm's random grade-m output (conditional on x), Ym = min{X1,m, X2,m} = X1,m + min{0, X2,m − X1,m} = X1,m + min{0, Δm},

can be viewed as a sum of the grade-m type-1 output and the negative part of the grade-m parts defect, Δm = X2,m − X1,m. As the difference of two correlated and normally distributed random variables, the latter follows a normal distribution with mean δm = μ2,m − μ1,m and variance σm² = σ1,m² + σ2,m². Thus, the expected grade-m output is Fm(x) = E[Ym|x] = μ1,m + E[min{0,Δm}|x], where E[min{0,Δm}|x] = δm + σm·E[min{−δm/σm, (Δm − δm)/σm}|x].

On the other hand, with (Δm − δm)/σm following a standard normal distribution, E[min{−δm/σm, (Δm − δm)/σm}|x] = (−δm/σm)·(1 − Φ(−δm/σm)) + ∫_{−∞}^{−δm/σm} ξ dΦ(ξ).

Taking into account the symmetry of ϕ(·), the last term can be computed explicitly, ∫_{−∞}^{−δm/σm} ξ dΦ(ξ) = [−exp(−ξ²/2)/√(2π)]_{ξ→−∞}^{ξ=−δm/σm} = −ϕ(−δm/σm) = −ϕ(δm/σm).

Hence, the expected grade-m output becomes Fm(x) = μ1,m + δm − δm·(1 − Φ(−δm/σm)) − σm·ϕ(δm/σm).

By virtue of the fact that Φ(−ξ) = 1 − Φ(ξ), for all ξ, the preceding expression simplifies, Fm(x) = Φm·μ1,m + (1 − Φm)·μ2,m − ϕm·σm,

where we have set Φm=Φ(δm/σm) and ϕm=ϕ(δm/σm). This establishes the claim.
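As an informal numerical cross-check of the closed form just derived (not part of the manuscript), one can compare it against a Monte Carlo estimate of E[min{X1,m, X2,m}] for one hypothetical grade with independent normal yields; the means and standard deviations below are arbitrary.

```python
import math, random

random.seed(1)

# Closed-form expected grade-m output from Lemma 1 for hypothetical yields
# X1 ~ N(mu1, s1^2) and X2 ~ N(mu2, s2^2).
mu1, s1, mu2, s2 = 50.0, 5.0, 55.0, 6.0
delta = mu2 - mu1
sigma = math.sqrt(s1 ** 2 + s2 ** 2)
d = delta / sigma
Phi = 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))
phi = math.exp(-d * d / 2.0) / math.sqrt(2.0 * math.pi)
closed_form = Phi * mu1 + (1.0 - Phi) * mu2 - phi * sigma

# Monte Carlo estimate of E[min{X1, X2}]
mc = sum(min(random.gauss(mu1, s1), random.gauss(mu2, s2))
         for _ in range(200_000)) / 200_000
```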

Proof of Lemma 2. Let m ∈ M be a given matching grade and let x = (x1,x2) ∈ ℝ₊² be an arbitrary input vector. For any i ∈ I, the number of type-i components in grade m follows a normal distribution with cdf Gi,m(·|xi). The cdf of the random grade-m output Ym = min{X1,m, X2,m} becomes ℙ(Ym ≤ ξ|x) = G1,m(ξ|x1) + G2,m(ξ|x2) − G1,m(ξ|x1)·G2,m(ξ|x2), ξ ≥ 0.

Consider now an increase of type-i input from xi to x̂i > xi, and note that ∂Gi,m(ξ|xi)/∂xi = −((pi,m + ξ/xi)/(2σi,m))·ϕ((ξ − μi,m)/σi,m) < 0, for all ξ ≥ 0 and xi > 0.

This implies first-order stochastic dominance (FOSD) of the type-i quantity distribution after the quantity increase, Gi,m(ξ|x̂i) < Gi,m(ξ|xi), for all ξ ≥ 0. Thus, if we denote by (X̂1,m, X̂2,m) the grade-m yields under x̂, then (X1,m, X2,m) ≺FOSD (X̂1,m, X̂2,m), where ≺FOSD marks the (strict) FOSD order. But this implies that Ym = min{X1,m, X2,m} ≺FOSD min{X̂1,m, X̂2,m} = Ŷm, so that we can now compare the expected values, Fm(x) = E[Ym|x] < E[Ŷm|x̂] = Fm(x̂), establishing the claimed monotonicity.

Proof of Theorem 1. Let (q,c) ∈ ℝ₊₊ × ℝ₊₊². The Lagrangian associated with the cost-minimization problem (∗) is L(x;λ) = c1x1 + c2x2 − λ·(F(x) − q), where the adjoint variable λ ≥ 0 measures the “shadow cost” of relaxing the output-attainment constraint. The first-order necessary optimality conditions are therefore(24) ∂L(x;λ)/∂xi = ci − λ·∂F(x)/∂xi = 0,

for all i ∈ I (Zorich, 2004, p. 527ff). Since by La. 2 it is ∂F(x)/∂xi > 0, which in particular applies to input vectors x such that F(x) = q, the output-attainment constraint F(x) ≥ q must be binding, so that (by complementary slackness) necessarily λ > 0 and Eq. (7) must be satisfied if x = x*(q) is to be a solution candidate for the selective matching problem (∗). Substitution of Eqs. (5) and (6) (in La. 1) into Eq. (24) for i ∈ {1,2} yields the marginal rate of technical substitution, (∂F(x)/∂x1)/(∂F(x)/∂x2) = (Σm∈M ηm·∂Fm(x)/∂x1)/(Σm∈M ηm·∂Fm(x)/∂x2) = c1/c2 = γ,

where ∂Fm(x)/∂x1 = p1,m·Φm − δm·∂Φm/∂x1 − σm·∂ϕm/∂x1 − (p1,m(1−p1,m)/(2σm))·ϕm, and ∂Fm(x)/∂x2 = p2,m·(1−Φm) − δm·∂Φm/∂x2 − σm·∂ϕm/∂x2 − (p2,m(1−p2,m)/(2σm))·ϕm,

and δm·∂Φm/∂xi = −σm·∂ϕm/∂xi = −(δm/σm)·((−1)^(i+1)·pi,m + (δm/σm)·(pi,m(1−pi,m)/(2σm)))·ϕm, i ∈ I.

As a result, ∂Fm(x)/∂x1 = p1,m·(Φm − ((1−p1,m)/(2σm))·ϕm) and ∂Fm(x)/∂x2 = p2,m·(1 − Φm − ((1−p2,m)/(2σm))·ϕm),

which yields Eq. (8) for x=x*(q). This concludes our proof.

Proof of Lemma 3. Fix any input bundle x = (x1,x2) ∈ ℝ₊². By Jensen's inequality, it is E[Ym|x] = E[min{X1,m, X2,m}|x] ≤ min{E[X1,m|x1], E[X2,m|x2]}, m ∈ M.

Hence, using Eq. (5) and the linearity of the expectation operator, one obtains F(x) = E[Q|x] = Σm∈M ηm·E[Ym|x] ≤ Σm∈M ηm·min{p1,m·x1, p2,m·x2} = F̄(x),

which concludes our proof.

Proof of Lemma 4. Let x = (x1,x2) ∈ ℝ₊₊². Consider first the case where δm(x) = p2,m·x2 − p1,m·x1 ≥ 0, for some m ∈ M. Then by La. 1 it is 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) = σm(x)·(ϕ(δm(x)/σm(x)) − (δm(x)/σm(x))·(1 − Φ(δm(x)/σm(x)))).

On the other hand, since (d/dξ)(ϕ(ξ) − ξ·(1 − Φ(ξ))) = −(1 − Φ(ξ)) < 0, for all ξ, the expression on the right-hand side of the preceding equality is downward-sloping in δm/σm ≥ 0, so(25) δm(x) ≥ 0 ⟹ 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) ≤ σm(x)·(1/√(2π)) = σm(x)·ϕ(0).

A similar reasoning applies for δm(x) ≤ 0, in which case La. 1 yields 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) = σm(x)·(ϕ(δm(x)/σm(x)) + (δm(x)/σm(x))·Φ(δm(x)/σm(x))).

Thus, taking account of the fact that (d/dξ)(ϕ(ξ) + ξ·Φ(ξ)) = Φ(ξ) > 0, for all ξ, the expression on the right-hand side of the preceding equality must be upward-sloping in δm/σm ≤ 0, whence(26) δm(x) ≤ 0 ⟹ 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) ≤ σm(x)·(1/√(2π)) = σm(x)·ϕ(0).

Combining Eqs. (25) and (26), and repeating this exercise for all grades, yields that in fact(27) 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) ≤ σm(x)/√(2π), m ∈ M.

Using the definitions of the weighted output F and the (equally weighted) envelope output F̄, one obtains by mere summation that 0 ≤ F̄(x) − F(x) ≤ σ(x)/√(2π), where σ(x) = Σm∈M σm(x). The (maximum) approximation error is therefore at most linear in the aggregate standard deviation σ(x), so R(x)/σ(x) ≤ 1/√(2π), which concludes our proof.

Proof of Lemma 5. To exclude trivialities, we restrict attention to positive inputs, so x = (x1,x2) ∈ ℝ₊₊². By Eq. (27) we have 0 ≤ min{p1,m·x1, p2,m·x2} − Fm(x) ≤ σm(x)/√(2π), m ∈ M,

so that Fm(x)/σm(x) ≥ min{p1,m·x1, p2,m·x2}/σm(x) − 1/√(2π) ≥ √(min{p1,m·x1, p2,m·x2}/(2·√(p1,m·p2,m))) − 1/√(2π), m ∈ M.

On the other hand, if we set pmin = min(i,m)∈I×M {pi,m} > 0, then √(min{p1,m·x1, p2,m·x2}/(2·√(p1,m·p2,m))) ≥ √((pmin/2)·(min{x1,x2}/(1−pmin))), m ∈ M,

where the right-hand side of the preceding inequality does not depend on the grade m. Hence, it is easy to verify that min{x1,x2} ≥ (4/π)·((1−pmin)/pmin) ⟹ Fm(x)/σm(x) ≥ √((pmin/8)·(min{x1,x2}/(1−pmin))), m ∈ M.

The latter implies: min{x1,x2} ≥ (4/π)·((1−pmin)/pmin) ⟹ F(x)/σ(x) ≥ √((pmin/8)·(min{x1,x2}/(1−pmin))).

Combining this with La. 4, the relative error decreases in the smallest input, r(x) = (R(x)/σ(x))/(F(x)/σ(x)) ≤ 2·√((1−pmin)/(π·pmin·min{x1,x2})),

as long as the scale of the application is sufficiently large, so min{x1,x2} ≥ (4/π)·((1−pmin)/pmin).

Thus, for any ε ∈ (0,1): min{x1,x2} ≥ x̄(ε) = (4/π)·((1−pmin)/pmin)·max{1, 1/ε²} = (4/(π·ε²))·((1−pmin)/pmin) ⟹ r(x) ≤ ε,

as claimed. But this means that min{x1,x2} ≥ x̄(ε) ⟹ F̄(x1,x2) ≤ (1+ε)·F(x1,x2),

which in turn implies the minimal output objective, q̄(ε) = F̄(x̄(ε), x̄(ε))/(1+ε) = (Σℓ=1L η̂ℓ·min{ρℓ, 1})·x̄(ε)/(1+ε).

Finally, we note that x̄(ε) is O(ε−2), so that for small ε > 0 the minimal envelope output q̄(ε) is also O(ε−2), as claimed. This concludes the proof.

Proof of Theorem 2. As a positive linear combination of concave functions, the envelope output, F̄(x) = Σℓ=1L η̂ℓ·min{ρℓ·x1, x2}, x = (x1,x2) ∈ ℝ₊²,

is also a concave function. Thus, for any q > 0 the upper contour set, U(q) = {x ∈ ℝ₊²: F̄(x) ≥ q}, is convex. In addition, it is also possible to achieve at least as much envelope output with more inputs, so x ∈ U(q) ⟹ x + ℝ₊² ⊆ U(q).

As a result, the boundary of the upper contour set U(q) is a polygonal chain (i.e., a connected series of line segments), P(q) = {(x1,x2) ∈ ℝ₊²: x1 ∈ [q·x̄1L, q·x̄11], x2 = q·maxℓ∈{1,…,L}{x̄2ℓ − sℓ·(x1/q − x̄1ℓ)}},

in the sense that U(q) = P(q) + ℝ₊², where the vertices x̄ℓ = (x̄1ℓ, x̄2ℓ) of the polygonal chain P(1) are implied by the conditions(28) ρℓ·x̄1ℓ = x̄2ℓ and F̄(x̄ℓ) = 1,

for ℓ ∈ {1,…,L}; the edges of P(1) have slopes of absolute values s1 = 0 and sℓ = (x̄2ℓ − x̄2ℓ−1)/(x̄1ℓ−1 − x̄1ℓ), for all ℓ ∈ {2,…,L}. The vertex conditions in Eq. (28) imply Σk=1ℓ η̂k·ρk·x̄1ℓ + Σk=ℓ+1L η̂k·x̄2ℓ = x̄1ℓ·(Σk=1L η̂k·min{ρk, ρℓ}) = 1, ℓ ∈ {1,…,L},

taking account of the fact that the likelihood ratios ρℓ are size-ordered (i.e., ρ1 < ⋯ < ρL) by construction. Hence, the coordinates of the vertices of P(1) are fully described by x̄1ℓ = (Σk=1L η̂k·min{ρk, ρℓ})−1 and x̄2ℓ = ρℓ·x̄1ℓ,

for all ℓ ∈ {1,…,L}. By the Hahn-Banach theorem (Berge, 1963, p. 157) there exists a hyperplane separating the convex set {x = (x1,x2) ∈ ℝ₊²: c1·x1 + c2·x2 < C}, for some positive constant C, from the convex set U(q) = q·U(1), as long as they are disjoint, i.e., as long as c1·x1 + c2·x2 < C ⟹ F̄(x1,x2) < q.

In particular, if c1·x̄1ℓ + c2·x̄2ℓ = C̄(1), then necessarily F̄(x̄ℓ) = 1. The aforementioned separation for C = C̄(1) occurs at the vertex x̄ℓ*, with ℓ* = max{ℓ ∈ {1,…,L}: sℓ < c1/c2 = γ},

since that avoids an intersection of the separating hyperplane with the budget set {x = (x1,x2) ∈ ℝ₊²: c1·x1 + c2·x2 < C̄(1)}. Using the fact that the cost objective and the envelope output are each homogeneous of degree 1, we obtain that the input vector x̄*(q) = q·x̄ℓ* solves the envelope optimization problem (∗∗).
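The vertex coordinates derived in the proof of Thm. 2 are easy to evaluate; the sketch below does so for a hypothetical instance (the weights η̂ and size-ordered likelihood ratios ρ are placeholders) and confirms that every vertex lies on the unit envelope-output isoquant, as required by Eq. (28).

```python
# Vertices of the unit polygonal chain P(1): x1_l = 1/sum_k eta_k*min(rho_k, rho_l).
eta_hat = [0.5, 0.3, 0.2]
rho = [0.5, 1.0, 2.0]      # likelihood ratios, rho_1 < ... < rho_L
L = len(rho)

x1 = [1.0 / sum(eta_hat[k] * min(rho[k], rho[l]) for k in range(L))
      for l in range(L)]
x2 = [rho[l] * x1[l] for l in range(L)]

def F_bar(v1, v2):
    # envelope output of Thm. 2
    return sum(eta_hat[l] * min(rho[l] * v1, v2) for l in range(L))
```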

Proof of Lemma 6. Consider any fixed target output q > 0 and let pmin = min(i,m)∈I×M {pi,m} denote the minimal matching probability (which is positive). By La. 5, the right-hand side of Eq. (19) is such that q ≥ q̄(ε) = (4/(π·ε²))·((1−pmin)/pmin) ⟹ q/F(x̄*(q)) − 1 ≤ (1+ε)·q/F̄(x̄*(q)) − 1 = ε,

where we have used the fact that F̄(x̄*(q)) = q, as pointed out in footnote 3. Thus, by eliminating ε, the relative cost overage is such that (Ĉ(q) − C(q))/C(q) ≤ √((1−pmin)/((π/4)·pmin))·(1/√q),

which corresponds to Eq. (20). This concludes our proof.

Acknowledgements

The author would like to thank several anonymous referees, as well as participants of the 2022 CORS/INFORMS International Conference in Vancouver, Canada, and the 2022 EURO Conference in Espoo, Finland, for helpful suggestions.

Notes on Contributor

Thomas A. Weber is full professor of Operations, Economics and Strategy at the Swiss Federal Institute of Technology in Lausanne (EPFL). Earlier he was a faculty member in the Economics and Finance Group of the Department of Management Science and Engineering at Stanford University. Prof. Weber is an Ingénieur des Arts et Manufactures (Ecole Centrale Paris) and a Diplom-Ingenieur in Electrical Engineering (Technical University Aachen). He holds master’s degrees in Technology and Policy and Electrical Engineering and Computer Science from MIT, and a PhD in Applied Economics and Managerial Science from the Wharton School of the University of Pennsylvania. He was a visiting faculty in Economics at Cambridge University and in Mathematics at Moscow State University. Between 1998 and 2002, he was a senior consultant with the Boston Consulting Group. His current research interests include the economics of information and uncertainty, robust optimization, and strategy. Prof. Weber’s more than 100 scholarly publications have appeared, for example, in American Economic Journal: Microeconomics, Decision Support Systems, Economics Letters, Economic Theory, Health Care Management Science, Information Systems Research, Journal of Economic Dynamics and Control, Journal of Environmental Economics and Management, Journal of Management Information Systems, Journal of Mathematical Economics, Journal of Optimization Theory and Applications, Journal of Regulatory Economics, Management Science, Mathematical Reviews, Medical Decision Making, Operations Research, Operations Research Letters, Optimal Control: Applications and Methods, Physical Review E, Sustainable Production and Consumption, as well as Theory and Decision. He has been associate editor for Management Science and is the author of Optimal Control Theory with Applications in Economics (MIT Press, 2011).

Notes

1 The hairspring features a stiffness (described by ϰ [in Nm/rad]) while the balance wheel is characterized by a moment of inertia (given by I [in kg·m²]). Their ratio determines the oscillation period, T = 2π·√(I/ϰ).

2 As established in the proof of La. 5, a (very conservative) minimal envelope output is obtained by setting q̄(ε) = (Σℓ=1L η̂ℓ·min{ρℓ, 1})·x̄(ε)/(1+ε), where x̄(ε) = (4/π)·(1/ε²)·((1−pmin)/pmin) is the corresponding minimal input, with pmin = min(i,m)∈I×M {pi,m} > 0 being the smallest grade-achievement probability across all part types. For example, given pmin = 10% and ε = 10%, the minimum input becomes x̄(ε) = (60)²/π ≈ 1,146 units.

3 That x̂*(q) = [q/F(x̄*(q))]·x̄*(q) is feasible in (∗) can be seen as follows. By Eq. (5) the production function F satisfies F(αx) ≥ α·F(x), for any α ≥ 1. As a consequence, since q/F(x̄*(q)) ≥ 1, F(x̂*(q)) = F([q/F(x̄*(q))]·x̄*(q)) ≥ [q/F(x̄*(q))]·F(x̄*(q)) = q. Thus, not only is x̂*(q) feasible in (∗), but it is also nearly output-efficient (the output-attainment constraint F(x̂*(q)) ≥ q is close to binding).

4 Perfect matching happens when there is only a single matching class (i.e., for M=1); then all parts are matched. Naturally, this indiscriminate approach propagates lax input tolerances into the assembly which drives up other costs (e.g., in terms of rework or rejected output parts for a failure to satisfy tight output tolerances).

5 See, e.g., the deviation-per-day (dpd) precision of a mechanical watch movement in Fig. 5, which increases by a factor of 2 (from 35s to 17.5s) when doubling the matching classes (from M=50 to M=100).

Figure 1: Sorting x1=10 type-1 and x2=11 type-2 parts with characteristics (s1,s2) in S=S1×S2; random matching within M=5 matching classes Cm=C1,m×C2,m, for m in M={1,…,M}; three type-1 components and four type-2 components remain unmatched.

Figure 2: Selective assembly system, featuring the sorting of two types of inputs into M≥2 matching classes according to the part characteristics, random matching across corresponding classes, and subsequent assembly. The scalar output Q is a random variable.

Figure 3: The solution x*(q) to the selective matching problem (∗) can be approximated by scaling the solution x̄*(q) to the envelope optimization problem (∗∗). This yields the “approximate selective matching solution,” x̂*(q)=[q/F(x̄*(q))]·x̄*(q), feasible in (∗); cf. Exs. 1–3.

Figure 4: (a) Relative error r(x̄*(q)), for I=2 and I=4, as a function of q∈[500,10000]; (b) Ratio of relative errors, for I=4 to I=2, as a function of q∈[500,10000].

Figure 5: Smallest (even) number of matching classes to guarantee a maximum daily deviation, dpd∈{1,…,60} [s/d]. COSC chronometer standard is achieved for dpd≤5 [s/d].

Figure 6: Optimal and approximately optimal cost (a) and input (b) as a solution to (∗) and (∗∗), for a target quantity q∈{500,1000,2000} units and for an output tolerance dpd≤60 [s/d].
Supplemental material

Reproducibility Report

References

  • Asha, A., S. Kannan, and V. Jayabalan (2008). Optimization of clearance variation in selective assembly for components with multiple characteristics. International Journal of Advanced Manufacturing Technology 38(9-10), 1026–1044.
  • Berge, C. (1963). Topological Spaces. Edinburgh, UK: Oliver and Boyd.
  • Berry, A. (1941). The accuracy of the Gaussian approximation to the sum of independent variates. Transactions of the American Mathematical Society 49(1), 122–136.
  • Buckingham, E. (1921). Principles of Interchangeable Manufacturing. New York, NY: Industrial Press.
  • Caputo, A. and G. Di Salvo (2019). An economic decision model for selective assembly. International Journal of Production Economics 207, 56–69.
  • Coullard, C., A. Gamble, and P. Jones (1998). Matching problems in selective assembly operations. Annals of Operations Research 76, 95–107.
  • Duenyas, I., M. Keblis, and S. Pollock (1997). Dynamic type mating. Management Science 43(6), 751–763.
  • Esseen, C.-G. (1942). On the Liapunoff limit of error in the theory of probability. Arkiv för Matematik, Astronomi och Fysik A28(9), 1–19.
  • Esseen, C.-G. (1956). A moment inequality with an application to the central limit theorem. Skandinavisk Aktuarietidskrift 39, 160–170.
  • Kannan, S. and G. Pandian (2021). A new selective assembly model for achieving specified clearance in radial assembly. Materials Today: Proceedings 46(17), 7411–7417.
  • Kwon, H.-M., K.-J. Kim, and M. Chandra (1999). An economic selective assembly procedure for two mating components with equal variance. Naval Research Logistics 46(7), 809–821.
  • Leontief, W. (1941). The Structure of American Economy 1919–1929: An Empirical Application of Equilibrium Analysis. Cambridge, MA: Harvard University Press.
  • Mansoor, E. (1961). Selective assembly – Its analysis and applications. International Journal of Production Research 1(1), 13–24.
  • Matsuura, S. (2011). Optimal partitioning of probability distributions under general convex loss functions in selective assembly. IIE Transactions 40(9), 1545–1560.
  • Mease, D., V. Nair, and A. Sudjianto (2004). Selective assembly in manufacturing: Statistical issues and optimal binning strategies. Technometrics 46(2), 165–175.
  • Pugh, G. (1986). Partitioning for selective assembly. Computers and Industrial Engineering 11(1-4), 175–179.
  • Rezaei Aderiani, A., K. Wärmefjord, R. Söderberg, and L. Lindkvist (2019). Developing a selective assembly technique for sheet metal assemblies. International Journal of Production Research 57(22), 7174–7188.
  • Shevtsova, I. (2011). On the absolute constants in the Berry–Esseen type inequalities for identically distributed summands. Working Paper, Lomonosov Moscow State University, Moscow, Russia. arXiv:1111.6554v1.
  • Tan, M. and C. Wu (2012). Generalized selective assembly. IIE Transactions 44(1), 27–42.
  • Weber, T. (2021). Minimum-error classes for matching parts. Operations Research Letters 49(1), 106–112.
  • Weber, T. (2022). Optimal matching of random parts. Journal of Mathematical Economics 101, 102688:1–14.
  • Zorich, V. (2004). Mathematical Analysis I. New York, NY: Springer.