I am trying to find weights that stay as close as possible to their mean. My current problem: with the SLSQP solver I cannot find weights that fully satisfy the targets. Is there another solver that could handle my problem, or any mathematical advice? Please help me.
My current mathematical formulation is:

**min** ∑|x - mean(x)| **s.t.** Ax - b = 0, x >= 0, **bounds** 0.2 < x < 0.5
Data:

```
nRow#   Variable1    Variable2       Variable3
1       3582.00      233445193.00    559090945.00
2       3394.00      217344811.00    496500751.00
3       3356.00      237746918.00    493639029.00
4       3256.00      219204892.00    461547877.00
5       3415.00      225272825.00    501057960.00
6       3505.00      242819442.00    505073223.00
7       3442.00      215258725.00    490458632.00
8       3381.00      227681178.00    503102998.00
9       3392.00      215189377.00    487026744.00

        w1           w2              w3
Target  8531.00      429386951.00    1079115532.00
```
Problem: find the weights that are closest to their mean.

Python code:
```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[3582.000000, 3394.000000, 3356.000000, 3256.000000, 3415.000000, 3505.000000, 3442.000000, 3381.000000, 3392.000000],
              [233445193.000000, 217344811.000000, 237746918.000000, 219204892.000000, 225272825.000000, 242819442.000000, 215258725.000000, 227681178.000000, 215189377.000000],
              [559090945.000000, 496500751.000000, 493639029.000000, 461547877.000000, 501057960.000000, 505073223.000000, 490458632.000000, 503102998.000000, 487026744.000000]])
b = np.array([8531, 1079115532, 429386951])
n = 9

def fsolveMin(A, b, n):
    # The equality constraints: Ax = b
    def cons_eq(x):
        return A.dot(x) - b

    cons = ({'type': 'eq', 'fun': cons_eq},
            {'type': 'ineq', 'fun': lambda x: x[0]})

    # The objective: total absolute deviation of the coefficients from their mean
    def fn(x):
        return np.sum(np.abs(x - np.mean(x)))

    # Initialize the coefficients randomly
    z0 = abs(np.random.randn(len(A[1, :])))

    # Set up bounds
    # bnds = [(0, None)] * n

    # Solve the problem
    sol = minimize(fn, x0=z0, constraints=cons, method='SLSQP', options={'disp': True})

    # expected 35%
    print(sol.x)
    print(A.dot(sol.x))
    # print(fn(sol.x))

print(str(fsolveMin(A, b, n)) + "\n\n")
```
Answer:

To give you an impression of how bloated this becomes with a low-level tool like scipy's linprog (because we have to mimic its standard form):
The basic idea is (a sketch of the resulting LP follows this list):

- add one auxiliary variable for the mean, as suggested by Erwin
- add n auxiliary variables to handle the absolute values, as described in the lpsolve docs
- (I use n more auxiliary variables as temporaries for `x - mean`)
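Written out (a sketch in my own notation, matching the demo code further below), the reformulated LP with these auxiliary variables looks roughly like this, where `y` stands for the mean, `z` for the temporaries `x - mean`, and `t` for the absolute values:

```latex
% Sketch of the reformulated LP; bounds 0 <= x <= 1 as in the demo code below.
\begin{aligned}
\min_{x,\,y,\,z,\,t} \quad & \sum_{i=1}^{n} t_i \\
\text{s.t.} \quad & A x = b \\
& y = \tfrac{1}{n} \textstyle\sum_{i=1}^{n} x_i \\
& z_i = x_i - y,                    && i = 1,\dots,n \\
& z_i \le t_i, \quad -z_i \le t_i,  && i = 1,\dots,n \\
& 0 \le x_i \le 1, \quad t_i \ge 0, && i = 1,\dots,n
\end{aligned}
```

The linprog code below assembles these blocks (equalities, inequalities, bounds) in matrix form.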
With that, the example below works.
For your data you will need to take care of the bounds: make sure the problem stays feasible. The problem itself is rather unstable because `Ax = b` is posed as a hard constraint; usually you would minimize some norm / least-squares of this error instead (no longer an LP; QP/SOCP) and add that error term to the objective! At some point it may also be necessary to switch the solver from `method='simplex'` to `method='interior-point'` (only available from scipy 1.0 on).
Alternative:

With cvxpy the formulation becomes much easier (both variants mentioned), and you get a quite good solver (ECOS) for free.
Code:
```python
import numpy as np
import scipy.optimize as spo

np.set_printoptions(linewidth=120)
np.random.seed(1)

""" Create random data """
M, N = 2, 3
A = np.random.random(size=(M, N))
x_hidden = np.random.random(size=N)
b = A.dot(x_hidden)  # target
n = N

print('Original A')
print(A)
print('hidden x: ', x_hidden)
print('target: ', b)

""" Optimize as LP """
def solve(A, b, n):
    print('Reformulation')
    am, an = A.shape

    # Introduce aux-vars
    # 1: y = mean
    # n: z = x - mean
    # n: abs(z)
    n_plus_aux_vars = 3*n + 1

    # Equality constraint: y = mean
    eq_mean_A = np.zeros(n_plus_aux_vars)
    eq_mean_A[:n] = 1. / n
    eq_mean_A[n] = -1.
    eq_mean_b = np.array([0])

    print('y = mean A:')
    print(eq_mean_A)
    print('y = mean b:')
    print(eq_mean_b)

    # Equality constraints: Ax = b
    eq_A = np.zeros((am, n_plus_aux_vars))
    eq_A[:, :n] = A[:, :n]
    eq_b = np.copy(b)

    print('Ax=b A:')
    print(eq_A)
    print('Ax=b b:')
    print(eq_b)

    # Equality constraints: z = x - mean
    eq_mean_A_z = np.hstack([-np.eye(n), np.ones((n, 1)), np.eye(n), np.zeros((n, n))])
    eq_mean_b_z = np.zeros(n)

    print('z = x - mean A:')
    print(eq_mean_A_z)
    print('z = x - mean b:')
    print(eq_mean_b_z)

    # Inequality constraints: absolute values -> x <= x' ; -x <= x'
    ineq_abs_0_A = np.hstack([np.zeros((n, n)), np.zeros((n, 1)), np.eye(n), -np.eye(n)])
    ineq_abs_0_b = np.zeros(n)

    ineq_abs_1_A = np.hstack([np.zeros((n, n)), np.zeros((n, 1)), -np.eye(n), -np.eye(n)])
    ineq_abs_1_b = np.zeros(n)

    # Bounds
    # REMARK: Do not touch anything besides the first bounds-row!
    bounds = [(0., 1.) for i in range(n)] + \
             [(None, None)] + \
             [(None, None) for i in range(n)] + \
             [(0, None) for i in range(n)]

    # Objective
    c = np.zeros(n_plus_aux_vars)
    c[-n:] = 1

    A_eq = np.vstack((eq_mean_A, eq_A, eq_mean_A_z))
    b_eq = np.hstack([eq_mean_b, eq_b, eq_mean_b_z])
    A_ineq = np.vstack((ineq_abs_0_A, ineq_abs_1_A))
    b_ineq = np.hstack([ineq_abs_0_b, ineq_abs_1_b])

    print('solve...')
    result = spo.linprog(c, A_ineq, b_ineq, A_eq, b_eq, bounds=bounds, method='simplex')
    print(result)

    x = result.x[:n]
    print('x: ', x)
    print('residual Ax-b: ', A.dot(x) - b)
    print('mean: ', result.x[n])
    print('x - mean: ', x - result.x[n])
    print('l1-norm(x - mean) / objective: ', np.linalg.norm(x - result.x[n], 1))

solve(A, b, n)
```
Output:
```
Original A
[[  4.17022005e-01   7.20324493e-01   1.14374817e-04]
 [  3.02332573e-01   1.46755891e-01   9.23385948e-02]]
hidden x:  [ 0.18626021  0.34556073  0.39676747]
target:  [ 0.32663584  0.14366255]
Reformulation
y = mean A:
[ 0.33333333  0.33333333  0.33333333 -1.          0.          0.          0.          0.          0.          0.        ]
y = mean b:
[0]
Ax=b A:
[[  4.17022005e-01   7.20324493e-01   1.14374817e-04   0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  3.02332573e-01   1.46755891e-01   9.23385948e-02   0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00
    0.00000000e+00   0.00000000e+00   0.00000000e+00]]
Ax=b b:
[ 0.32663584  0.14366255]
z = x - mean A:
[[-1. -0. -0.  1.  1.  0.  0.  0.  0.  0.]
 [-0. -1. -0.  1.  0.  1.  0.  0.  0.  0.]
 [-0. -0. -1.  1.  0.  0.  1.  0.  0.  0.]]
z = x - mean b:
[ 0.  0.  0.]
solve...
     fun: 0.078779576294411263
 message: 'Optimization terminated successfully.'
     nit: 10
   slack: array([ 0.07877958,  0.        ,  0.        ,  0.        ,  0.07877958,  0.        ,  0.76273076,  0.68395118,  0.72334097])
  status: 0
 success: True
       x: array([ 0.23726924,  0.31604882,  0.27665903,  0.27665903, -0.03938979,  0.03938979,  0.        ,  0.03938979,
        0.03938979,  0.        ])
x:  [ 0.23726924  0.31604882  0.27665903]
residual Ax-b:  [  5.55111512e-17   0.00000000e+00]
mean:  0.276659030053
x - mean:  [-0.03938979  0.03938979  0.        ]
l1-norm(x - mean) / objective:  0.0787795762944
```
Now for your original data, things get tricky! You will need to:

- scale your data, since the magnitudes of the variables will hurt the solver
- depending on your task, analyze the effect of the scaling / invert it (see the sketch after this list)
- use the interior-point solver
- take care with the bounds
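As a side note on the scaling bullets above (my own minimal sketch, not part of the original answer): if you scale the columns of `A` rather than dividing the whole system by one constant, the solver works in substituted variables and the solution has to be un-scaled afterwards. For brevity, a plain least-squares solve on a 3-variable cut of the data stands in for the linprog call here:

```python
import numpy as np

# Hypothetical illustration of column scaling and its inversion.
# Substituting x = x_scaled / col_scale keeps  A_scaled @ x_scaled = b
# equivalent to  A @ x = b, but gives columns of comparable magnitude.
A = np.array([[3582., 3394., 3356.],
              [233445193., 217344811., 237746918.],
              [559090945., 496500751., 493639029.]])
b = np.array([8531., 1079115532., 429386951.])

col_scale = np.abs(A).max(axis=0)          # one positive factor per column
A_scaled = A / col_scale                   # same as A @ diag(1 / col_scale)

x_scaled, *_ = np.linalg.lstsq(A_scaled, b, rcond=None)  # stand-in for the LP solve
x = x_scaled / col_scale                   # undo the substitution on the solution

print('residual A@x - b:', A @ x - b)
```

The global `A /= 10000.; b /= 10000.` in the example below is simpler: it only rescales the rows of the equality system, so the solution itself is unchanged and nothing has to be inverted.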
Example:
```python
A = np.array([[3582.000000, 3394.000000, 3356.000000, 3256.000000, 3415.000000, 3505.000000, 3442.000000, 3381.000000, 3392.000000],
              [233445193.000000, 217344811.000000, 237746918.000000, 219204892.000000, 225272825.000000, 242819442.000000, 215258725.000000, 227681178.000000, 215189377.000000],
              [559090945.000000, 496500751.000000, 493639029.000000, 461547877.000000, 501057960.000000, 505073223.000000, 490458632.000000, 503102998.000000, 487026744.000000]])
b = np.array([8531., 1079115532., 429386951.])

A /= 10000.  # scaling
b /= 10000.  # scaling

bounds = [(-50., 50.) for i in range(n)] + \
...

result = spo.linprog(c, A_ineq, b_ineq, A_eq, b_eq, bounds=bounds, method='interior-point')
```
Output:
```
solve...
     con: array([  3.98410760e-09,   1.18067724e-08,   8.12879938e-04,   1.75969041e-03,  -3.93853838e-09,  -3.96305566e-09,
        -4.10043555e-09,  -3.94957667e-09,  -3.88362764e-09,  -3.89452381e-09,  -3.95134592e-09,  -3.92182287e-09,
        -3.85762178e-09])
     fun: 52.742900697626389
 message: 'Optimization terminated successfully.'
     nit: 8
   slack: array([  5.13245265e+01,   1.89309145e-08,   1.83429094e-09,   4.28687782e-09,   1.03726911e-08,   2.77000474e-09,
         1.41837413e+00,   6.75769654e-09,   8.65285462e-10,   2.78501844e-09,   3.09591539e-09,   5.27429006e+01,
         1.30944103e-08,   5.32994799e-09,   3.15369669e-08,   2.51943821e-09,   7.54848797e-09,   3.22510447e-09])
  status: 0
 success: True
       x: array([ -2.51938304e+01,   4.68432810e-01,   2.68398831e+01,   4.68432822e-01,   4.68432815e-01,   4.68432832e-01,
        -2.40754247e-01,   4.68432818e-01,   4.68432819e-01,   4.68432822e-01,  -2.56622633e+01,  -7.91749954e-09,
         2.63714503e+01,   4.40376624e-09,  -2.52137156e-09,   1.43834811e-08,  -7.09187065e-01,   3.95395716e-10,
         1.17990950e-09,   2.56622633e+01,   1.10134149e-08,   2.63714503e+01,   8.69064406e-09,   7.85131955e-09,
         1.71534858e-08,   7.09187068e-01,   7.15309226e-09,   2.04519496e-09])
x:  [-25.19383044   0.46843281  26.83988313   0.46843282   0.46843282   0.46843283  -0.24075425   0.46843282   0.46843282]
residual Ax-b:  [ -1.18067724e-08  -8.12879938e-04  -1.75969041e-03]
mean:  0.468432821891
x - mean:  [ -2.56622633e+01  -1.18805552e-08   2.63714503e+01   4.54189575e-10  -6.40499920e-09   1.04889573e-08
  -7.09187069e-01  -3.52642715e-09  -2.67771227e-09]
l1-norm(x - mean) / objective:  52.7429006758
```
Edit
Here is an SOCP-based least-squares (soft-constraint) approach, which I would recommend in terms of numerical stability. It can and should be tuned to your needs, and it is implemented with the modelling tool cvxpy and the ECOS solver mentioned above.
The basic idea is:

- instead of: `min(l1-norm(x - mean(x))) s.t. Ax = b`
- solve: `min(l2-norm(Ax - b) + c * l1-norm(x - mean(x)))`
- where `c` is a nonnegative trade-off parameter
  - small `c`: `Ax = b` is more important
  - large `c`: `x - mean(x)` is more important

Example code and output for your data, bounds of -50 / 50 and `c=1e-3`:
```python
import numpy as np
import cvxpy as cvx

""" DATA """
A = np.array([[3582.000000, 3394.000000, 3356.000000, 3256.000000, 3415.000000, 3505.000000, 3442.000000, 3381.000000, 3392.000000],
              [233445193.000000, 217344811.000000, 237746918.000000, 219204892.000000, 225272825.000000, 242819442.000000, 215258725.000000, 227681178.000000, 215189377.000000],
              [559090945.000000, 496500751.000000, 493639029.000000, 461547877.000000, 501057960.000000, 505073223.000000, 490458632.000000, 503102998.000000, 487026744.000000]])
b = np.array([8531., 1079115532., 429386951.])
n = 9
# A /= 10000.  # scaling would be a good idea
# b /= 10000.

""" SOCP-based least-squares approach """
def solve(A, b, n, c=1e-1):
    x = cvx.Variable(n)
    y = cvx.Variable(1)              # mean

    lower_bounds = np.zeros(n) - 50  # -50
    upper_bounds = np.zeros(n) + 50  #  50

    constraints = []
    constraints.append(x >= lower_bounds)
    constraints.append(x <= upper_bounds)
    constraints.append(y == cvx.sum_entries(x) / n)

    objective = cvx.Minimize(cvx.norm(A*x - b, 2) + c * cvx.norm(x - y, 1))

    problem = cvx.Problem(objective, constraints)
    problem.solve(solver=cvx.ECOS, verbose=True)

    print('Objective: ', problem.value)
    print('x: ', x.T.value)
    print('mean: ', y.value)
    print('Ax-b: ')
    print((A*x - b).value)
    print('x - mean: ', (x - y).T.value)

solve(A, b, n)
```
Output:
```
ECOS 2.0.4 - (C) embotech GmbH, Zurich Switzerland, 2012-15. Web: www.embotech.com/ECOS

It     pcost       dcost      gap   pres   dres    k/t    mu     step   sigma     IR    |   BT
 0  +2.637e-17  -1.550e+06  +7e+08  1e-01  2e-04  1e+00  2e+07    ---    ---    1  1  - |  -  -
 1  -8.613e+04  -1.014e+05  +8e+06  1e-03  2e-06  2e+03  2e+05  0.9890  1e-04   2  1  1 |  0  0
 2  -1.287e+03  -1.464e+03  +1e+05  1e-05  9e-08  4e+01  3e+03  0.9872  1e-04   3  1  1 |  0  0
 3  +1.794e+02  +1.900e+02  +2e+03  2e-07  1e-07  1e+01  5e+01  0.9890  5e-03   5  3  4 |  0  0
 4  -1.388e+00  -6.826e-01  +1e+02  1e-08  7e-08  9e-01  3e+00  0.9458  4e-03   7  6  6 |  0  0
 5  +5.491e+00  +5.683e+00  +1e+01  1e-09  8e-09  2e-01  3e-01  0.9617  6e-02   1  1  1 |  0  0
 6  +6.480e+00  +6.505e+00  +1e+00  2e-10  5e-10  3e-02  4e-02  0.8928  2e-02   1  1  1 |  0  0
 7  +6.746e+00  +6.746e+00  +2e-02  3e-12  5e-10  5e-04  6e-04  0.9890  5e-03   1  0  0 |  0  0
 8  +6.759e+00  +6.759e+00  +3e-04  2e-12  2e-10  6e-06  7e-06  0.9890  1e-04   1  0  0 |  0  0
 9  +6.759e+00  +6.759e+00  +3e-06  2e-13  2e-10  6e-08  8e-08  0.9890  1e-04   2  0  0 |  0  0
10  +6.758e+00  +6.758e+00  +3e-08  5e-14  2e-10  7e-10  9e-10  0.9890  1e-04   1  0  0 |  0  0

OPTIMAL (within feastol=2.0e-10, reltol=4.7e-09, abstol=3.2e-08).
Runtime: 0.002901 seconds.

Objective:  6.757722879805085
x:  [[-18.09169736  -5.55768047  11.12130645  11.48355878  -1.13982006  12.4290884   -3.00165819  -1.05158589  -2.4468432 ]]
mean:  0.416074272576
Ax-b:
[[  2.17051777e-03]
 [  1.90734863e-06]
 [ -5.72204590e-06]]
x - mean:  [[-18.50777164  -5.97375474  10.70523218  11.0674845   -1.55589434  12.01301413  -3.41773246  -1.46766016  -2.86291747]]
```
This approach will always output a feasible solution (for our task), and based on the observed residuals you can then decide whether it works for you.

As you observed, a lower bound of 0 is deadly in all formulations (look at the difference in magnitudes within your data!).

Here a lower bound of 0 yields a solution with a much higher residual error.
For example, `c=1e-7` with bounds = `0 / 15`:
Output:
```
Objective:  785913288.2410747
x:  [[ -5.57966858e-08  -4.74997454e-08   1.56066068e+00   1.68021234e-07  -3.55602958e-08   1.75340641e-06
   -4.69609562e-08  -3.10216680e-08  -4.39482554e-08]]
mean:  0.173406926909
Ax-b:
[[ -3.29341696e+03]
 [ -7.08072860e+08]
 [  3.41016903e+08]]
x - mean:  [[-0.17340698 -0.17340697  1.38725375 -0.17340676 -0.17340696 -0.17340517 -0.17340697 -0.17340696 -0.17340697]]
```