# Cone Programming Cheat Sheet

I’m pretty excited about Afonso’s latest research developments (namely, this and that), and I’ve been thinking with Jesse Peterson about various extensions, but we first wanted to sort out the basics of linear and semidefinite programming. Jesse typed up some notes and I’m posting them here for easy reference:

Many optimization problems can be viewed as a special case of cone programming. Given a closed convex cone $K$ in $\mathbb{R}^n$, we define the dual cone as

$K^*=\{y : \langle x,y\rangle\geq 0~\forall x\in K\}.$
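For finitely generated cones, this definition is easy to check numerically. Here is a minimal sketch (the generator matrix $G$ below is made up for illustration): if $K=\{Gv : v\geq 0\}$ is the cone generated by the columns of $G$, then $y\in K^*$ exactly when $G^\top y\geq 0$ componentwise.

```python
import numpy as np

def in_dual_cone(y, G, tol=1e-9):
    """Check y in K* for K = {G v : v >= 0}.
    Every x in K has the form G v with v >= 0, and <G v, y> = <v, G^T y>,
    so y lies in K* exactly when G^T y >= 0 componentwise."""
    return bool(np.all(G.T @ y >= -tol))

# The positive orthant in R^2 is generated by the standard basis,
# and this check recovers its self-duality.
G = np.eye(2)
print(in_dual_cone(np.array([1.0, 2.0]), G))   # in the orthant -> True
print(in_dual_cone(np.array([1.0, -1.0]), G))  # negative coordinate -> False
```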

Examples: The positive orthant and the positive semidefinite cone are both self-dual, and the dual of a subspace is its orthogonal complement. Throughout we assume we have $c\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, a closed convex cone $K\subseteq\mathbb{R}^n$, a closed convex cone $L\subseteq\mathbb{R}^m$, and a linear operator $A\colon\mathbb{R}^n\rightarrow\mathbb{R}^m$. We then have the primal and dual programs

$\mbox{(P)} \quad \max \quad \langle c,x\rangle \quad \mbox{s.t.} \quad b-Ax \in L,\quad x \in K$

$\mbox{(D)} \quad \min \quad \langle b,y\rangle \quad \mbox{s.t.} \quad A^\top y-c\in K^*,\quad y\in L^*$

Notice that when $L$ and $K$ are both the positive orthant, this is a standard linear program. Further, considering the space of $n\times n$ real symmetric matrices as $\mathbb{R}^{n(n+1)/2}$, when $L=\{0\}$ and $K$ is the positive semidefinite cone, this is a standard semidefinite program. We consider several standard results (weak duality, strong duality, complementary slackness) in terms of the general cone program.

— Weak Duality —

Weak duality is a direct consequence of the definition of the dual cone:

Theorem. Let $x$ and $y$ be feasible points for (P) and (D), respectively. Then $\langle c,x\rangle \leq \langle b,y\rangle$.

Proof: Since $b-Ax \in L$ and $y\in L^*$, we have

$0\leq \langle b-Ax,y\rangle=\langle b,y\rangle-\langle A^\top y,x\rangle$.

That is, $\langle A^\top y,x\rangle\leq \langle b,y\rangle$. Similarly, $x \in K$ and $A^\top y-c\in K^*$ implies $\langle c,x\rangle \leq \langle A^\top y,x\rangle$. Together, we have $\langle c,x\rangle \leq \langle A^\top y,x\rangle \leq \langle b,y\rangle$.     $\Box$

In the special case of linear programming, it is easy to see how the dual program simply minimizes the upper bounds on $\langle c,x\rangle$ which are provided by the primal constraints. In this way, weak duality should probably be viewed as the motivation for the definition rather than a theorem which follows from the definition.
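To make the linear programming case concrete, here is a minimal numerical sketch (the data $A$, $b$, $c$ and the feasible points are made up for illustration): with $K$ and $L$ both the positive orthant, (P) reads $\max\ \langle c,x\rangle$ s.t. $Ax\leq b$, $x\geq 0$, and (D) reads $\min\ \langle b,y\rangle$ s.t. $A^\top y\geq c$, $y\geq 0$, and any feasible pair obeys the weak duality inequality.

```python
import numpy as np

# Toy LP data (illustrative, not from the notes).
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

x = np.array([1.0, 1.0])   # primal feasible: x >= 0 and A x <= b
y = np.array([1.0, 0.0])   # dual feasible:   y >= 0 and A^T y >= c

assert np.all(x >= 0) and np.all(A @ x <= b)
assert np.all(y >= 0) and np.all(A.T @ y >= c)
assert c @ x <= b @ y      # weak duality: here 2 <= 4
```

The dual point $y$ certifies that no primal feasible point can do better than $\langle b,y\rangle=4$.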

— Strong Duality —

Strong duality posits that the optimal value of (P) equals that of (D), and is not nearly as immediate. While it appears difficult to geometrically intuit what is happening in the ambient spaces (it’s hard enough trying to visualize the adjoint of $A$), if we lift to higher dimensions, the result follows directly from Farkas’ Lemma. A simplified and intuitive version of this lemma states that every vector is either a member of a closed convex cone or there exists a hyperplane separating the vector from the cone.

For example, suppose $b$ is the vector and $AK:=\{Ax:x\in K\}$ is the closed convex cone. Then Farkas’ Lemma states that either $b\in AK$, or there exists a vector $y$ such that $\langle y,b\rangle<0$ and $\langle y,Ax\rangle\geq0$ for every $x\in K$, that is, the hyperplane orthogonal to $y$ separates $AK$ from $b$. Notice that $\langle y,Ax\rangle=\langle A^\top y,x\rangle$, and so having this be nonnegative for every $x\in K$ is equivalent to having $A^\top y\in K^*$. This is summarized in the following:

Farkas’ Lemma. Suppose $AK$ is a closed convex cone. Then $Ax=b$, $x\in K$ is feasible if and only if every $y$ with $A^\top y\in K^*$ satisfies $\langle y,b\rangle\geq 0$.
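One way to see both alternatives of the lemma in action: take $K$ to be the positive orthant, so that $AK$ is the cone generated by the columns of $A$ (which is automatically closed in this finitely generated example). The sketch below, with made-up data, uses `scipy.optimize.linprog` to test membership $b\in AK$ and, failing that, to search a bounded box for a separating $y$.

```python
import numpy as np
from scipy.optimize import linprog

# K = positive orthant, so AK is the cone generated by the columns of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])

def farkas(b):
    """Return ('feasible', x) with A x = b, x >= 0, or ('certificate', y)
    with A^T y >= 0 and <y, b> < 0 (a separating hyperplane)."""
    res = linprog(np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)
    if res.status == 0:
        return 'feasible', res.x
    # Minimize <b, y> over the box [-1,1]^2 subject to A^T y >= 0;
    # a negative optimum exhibits the certificate promised by the lemma.
    alt = linprog(b, A_ub=-A.T, b_ub=np.zeros(2), bounds=[(-1, 1)] * 2)
    return 'certificate', alt.x

kind, _ = farkas(A @ np.array([1.0, 2.0]))  # b in AK by construction
assert kind == 'feasible'

kind, y = farkas(np.array([-1.0, 0.0]))     # b outside AK
assert kind == 'certificate'
assert np.all(A.T @ y >= -1e-9) and y @ np.array([-1.0, 0.0]) < 0
```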

Theorem. (P) is feasible and has bounded optimal value $\alpha$ if and only if (D) is feasible and has bounded optimal value $\alpha$.

Proof: Let’s put our problem into a simple form where Farkas’ Lemma applies. We’ll start with (P), though the symmetry in the problem means the same type of arguments apply when starting with (D). Let (P) be feasible with finite optimal value $\alpha$. Then

$b-Ax\in L, ~ x\in K \quad \mbox{implies} \quad \langle c,x\rangle \leq \alpha.$

We also claim that

$-Ax\in L, ~ x\in K \quad \mbox{implies} \quad \langle c,x\rangle \leq 0.$

Indeed, otherwise, there would be some vector $u\in K$ such that $-Au\in L$ and $\langle c,u\rangle>0$, and so for any $x$ which is feasible in (P), we would have that $x+ru$ is also feasible in (P) for any $r>0$, but then $\langle c,x+ru\rangle$ would exceed $\alpha$ for $r$ large, contradicting the assumption.

Let $u:=rx$ for some $r\geq0$. Then

$rb-Au\in L, ~ u\in K \quad \mbox{implies} \quad \langle c,u\rangle \leq r\alpha.$

(The $r=0$ case corresponds to the previous display.) Equivalently,

$\left[\begin{array}{rr}-A&b\\ I&0\\ 0&1\end{array}\right]\left[\begin{array}{r}u\\ r\end{array}\right]\in L\oplus K \oplus \mathbb{R}_+ \quad \mbox{implies}\quad \left\langle\left[\begin{array}{r}u\\ r\end{array}\right],\left[\begin{array}{r}-c\\ \alpha\end{array}\right]\right\rangle\geq0$

In this lifted space, we apply Farkas’ Lemma (implicitly assuming its closedness hypothesis holds for the lifted cone) to get that the following is feasible:

$\left[\begin{array}{rrr} -A^\top&I&0\\ b^\top&0&1\end{array}\right]\left[\begin{array}{r} y\\ z\\ s\end{array}\right]=\left[\begin{array}{r}-c\\ \alpha\end{array}\right],\quad \left[\begin{array}{r} y\\ z\\ s\end{array}\right]\in L^*\oplus K^*\oplus \mathbb{R}_+.$

The first row yields $A^\top y-c=z\in K^*$, $y\in L^*$ meaning (D) is feasible. The second row yields $\langle b,y\rangle=\alpha-s\leq \alpha$, but $\langle b,y\rangle\geq \alpha$ by weak duality. We conclude $\langle b,y\rangle=\alpha$, thereby implying strong duality.     $\Box$
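As a numerical sanity check of strong duality in the LP case (toy data chosen for illustration), one can solve (P) and (D) independently with `scipy.optimize.linprog` and compare optimal values:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP (illustrative): (P) max <c,x> s.t. Ax <= b, x >= 0
#                        (D) min <b,y> s.t. A^T y >= c, y >= 0
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

# linprog minimizes, so negate c to express the primal maximization.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# Rewrite A^T y >= c as -A^T y <= -c to match linprog's <= convention.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

alpha_p = -primal.fun   # optimal value of (P)
alpha_d = dual.fun      # optimal value of (D)
assert abs(alpha_p - alpha_d) < 1e-8   # strong duality: both equal 4
```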

— Complementary Slackness —

The concept of complementary slackness comes directly from strong duality. For feasible $x$ and feasible $y$, weak duality gives

$\langle c,x \rangle \leq \langle A^\top y,x \rangle = \langle y,Ax \rangle \leq \langle y,b \rangle ,$

or equivalently, $\langle c-A^\top y,x \rangle \leq 0 \leq \langle y,b-Ax \rangle$. By strong duality, $x$ and $y$ are optimal if and only if both inequalities hold with equality. That is,

$\langle A^\top y-c,x \rangle =0= \langle y,b-Ax \rangle .$

In this way, the primal variable $x$ is complementary to the dual constraint $A^\top y-c$ just as the dual variable $y$ is complementary to the primal constraint $b-Ax$. These orthogonality relations are sometimes helpful when expressing the optimal $y$ (called the dual certificate) in terms of the optimal $x$.
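A minimal worked example of these orthogonality relations, using made-up LP data: the primal/dual pair below has matching objective values (so both points are optimal by weak duality), and both complementary slackness products vanish exactly.

```python
import numpy as np

# Toy LP (illustrative data): max <c,x> s.t. Ax <= b, x >= 0, and its dual.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

x = np.array([4.0, 0.0])   # primal feasible with <c,x> = 4
y = np.array([1.0, 0.0])   # dual feasible with <b,y> = 4

# Equal objective values certify optimality of both points.
assert c @ x == b @ y == 4.0

# Each variable is orthogonal to the opposite problem's constraint slack.
assert x @ (A.T @ y - c) == 0.0   # <A^T y - c, x> = 0
assert y @ (b - A @ x) == 0.0     # <y, b - A x> = 0
```

Here the second primal constraint is slack ($\langle A x\rangle_2 = 4 < 6$), and correspondingly the second dual variable vanishes.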