Data networks have traditionally been implemented as a set of layers, with each layer designed largely independently of the others. This layering has been recognized as a very important reason for the success of the Internet. However, even though the layers are treated individually during design, parameter choices at one layer can have drastic consequences at other layers. Network utility maximization (also called layering as optimization decomposition) addresses this by posing the entire layered stack as a single global utility optimization problem. The network can then be designed as a distributed solution to this optimization problem.

Though there are many different versions of the utility function used to optimize a network, most are simplifications of the general network utility maximization (GNUM) problem, given as

\[
\begin{aligned}
\text{maximize} \quad & \sum_{s} U_s(x_s, P_{e,s}) + \sum_{j} V_j(w_j) \\
\text{subject to} \quad & Rx \le c(w, P_e), \\
& x \in C_1(P_e) \cap C_2(F), \\
& R \in \mathcal{R}, \quad F \in \mathcal{F}, \quad w \in \mathcal{W},
\end{aligned}
\]

where $x_s$ denotes the rate from source $s$ and $w_j$ denotes the physical-layer characteristics of network element $j$. $R$ is a routing matrix, $F$ is a MAC contention matrix, and $P_e$ is the probability of error. $\mathcal{R}$ is the set of feasible routing matrices, $\mathcal{F}$ is the set of feasible contention-based MAC schemes, and $\mathcal{W}$ is the set of feasible physical-layer resource allocation schemes. $U_s$ and $V_j$ are the utility functions of interest.

The most famous application of NUM is the reverse engineering of TCP Reno as an optimization problem. If a NUM is posed as the maximization of a utility that is a function of the transmission rates, with the transmission rates as the optimization variables and the basic link capacity constraints as the constraints, it can be shown that a distributed iterative solution to the NUM is TCP Reno. This is important for two reasons. First, it shows that TCP Reno is an optimal rate allocation algorithm. Second, it gives insight into the mathematical equations that govern the congestion of a network.
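For concreteness, a minimal statement of the rate-allocation NUM referenced here (reusing the routing matrix $R$ and capacity vector $c$ from the GNUM above, with the error-probability, MAC, and physical-layer variables suppressed) is

\[
\begin{aligned}
\text{maximize} \quad & \sum_{s} U_s(x_s) \\
\text{subject to} \quad & Rx \le c, \quad x \ge 0,
\end{aligned}
\]

where each source utility $U_s$ is typically assumed to be increasing and concave in the rate $x_s$.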

The difficult question in all NUM problems is how to solve the problem distributively in such a way that the result is meaningful. Unfortunately, there is no simple answer, since an approach that works well for one network formulation may be infeasible in another. One common approach is to optimize the dual problem rather than the primal. In this case, the dual variables can often be interpreted as the price of the constraint. For example, in the case of TCP Reno, the dual variable is multiplied by the sum of the data rates over a link minus the capacity of that link. Ideally, we would like this difference to be zero so that the data rates are as high as they can be without causing congestion. As this difference deviates from zero, the dual variable can be seen as the price for that deviation.
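The following is a minimal sketch of this dual approach on a toy network, not an implementation of TCP Reno itself: it assumes logarithmic source utilities and an illustrative two-link routing matrix, step size, and iteration count. Each source picks the rate that maximizes its utility minus the price of its path, and each link adjusts its price in proportion to how far its load deviates from capacity.

```python
import numpy as np

# Toy dual decomposition for a NUM: maximize sum_s U_s(x_s) subject to R x <= c.
# Assumptions for illustration only: U_s(x_s) = log(x_s), a 2-link / 3-source
# routing matrix, and hand-picked step size and iteration count.
R = np.array([[1, 1, 0],          # R[l, s] = 1 if source s routes over link l
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])          # link capacities

lam = np.ones(R.shape[0])         # dual variables: one congestion "price" per link
step = 0.05                       # gradient step for the dual (price) update

for _ in range(2000):
    # Source subproblem: max_x log(x_s) - q_s * x_s, where q_s is the path price;
    # the maximizer is x_s = 1 / q_s.
    q = R.T @ lam
    x = 1.0 / np.maximum(q, 1e-9)

    # Link subproblem: raise the price when demand exceeds capacity, lower it
    # otherwise, and project back onto lam >= 0.
    lam = np.maximum(lam + step * (R @ x - c), 0.0)

print("rates:", x)                # rates settle near the utility-optimal allocation
print("link prices:", lam)        # prices of the capacity constraints at optimum
```

At convergence, a link's price is nonzero only when that link is fully utilized, which is exactly the "price of the constraint" interpretation of the dual variables described above.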