5  Linear optimization

In this chapter we will show how to implement linear optimization problems using Python. The goal is to model optimization problems in Python and then solve them using Gurobi, which is a software package dedicated to solving optimization problems.

Note: Prerequisites

This chapter assumes familiarity with Sections 3.1-3.5 of this online book.

In the next section we explain how to install Gurobi on your personal laptop, and how to connect it to Anaconda. On the university computers Gurobi is already installed and configured correctly, so if you are using such a computer, you can skip the next section.

5.1 Gurobi installation

It is assumed that you already have Python installed on your system. Otherwise, you need to install Python first; the Anaconda distribution is a good choice. If you get stuck anywhere in the installation process, you can find more information in Gurobi’s Quick Start Guide.

5.1.1 Register at Gurobi

Visit the Gurobi website and click on the register button. Open the registration form and make sure that you select the Academic account type, and select Student for the academic position.

5.1.2 Download Gurobi

Download Gurobi from the website. Note that you need to log in with your Gurobi account before you can download Gurobi. Select the distribution that corresponds to your system (e.g., Windows or macOS), and select the regular “Gurobi Optimizer”, not any of the AMPL variations. Unless mentioned otherwise, download the most recent version.

5.1.3 Install Gurobi

Run the installer and follow the installation steps. At some point, the installer may ask whether to add Gurobi to your execution path. It is probably useful to accept this.

5.1.4 Gurobi license

You cannot use Gurobi without a license, so you need to apply for a license. As a student you can request a free academic license; take the Named-User Academic one. After you have obtained the license, you need to activate it for your Gurobi installation. If you open the license details on the Gurobi website, you can see what you need to do: open a command prompt and run the grbgetkey command with the code that corresponds to your license. This command will create your license file. Make sure that you remember where the license file is saved. The default location is probably the best choice.

Note that the grbgetkey command checks that you are on an academic domain, so you need to perform this step on the university network (possibly via a VPN connection). Once the license is installed correctly, you can run Gurobi without an active VPN connection.
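For example, the activation step looks roughly as follows, where the argument is the license key shown on your license page (the key below is only a placeholder, not a real key):

```shell
# Replace the placeholder with the key from your own Gurobi license page.
grbgetkey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```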

5.1.5 Using Gurobi in Python

Gurobi should now be correctly installed, but we also want to be able to use it from Python. Therefore, we need to install Gurobi’s Python package. Open your Anaconda prompt (if you have the Anaconda installation) and run the following commands.

  • conda config --add channels http://conda.anaconda.org/gurobi
  • conda install gurobi

If you don’t have the Anaconda installation, then you can do something similar with the pip command.
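For instance, Gurobi also publishes its Python package on PyPI, so in a regular (non-Anaconda) Python installation something along the following lines should work; check Gurobi's documentation for the options recommended for your version.

```shell
pip install gurobipy
```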

5.1.6 Test your installation

Now you can test whether everything is set up correctly. Open an interactive Python session, for instance using Jupyter Notebook, and try the following commands:

from gurobipy import Model
model = Model()
Set parameter Username
Set parameter LicenseID to value 2715549
Academic license - for non-commercial use only - expires 2026-09-29

If both these commands succeed, then you are done.

If the first command fails, then the Gurobi Python module has not been installed correctly. If the second command fails, then the license has not been set up correctly (make sure the license file is at the right location).

If at any point you need more information about Gurobi, you can always go to the official Gurobi documentation online.

5.2 Basics

All the relevant functionality needed to use Gurobi via Python is contained in the gurobipy package. For our purposes, we only need two components of this package:

  • The Model class that we will use to build optimization problems;
  • The GRB class that contains various “constants” that we use to express notions such as maximize and minimize, as well as binary and integer.

In Python you can import them by adding the following line at the top of a script.

from gurobipy import Model, GRB

5.2.1 Example: Duplo problem

To illustrate how to implement optimization problems in Python, we consider the Duplo problem from the lectures: \begin{array}{llll} \text{maximize}& 15x_1 + 20x_2 && \text{(profit)} \\ \text{subject to} &x_1 + 2x_2 \leq 6 &&\text{(big bricks)} \\ &2x_1 + 2x_2 \leq 8 &&\text{(small bricks)} \\ &x_1, x_2 \geq 0 \end{array} with decision variables

  • x_1: Number of chairs
  • x_2: Number of tables

Let us implement the Duplo problem in Gurobi.

Model object

To start, we will create a variable (or object) model that will contain all the information of the problem, such as the decision variables, objective function and constraints.

In programming terms, we create an object from the Model() class. We can give the instance a name by adding the name keyword argument, with value 'Duplo problem' in our case. We add this name with quotations so that Python knows this is plain text.

# Initialize Gurobi model.
model = Model(name='Duplo problem')

You can think of the Model class object model as a large box that we are going to fill with decision variables, an objective function, and constraints. We will also reserve a small space in this box to store information about the optimal solution to the linear optimization problem, once we have optimized the problem.

The concept of having “objects” is in fact what the programming language Python is centered around. This is known as object-oriented programming, which you will learn more about later in your first full programming course. If you are already interested in this concept, you can have a look here, for example.
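As a small preview of this concept, here is a minimal, made-up class of our own (not part of Gurobi): objects created from it carry their own data, have methods to modify that data, and have attributes that we can read back, much like the Model objects used in this chapter.

```python
class Box:
    """A tiny example class; objects made from it are 'boxes' holding items."""

    def __init__(self, name):
        self.name = name       # an attribute: a piece of information about the object
        self.items = []        # another attribute: the contents of the box

    def add_item(self, item):  # a method: a function attached to the object
        self.items.append(item)

# Create an object (instance) of the Box class and use its method.
box = Box(name='Duplo problem')
box.add_item('chairs')
box.add_item('tables')

print(box.name)   # prints: Duplo problem
print(box.items)  # prints: ['chairs', 'tables']
```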

Decision variables

To add decision variables, an objective function, and constraints to our Model object, we use so-called methods, which are Python functions attached to an object.

To create a decision variable, we can use the addVar() method. As input argument, we include a name for the decision variable.

# Declare the two decision variables.
x1 = model.addVar(name='chairs')
x2 = model.addVar(name='tables')

If your Model object is not called model, but for example model_Duplo, then you should use model_Duplo.addVar(name='chairs') instead. The same applies to all methods introduced later. Never call a model Model (with a capital M), because this spelling is reserved for the Model() class in Gurobi.

When creating a variable, you can specify more properties. For example, we might have an upper and lower bound for a decision variable (such as a nonnegativity constraint). You can use the keyword arguments lb and ub, respectively, for this.

To see all these properties, you can check out the addVar documentation by typing help(Model.addVar) in Python. Looking up the documentation in this way can be done for every method.

help(Model.addVar)
Help on cython_function_or_method in module gurobipy._model:

addVar(self, lb=0.0, ub=1e+100, obj=0.0, vtype='C', name='', column=None)
    ROUTINE:
      addVar(lb, ub, obj, vtype, name, column)

    PURPOSE:
      Add a variable to the model.

    ARGUMENTS:
      lb (float): Lower bound (default is zero)
      ub (float): Upper bound (default is infinite)
      obj (float): Objective coefficient (default is zero)
      vtype (string): Variable type (default is GRB.CONTINUOUS)
      name (string): Variable name (default is no name)
      column (Column): Initial coefficients for column (default is None)

    RETURN VALUE:
      The created Var object.

    EXAMPLE:
      v = model.addVar(ub=2.0, name="NewVar")

If we wanted to create a variable with name test variable, having lower bound 3 and upper bound 7, we could do this with x = model.addVar(lb=3,ub=7,name='test variable'), using in particular the lb and ub keyword arguments.

As you can see above, all the keyword arguments have default values, meaning that if we do not specify them, Gurobi uses the specified default value. For example, by default a variable is continuous (vtype keyword argument) and nonnegative (lb keyword argument), so we do not have to specify these for the Duplo example, because there we have x_1, x_2 \geq 0.

Also note that ub is set to 10^{100}, which, roughly speaking, indicates that the decision variable has no upper bound by default.

Objective function

Now that we have our decision variables, we can use them to define the objective function and the constraints.

We use the setObjective() method to do so, referring to the variables created above. The sense keyword argument must be used to specify whether we want to maximize or minimize the objective function. For this, we use GRB.MAXIMIZE or GRB.MINIMIZE, respectively.

# Specify the objective function.
model.setObjective(15*x1 + 20*x2, sense=GRB.MAXIMIZE)

Constraints

The next step is to declare the constraints using the addConstr() method. Here too, we specify the constraints by referring to the variables x1 and x2. We give the constraints a name as well.

# Add the resource constraints on bricks.
model.addConstr(x1 + 2*x2 <= 6, name='big-bricks')
model.addConstr(2*x1 + 2*x2 <= 8, name='small-bricks')
<gurobi.Constr *Awaiting Model Update*>

Now that the model is completely specified, we are ready to compute the optimal solution. We can do this by applying the optimize() method to our model model.

# Optimize the model
model.optimize()
Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (win64 - Windows 10.0 (19045.2))

CPU model: 12th Gen Intel(R) Core(TM) i7-1265U, instruction set [SSE2|AVX|AVX2]
Thread count: 10 physical cores, 12 logical processors, using up to 12 threads

Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0xadc88607
Coefficient statistics:
  Matrix range     [1e+00, 2e+00]
  Objective range  [2e+01, 2e+01]
  Bounds range     [0e+00, 0e+00]
  RHS range        [6e+00, 8e+00]
Presolve time: 0.00s
Presolved: 2 rows, 2 columns, 4 nonzeros

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    3.5000000e+31   3.500000e+30   3.500000e+01      0s
       2    7.0000000e+01   0.000000e+00   0.000000e+00      0s

Solved in 2 iterations and 0.02 seconds (0.00 work units)
Optimal objective  7.000000000e+01

We can see some output from Gurobi, and the last line tells us that Gurobi found an optimal solution with objective value 70.

To summarize all the steps above, we have given the complete code below (but not executed this time).

from gurobipy import Model, GRB

# Initialize Gurobi model.
model = Model(name='Duplo problem')

# Declare the two decision variables.
x1 = model.addVar(name='chairs')
x2 = model.addVar(name='tables')

# Specify the objective function.
model.setObjective(15*x1 + 20*x2, sense=GRB.MAXIMIZE)

# Add the resource constraints on bricks.
model.addConstr(x1 + 2*x2 <= 6, name='big-bricks')
model.addConstr(2*x1 + 2*x2 <= 8, name='small-bricks')

# Optimize the model
model.optimize()

Recalling that our linear optimization problem object is a “box” filled with decision variables, an objective function and constraints, the box now also contains a small space where the optimal solution is stored.

Let us have a look at how to access information about the optimal solution. The objective function value of the optimal solution, which was also displayed in the output when optimizing the model with model.optimize(), is stored in the ObjVal attribute. An attribute is a piece of information about an object in Python.

# Print text 'Objective value' and the model.ObjVal attribute value
print('Objective value:', model.ObjVal)
Objective value: 70.0

The optimal value of a decision variable can be obtained by accessing the X attribute of that variable.

print('x1 = ', x1.X)
print('x2 = ', x2.X)
x1 =  2.0
x2 =  2.0

If the model has many variables, printing them one by one in this way is cumbersome. It is then easier to iterate over all variables of the model using the getVars() method. This method returns a list of all the decision variables of the model.

print(model.getVars())
[<gurobi.Var chairs (value 2.0)>, <gurobi.Var tables (value 2.0)>]

As you can see above, this already shows the optimal values of the decision variables, but also a lot of unnecessary information.

Let us print these values in a nicer format by iterating over the list model.getVars() using a for-loop with index variable var; you can choose another name than var for the index variable if you want. We print for every variable var its name, stored in the attribute VarName, and its value, stored in the attribute X. In between, we print the = symbol in plain text (hence, the quotations).

Recall that if you want to print multiple data types (such as a variables and text) in a print() statement, you should separate them with commas.

for var in model.getVars():
    print(var.VarName, "=", var.X)
chairs = 2.0
tables = 2.0
Tip: Exercise 1

Médecins sans Frontières (MSF) (Dutch: Artsen zonder Grenzen) wants to build medical kits to use in regions with an epidemic. The following constraints should be taken into account.

  • MSF has raised € 6400 to build two types of medical kits: vaccination and surgical kits. Assembling a surgical kit costs € 320; a vaccination kit € 210.
  • MSF has in total 80 labour hours available in which employees can build these kits. Building a surgical kit requires 2.5 labour hours; a vaccination kit requires 4.5 labour hours.
  • At most 15 vaccination kits can be built.

A linear optimization problem that maximizes the total number of produced kits is given below. The decision variables are the number of surgical kits (x_1) and the number of vaccination kits (x_2) to assemble.

\begin{array}{lrrrrrrl} \max & z = & x_{1} &+ &x_{2} & & & \ \ \ \textit{total number of kits}\\ \text{s.t. } & & 320x_1 &+ & 210x_2& \leq & 6400 & \ \ \ \textit{budget constraint}\\ && 2.5x_1 &+ & 4.5x_2& \leq & 80 & \ \ \ \textit{labor hours constraint}\\ && & & x_2& \leq & 15 & \ \ \ \textit{at most $15$ vac. kits}\\ & &x_1, & & x_2 & \geq & 0 & \ \ \ \textit{nonneg. constraints} \end{array}

Implement this model in Python with a Model object called model_msf, with decision variables whose name keyword arguments are 'Surgical kits' and 'Vaccination kits', respectively. Optimize your model.

If you include the following lines at the end of your code, you should get the output as indicated.

# Replace model_msf by chosen name of Model object if different.
# For example, if your object is called model, then use model.getVars() instead
for var in model_msf.getVars():
    print(var.VarName, "=", var.X)
Surgical kits = 13.114754098360653
Vaccination kits = 10.491803278688531

As you can see in the output of Exercise 1, the optimal values of the decision variables are not integer, even though integrality is a desired property here: we cannot build, e.g., 13.11 kits. We will see later how you can enforce the decision variables to be integer-valued. You can ignore this shortcoming for now.

Tip: Exercise 2

A brewery produces beer in Haarlem and Eindhoven (plants), and ships to Amsterdam, Breda, Gouda, Amersfoort, and Den Bosch (customers).

We can ship beer units from the plants to the customers, taking into account transportation costs, demand of the customers and supply of the plants. In the table below we have indicated the following input data:

  • Unit transport costs (in euros) from the plants (Haarlem, Eindhoven) to the customers (Amsterdam, Breda, Gouda, Amersfoort, Den Bosch);
  • Demand of each customer: the minimum amount that needs to be delivered;
  • Maximum supply capacity of each plant: the maximum amount that can be supplied.
            A’dam   Breda   Gouda   A’foort   Den Bosch   Supply
Haarlem       131     405     188       396         485       47
Eindhoven     554     351     479       366         155       63
Demand         28      16      22        31          12

We define decision variables x_{pc} that model the number of units shipped from plant p to customer c, with the meaning of the indices p and c as follows:

  • Plants: p = 1 for Haarlem, p = 2 for Eindhoven.
  • Customers: c = 1 for Amsterdam, c = 2 for Breda, c = 3 for Gouda, c = 4 for Amersfoort, c = 5 for Den Bosch.

A linear optimization problem that minimizes the total transportation costs subject to the demand and supply constraints is given below.

\begin{array}{rll} \min & \displaystyle 131x_{11} + 405x_{12} + 188x_{13} + 396x_{14} + 485x_{15} + \\ & 554x_{21} + 351x_{22} + 479x_{23} + 366x_{24} + 155x_{25} & \\ {\rm s.t.} & \displaystyle x_{11} + x_{12} + x_{13} + x_{14} + x_{15} \leq 47 & \textit{supply Haarlem}\\ & \displaystyle x_{21} + x_{22} + x_{23} + x_{24} + x_{25} \leq 63 & \textit{supply Eindhoven} \\ & \displaystyle x_{11} + x_{21} \geq 28 & \textit{demand A'dam} \\ & \displaystyle x_{12} + x_{22} \geq 16 & \textit{demand Breda} \\ & \displaystyle x_{13} + x_{23} \geq 22 & \textit{demand Gouda} \\ & \displaystyle x_{14} + x_{24} \geq 31 & \textit{demand A'foort} \\ & \displaystyle x_{15} + x_{25} \geq 12 & \textit{demand Den Bosch} \\ & x_{11}, x_{12}, x_{13},x_{14},x_{15},x_{21},x_{22},x_{23},x_{24},x_{25} \geq 0 & \textit{ship nonneg. amounts} \end{array}

Implement this model in Python with a Model object called model_beer. In the name keyword argument of a decision variable, indicate the plant-customer combination it represents (for example, you could define x11 = model_beer.addVar(name='Haarlem-Amsterdam') for the decision variable x_{11} modeling the Haarlem-Amsterdam combination).

If you include the following lines at the end of your code, you should get the output as indicated.

for var in model_beer.getVars():
    print(var.VarName, "=", var.X)
Haarlem-Amsterdam = 28.0
Haarlem-Breda = 0.0
Haarlem-Gouda = 19.0
Haarlem-Amersfoort = 0.0
Haarlem-Den Bosch = 0.0
Eindhoven-Amsterdam = 0.0
Eindhoven-Breda = 16.0
Eindhoven-Gouda = 3.0
Eindhoven-Amersfoort = 31.0
Eindhoven-Den Bosch = 12.0

As you might have experienced in Exercise 2, once the number of decision variables and/or constraints grows, it becomes more tedious to write out the full problem in Python. Luckily, there are more efficient ways to implement the above problem that only require a couple of lines of code, even if there are many more plants and customers! Section 5.4 introduces some of these techniques.

5.2.2 Infeasible models

Not every linear optimization problem has an optimal solution. For example, the problem

\begin{array}{lrrrrrrl} \min & z = & x_{1} &+ &x_{2} & & & \\ \text{s.t. } & &x_1 &+ & x_2& \leq & -1 & \\ & &x_1, & & x_2 & \geq & 0 & \end{array}

has no feasible solution, because the sum of two nonnegative numbers (x_1,x_2 \geq 0) can never be smaller than or equal to -1 (x_1 + x_2 \leq -1). The implementation of this problem is given below. It can be seen from the output (on the last line) that the model is indeed infeasible.

# Initialize Gurobi model.
model = Model(name='Infeasible model')

# Declare the two decision variables.
x1 = model.addVar() # We leave out the name keyword argument
x2 = model.addVar() 

# Specify the objective function.
model.setObjective(x1 + x2, sense=GRB.MINIMIZE)

# Add the (infeasible) constraint.
model.addConstr(x1 + x2 <= -1)

# Optimize the model
model.optimize()
Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (win64 - Windows 10.0 (19045.2))

CPU model: 12th Gen Intel(R) Core(TM) i7-1265U, instruction set [SSE2|AVX|AVX2]
Thread count: 10 physical cores, 12 logical processors, using up to 12 threads

Optimize a model with 1 rows, 2 columns and 2 nonzeros
Model fingerprint: 0xf5807d19
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [0e+00, 0e+00]
  RHS range        [1e+00, 1e+00]
Presolve removed 0 rows and 2 columns
Presolve time: 0.00s

Solved in 0 iterations and 0.02 seconds (0.00 work units)
Infeasible model

You can look up the status of the optimization in the Status attribute of a model.

model.Status
3

The above output number is not informative right away. The number represents the Optimization Status Code, whose meaning you can look up in Gurobi’s online documentation.

Tip: Exercise 3

When implementing and optimizing the following problem \begin{array}{lrrrrrrl} \max & z = & 2x_{1} &+ &x_{2} & & & \\ \text{s.t. } & &x_1 &- & x_2& \leq & 5 & \\ & &x_1, & & x_2 & \geq & 0 & \end{array} Python returns an Optimization Status Code of 4. Look up in the online documentation, more specifically the Reference Manual, what this means.

5.3 Integer variables

If you are not interested in fractional solutions, but want to enforce the decision variables to take integer values, you can do this with the vtype keyword argument.

Consider the following linear optimization problem (without integrality constraints).

\begin{array}{lrrrrrrl} \max & z = & 2x_{1} &+ &x_{2} & & & \\ \text{s.t. } & &x_1 &+ & x_2& \leq & 2.5 & \\ & &x_1, & & x_2 & \geq & 0 & \end{array}

# Initialize Gurobi model.
model_frac = Model(name='Fractional problem')

# Suppress Gurobi Optimizer output
model_frac.setParam('OutputFlag',0)

# Declare the two decision variables.
x1 = model_frac.addVar(name="x1") 
x2 = model_frac.addVar(name="x2") 

# Specify the objective function.
model_frac.setObjective(2*x1 + x2, sense=GRB.MAXIMIZE)

# Add the constraint.
model_frac.addConstr(x1 + x2 <= 2.5)

# Optimize the model
model_frac.optimize()

# Print optimal values of decision variables
for var in model_frac.getVars():
    print(var.VarName, "=", var.X)
x1 = 2.5
x2 = 0.0

If we specify vtype=GRB.INTEGER in the addVar() method when creating the variables, Gurobi will find an optimal solution with all decision variables being integers.

# Initialize Gurobi model.
model_int = Model(name='Integer problem')

# Suppress Gurobi Optimizer output
model_int.setParam('OutputFlag',0)

# Declare the two decision variables.
x1 = model_int.addVar(name="x1",vtype=GRB.INTEGER) 
x2 = model_int.addVar(name="x2",vtype=GRB.INTEGER) 

# Specify the objective function.
model_int.setObjective(2*x1 + x2, sense=GRB.MAXIMIZE)

# Add the constraint.
model_int.addConstr(x1 + x2 <= 2.5)

# Optimize the model
model_int.optimize()

# Print optimal values of decision variables
for var in model_int.getVars():
    print(var.VarName, "=", var.X)
x1 = 2.0
x2 = -0.0

Another common case is where the decision variables are supposed to be binary, meaning they can only take values in \{0,1\}. Setting variables to be binary can be done with vtype=GRB.BINARY.

# Initialize Gurobi model.
model_bin = Model(name='Binary problem')

# Suppress Gurobi Optimizer output
model_bin.setParam('OutputFlag',0)

# Declare the two decision variables.
x1 = model_bin.addVar(name="x1",vtype=GRB.BINARY) 
x2 = model_bin.addVar(name="x2",vtype=GRB.BINARY) 

# Specify the objective function.
model_bin.setObjective(2*x1 + x2, sense=GRB.MAXIMIZE)

# Add the constraint.
model_bin.addConstr(x1 + x2 <= 2.5)

# Optimize the model
model_bin.optimize()

# Print optimal values of decision variables
for var in model_bin.getVars():
    print(var.VarName, "=", var.X)
x1 = 1.0
x2 = 1.0
Tip: Exercise 4

Take your solution from Exercise 1, and modify it so that the decision variables only take integer values. Optimize the model again.

Your output should now be as follows:

# Print optimal values of decision variables
for var in model_msf.getVars():
    print(var.VarName, "=", var.X)
Surgical kits = 9.0
Vaccination kits = 15.0

5.4 Beyond the basics

You can also do more complicated things with gurobipy, such as defining many decision variables with one command, or using a for-loop to create many constraints in one go.

To illustrate these concepts, we will write compact code to solve a general standard form problem with m constraints and n decision variables: \min \mathbf{c'x} subject to \mathbf{Ax} = \mathbf{b}, \mathbf{x} \geq \mathbf{0}, where \mathbf{c} \in \mathbb{R}^n, \mathbf{b} \in \mathbb{R}^m, \mathbf{A} \in \mathbb{R}^{m \times n}, and \mathbf{x} = (x_1,\dots,x_{n}).

More explicitly, the problem is as follows:

\begin{array}{rrrcrrlll} \min & c_1 x_1 &+& \cdots& + & c_{n}x_{n}& & & \\ {\rm s.t.} & a_{11}x_1& +& \cdots& +& a_{1n} x_{n} &= &b_1 &\\ & & & \vdots & & & & \\ & a_{m1}x_1& +& \cdots& +& a_{mn} x_{n} &= &b_{m}& \\ & x_1, & & \cdots& & x_{n} & \geq & 0 & \\ \end{array}

Suppose we are given the following input data in NumPy arrays (see the Linear Algebra chapter).

import numpy as np

# Objective function coefficients
c = np.array([3, 1, 7, 2, -1]) # n = 5

# Matrix A (m = 3, n = 5)
A = np.array([
    [2, 1, 0, 3, -1],
    [1, 0, 4, 0, 2],
    [0, 3, -2, 1, 1]
])

# Right-hand side vector b
b = np.array([10, 12, 5]) # m = 3

Decision variables

To create the decision variables x_1,\dots,x_5, we could use addVar() five times, but this is rather cumbersome, especially if we had many more decision variables.

Instead, we can use the related method addVars(), which allows us to create many variables simultaneously.

# We define m and n here for convenience later
m = 3 # Alternatively, m = len(b)
n = 5 # Alternatively, n = len(c)

# Create Model object
model = Model("Standard form problem")

# Add decision variables
x = model.addVars(n)

With the syntax x = model.addVars(n) we tell Python to create n (= 5) decision variables. You can use the i-th decision variable to define constraints and the objective function by indexing x at position i-1, i.e., by using x[i-1]. Recall that Python starts counting from zero when indexing data objects such as lists.
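As a quick illustration with a plain Python list standing in for the five decision variables (not Gurobi code):

```python
# Stand-ins for the decision variables x_1, ..., x_5.
x = ['x1', 'x2', 'x3', 'x4', 'x5']

print(x[0])  # the 1st variable sits at index 0, so this prints: x1
print(x[4])  # the 5th variable sits at index 4, so this prints: x5
# x[5] would raise an IndexError: valid indices run from 0 to 4 only.
```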

If you want you can also add a list of names for the decision variables using the name keyword argument. Make sure that the number of elements in the list matches the value of n.

# Add decision variables
x = model.addVars(n,name=["Var1","Var2","Var3","Var4","Var5"])

Objective function

We continue with creating the objective function by indexing the variables in x and the array c containing the objective function coefficients.

# Create objective function
model.setObjective(x[0]*c[0] + x[1]*c[1] + x[2]*c[2] + x[3]*c[3] + x[4]*c[4])

We can do this even more compactly by using the quicksum() function, which has to be imported from gurobipy. In the code below we do two things (recall that n = 5):

  1. Create the list y = [x[0]*c[0], x[1]*c[1], x[2]*c[2], x[3]*c[3], x[4]*c[4]] using the concept of list comprehension.
  2. Use the quicksum() function to add up the elements in the list y and set the resulting quantity as our objective function.
from gurobipy import quicksum

# Create objective function
y = [x[j]*c[j] for j in range(n)] # List with terms of the objective function
model.setObjective(quicksum(y)) # Sum up the terms in the list y

The part [x[j]*c[j] for j in range(n)] is the compact list comprehension syntax for carrying out the piece of code below, where we repeatedly append the terms x[j]*c[j] as elements to an (initially empty) list y using the append() method for a list.

y = [] # Start with empty list y
for j in range(n): # Iterate over j = 0,1,2,...,n-1
    y.append(x[j]*c[j]) # Append the element x[j]*c[j] to the list y 

Note that these computations are actually still quite abstract at this point, because the decision variables have no actual values yet! You should think of them more as symbolic computations.
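To make the equivalence concrete, here is the same pattern with made-up numbers instead of decision variables; the comprehension and the explicit loop build the same list, and summing that list is what quicksum() does symbolically for the objective.

```python
c = [3, 1, 7, 2, -1]       # objective coefficients from above
x = [10, 20, 30, 40, 50]   # made-up values standing in for the variables
n = len(c)

# The list comprehension ...
y_comp = [x[j] * c[j] for j in range(n)]

# ... and the equivalent explicit loop.
y_loop = []
for j in range(n):
    y_loop.append(x[j] * c[j])

print(y_comp)            # prints: [30, 20, 210, 80, -50]
print(y_comp == y_loop)  # prints: True
print(sum(y_comp))       # prints: 290
```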

Constraints

We continue with adding the equality constraints. We use a for-loop to loop over the equalities a_{i1}x_1 + \dots + a_{in}x_n = b_i for i = 1,\dots,m (note that Python starts counting at 0, though). Note that the array A is a list of lists, and that A[i] corresponds to the coefficients a_{i1},\dots,a_{in}.

print(A[2]) # Last row i = 2 of the matrix A
[ 0  3 -2  1  1]

Furthermore, A[i][j] is then the j-th element on the i-th row.

print(A[2][3]) # Last row i = 2 and element j = 3 of the matrix A
1

For a fixed i, the list [A[i][j]*x[j] for j in range(n)] contains the coefficients of the i-th row multiplied by the corresponding variables in x, i.e., [a_{i1}x_1, ..., a_{in}x_n]. Applying quicksum() to this list then builds the expression a_{i1}x_1 + \dots + a_{in}x_n. This should equal b_i, so we model the constraint as quicksum([A[i][j]*x[j] for j in range(n)]) == b[i]. This is done below.

# Create constraints
for i in range(m):
    y = [A[i][j]*x[j] for j in range(n)]
    model.addConstr(quicksum(y) == b[i])
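As a sanity check on this indexing (independent of Gurobi), we can plug in made-up values for the variables and verify that the comprehension-plus-sum pattern reproduces the usual row-times-vector product from linear algebra:

```python
import numpy as np

# Same matrix A as above, and made-up values v for the five variables.
A = np.array([
    [2, 1, 0, 3, -1],
    [1, 0, 4, 0, 2],
    [0, 3, -2, 1, 1]
])
v = [1, 2, 3, 4, 5]

# Left-hand side of constraint i = 2, built exactly like in the loop above.
lhs = sum(A[2][j] * v[j] for j in range(5))

print(lhs)       # 0*1 + 3*2 - 2*3 + 1*4 + 1*5 = 9
print(A[2] @ v)  # NumPy's row-times-vector product gives the same value: 9
```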

Putting it all together, our code looks as follows. The beauty of this is that even if the input data arrays A, b and c get larger, the code below does not.

import numpy as np
from gurobipy import Model, GRB, quicksum

# Objective function coefficients
c = np.array([3, 1, 7, 2, -1]) # n = 5

# Matrix A (m = 3, n = 5)
A = np.array([
    [2, 1, 0, 3, -1],
    [1, 0, 4, 0, 2],
    [0, 3, -2, 1, 1]
])

# Right-hand side vector b
b = np.array([10, 12, 5]) # m = 3

# We define m and n here for convenience later
m = 3
n = 5

# Create Model object
model = Model("Standard form problem")

# Suppress Gurobi output messages
model.setParam('OutputFlag',0)

# Add decision variables
x = model.addVars(n,name=["Var1","Var2","Var3","Var4","Var5"])

# Create objective function
model.setObjective(quicksum(x[j]*c[j] for j in range(n)))

# Create constraints
for i in range(m):
    model.addConstr(quicksum(A[i][j]*x[j] for j in range(n)) == b[i]) 

# Optimize the model
model.optimize()

# Print optimal values of decision variables
for var in model.getVars():
    print(var.VarName, "=", var.X)
Var1 = 4.75
Var2 = 0.0
Var3 = 0.0
Var4 = 1.3749999999999998
Var5 = 3.625

The critical reader, however, might note that if we want to include the variable names with the list ["Var1","Var2","Var3","Var4","Var5"], then this list would need to get longer when n increases.

You can replace this list by the list comprehension [f"Var{i+1}" for i in range(n)] to quickly generate the list ["Var1","Var2", ..., "Varn"]. This comprehension loops over the values i = 0,1,\dots,n-1.

The part f"Var{i+1}" is called a formatted string (of text). It allows us to include the value of the expression i+1 in plain text (we do +1 because we want the variable names to start at 1). The value you want to include should be put in curly brackets {}, and you have to add an f in front of the string of text in quotations so that Python knows it should format the value of i+1 into the text.
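For instance, evaluating this comprehension for a small value of n:

```python
n = 3
names = [f"Var{i+1}" for i in range(n)]
print(names)  # prints: ['Var1', 'Var2', 'Var3']
```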

Now we have code that is truly independent of the values of m and n. That is, for any input data A, b and c, the code below creates a model to solve the standard form problem.

import numpy as np
from gurobipy import Model, GRB, quicksum

# Objective function coefficients
c = np.array([3, 1, 7, 2, -1]) # n = 5

# Matrix A (m = 3, n = 5)
A = np.array([
    [2, 1, 0, 3, -1],
    [1, 0, 4, 0, 2],
    [0, 3, -2, 1, 1]
])

# Right-hand side vector b
b = np.array([10, 12, 5]) # m = 3

# We define m and n here for convenience later
m = 3 
n = 5

# Create Model object
model = Model("Standard form problem")

# Suppress Gurobi output messages
model.setParam('OutputFlag',0)

# Add decision variables
x = model.addVars(n,name=[f"Var{i+1}" for i in range(n)])

# Create objective function
model.setObjective(quicksum(x[j]*c[j] for j in range(n)))

# Create constraints
for i in range(m): 
    model.addConstr(quicksum(A[i][j]*x[j] for j in range(n)) == b[i]) 

# Optimize the model
model.optimize()

# Print optimal values of decision variables
for var in model.getVars():
    print(var.VarName, "=", var.X)
Var1 = 4.75
Var2 = 0.0
Var3 = 0.0
Var4 = 1.3749999999999998
Var5 = 3.625