Parameter identification

The module myokit.lib.fit provides tools to help with the process of parameter identification: Finding the parameter values for (part of) a model that produce the desired output.

Global optimization methods are provided that use heuristic search strategies to explore large search spaces and find reasonable results. Further refinement of these solutions can then be done using a local optimization method. Some information about the quality of a solution can be obtained by fitting a second-order polynomial to points near the solution: if the point is a minimum, this should result in a (hyper-)paraboloid with a critical point at the solution. Methods are provided to evaluate a large number of points in parallel or on a single CPU. These can also be used to perform a brute-force mapping of the parameter space. Finally, some methods are provided to visualize 2D parameter spaces. Because points found using, for example, a particle swarm optimization will not be regularly distributed in space, these visualization methods are based on Voronoi diagrams.

Global optimization

The methods listed below perform global optimization (pso) or combined global/local optimization (cmaes, snes, xnes). The CMA-ES method is an interface to the Python cma module.

myokit.lib.fit.cmaes(f, bounds, hint=None, sigma=None, n=None, ipop=0, parallel=False, fatol=1e-11, target=1e-06, callback=None, verbose=False, args=None)

Global/local optimizer that minimizes a function f within a specified set of bounds using the CMA-ES methods provided by the cma module [1, 2].

CMA-ES stands for Covariance Matrix Adaptation Evolution Strategy, and is designed for non-linear derivative-free optimization problems [1]. To run, the cma module must have been installed (for example via pip).

A parallel (multiprocessing) version of the algorithm can be run by setting parallel to True. Please keep in mind that the objective function f cannot access any shared memory in this scenario. See ParallelEvaluator for details.

The method will stop when one of the following criteria is met:

  1. Absolute changes in f(x) are smaller than fatol
  2. The value of f(x) is smaller than target.
  3. The maximum number of iterations is reached.

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
bounds
A list of m tuples (min_i, max_i) specifying the minimum and maximum values in the search space for each dimension i.
hint=None
A suggested starting point. Must be within the given bounds. If no hint is given the center of the parameter space is used. A (uniform) random point can be selected by setting hint='random'.
sigma=None
A suggested standard deviation; the method works best if the global optimum lies within +/- 3*sigma from the initial guess. If no value for sigma is given, the default value is 1/6 * min(upper - lower), where upper and lower are the boundaries on the parameter space.
n=None
Can be used to manually overrule the population size used in CMA-ES. By default, this is set by CMA-ES to be 4 + int(3 * np.log(d)), where d is the number of dimensions of the search space. When running in parallel, Myokit will round this up to the nearest multiple of the CPU count.
ipop=0
Set to any integer greater than 1 to enable increasing population size restarts (IPOP). This will re-run the simulation ipop times (for a total number of ipop + 1 runs). Each re-run will be from a randomly chosen starting point (within the boundaries) and with the population size increased by a factor 2. The recommended value for ipop is either 0 or 8.
parallel=False
Set this to True to run a multi-process version of the search that utilizes all available cores. See EvaluatorProcess for the details of using multi-process parallelisation and the requirements this places on the function f.
fatol=1e-11
The smallest difference in f(x) between subsequent iterations that is acceptable for convergence.
target=1e-6
The value of f(x) that is acceptable for convergence.
callback=None
An optional function to be called after each iteration with arguments (pg, fg) where pg is the current best position and fg is the corresponding score.
verbose=False
Set to True to print detailed progress information to the console.
args=None
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xbest, fbest) where xbest is the best position found and fbest = f(xbest).

References:

[1] https://www.lri.fr/~hansen/cmaesintro.html

[2] Hansen, Mueller, Koumoutsakos (2006) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES).

Note: This method requires the `cma` module to be installed.
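
A minimal usage sketch is shown below. The two-parameter score function is purely illustrative (not part of Myokit), and the call uses only arguments documented above.

import myokit.lib.fit as fit

# Illustrative objective with a known optimum at (2, -1).
def score(x):
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2

bounds = [(-5, 5), (-5, 5)]
xbest, fbest = fit.cmaes(score, bounds, hint='random', target=1e-9)
print(xbest, fbest)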

myokit.lib.fit.pso(f, bounds, hints=None, n=4, r=0.5, v=0.001, parallel=False, target=1e-06, max_iter=500, hybrid=False, return_all=False, callback=None, callback_particles=None, verbose=False, args=None)

Global optimizer that minimizes a function f within a specified set of bounds using particle swarm optimization (PSO) [1].

In a particle swarm optimization, the parameter space is explored by n independent particles. The particles perform a pseudo-random walk through the parameter space, guided by their own personal best score and the global optimum found so far.

A parallel (multiprocessing) version of the algorithm can be run by setting parallel to True. Please keep in mind that the objective function f cannot access any shared memory in this scenario. See ParallelEvaluator for details.

The method will stop when one of the following criteria is met:

  1. The value of f(x) is smaller than target.
  2. The maximum number of iterations is met (where one “iteration” means each of the n particles has been updated).

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value. Alternatively, in hybrid mode (which is disabled by default), f(x) should return a tuple (x2, fx2) where x2 is a new value for x and fx2 is the score function evaluated at x2.
bounds
A list of m tuples (min_i, max_i) specifying the minimum and maximum values in the search space for each dimension i. The returned solution is guaranteed to be within these bounds.
hints
A (possibly empty) list of points to use as initial values. Each point x is specified as a sequence of m coordinates, such as can be passed to f(x). Hints will be used as long as they are available. If there are more hints than particles only the first hints in the sequence will be used. If there are more particles than hints, the remaining particles will be initialized at uniformly random positions in the search space.
n=4
The number of particles to use in the search.
r=0.5
A number between 0 and 1, specifying the ratio between the local attraction (r=1) and global attraction (r=0, see below).
v=1e-3
The maximum initial velocity, as a fraction of the bounded space. By default, the velocity in each direction i is set as vs[i] = U(0, v * (upper[i] - lower[i])), where U is a uniform sampling function, and upper and lower represent the given upper and lower bounds in direction i.
parallel=False
Set this to True to run a multi-process version of the search that utilizes all available cores. See EvaluatorProcess for the details of using multi-process parallelisation and the requirements this places on the function f.
target=1e-6
The method will stop searching when a score below the target value is found.
max_iter=500
The maximum number of iterations to perform.
hybrid=False
Set this to True to perform a hybrid optimization. In this case, the function passed as the first argument should return a tuple (x2, fx2) where x2 is the updated position and fx2 is its score.
return_all=False
Set this to True to return all results, instead of only the best particle.
callback=None
An optional function to be called after each global step with arguments (pg, fg) where pg is the current best position and fg is the corresponding score.
callback_particles=None
An optional function to be called after each global step with arguments (xs, vs, fs) where xs, vs and fs are lists containing the current particle positions, velocities and scores respectively.
verbose=False
Set to True to have progress information printed to the terminal. If set to any non-zero integer this will determine the number of iterations between updates.
args=None
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method starts by creating a swarm of n particles and assigning each an initial position and initial velocity (see the explanation of the arguments hints and v for details). Each particle’s score is calculated and set as the particle’s current best local score pl. The best score of all the particles is set as the best global score pg.

Next, an iterative procedure is run that updates each particle’s velocity v and position x using:

v[k] = v[k-1] + al * (pl - x[k-1]) + ag * (pg - x[k-1])
x[k] = x[k-1] + v[k]

Here, x[k] is the particle's current position and v[k] its current velocity. The values al and ag are scalars randomly sampled from a uniform distribution, bound by r * 4.1 and (1 - r) * 4.1 respectively. Thus a swarm with r = 1 will only use local information, while a swarm with r = 0 will only use global information. The de facto standard is r = 0.5. The random sampling is done each time al and ag are used: at each time step every particle performs m samplings, where m is the dimensionality of the search space.

Pseudo-code algorithm:

almax = r * 4.1
agmax = 4.1 - almax
while stopping criterion not met:
    for i in [1, 2, .., n]:
        if f(x[i]) < f(p[i]):
            p[i] = x[i]
        pg = min(p[1], p[2], .., p[n])
        for j in [1, 2, .., m]:
            al = uniform(0, almax)
            ag = uniform(0, agmax)
            v[i,j] += al * (p[i,j] - x[i,j]) + ag * (pg[j] - x[i,j])
            x[i,j] += v[i,j]

A hybrid PSO method [2] can be implemented by adding the optional argument hybrid=True and changing the score function to return a tuple (x, fx) where x is an updated position and fx is its score. Suggested positions outside the set boundaries will be ignored.

The method returns a tuple (pg, fg) where pg is the best position found and fg = f(pg). If return_all was set to True, the tuple will contain a vector pg with each particle’s best location and a vector fg with the corresponding scores, sorted best to worst.

References:

[1] Kennedy, Eberhart (1995) Particle Swarm Optimization. IEEE International Conference on Neural Networks

[2] Loewe, Wilhelms, Seemann et al. (2016) Parameter estimation of ion current formulations requires hybrid optimization approach to be both accurate and reliable. Frontiers in Bioengineering and Biotechnology
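
For illustration, a minimal call might look like the sketch below; the score function and the hint are made up for this example, and only arguments documented above are used.

import myokit.lib.fit as fit

# Illustrative two-parameter objective (not part of Myokit).
def score(x):
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

bounds = [(-5, 5), (-5, 5)]
hints = [[0.0, 0.0]]    # optional starting position for the first particle
pg, fg = fit.pso(score, bounds, hints=hints, n=8, max_iter=200)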

myokit.lib.fit.snes(f, bounds, hint=None, n=None, parallel=False, target=1e-06, max_iter=1000, callback=None, verbose=False, args=None)

Global/local optimizer that minimizes a function f within a specified set of bounds using the SNES method described in [1, 2].

SNES stands for Separable Natural Evolution Strategy, and is designed for non-linear derivative-free optimization problems in high dimensions and with many local minima [1].

A parallel (multiprocessing) version of the algorithm can be run by setting parallel to True. Please keep in mind that the objective function f cannot access any shared memory in this scenario. See ParallelEvaluator for details.

The method will stop when one of the following criteria is met:

  1. The value of f(x) is smaller than target.
  2. The maximum number of iterations is reached.

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
bounds
A list of m tuples (min_i, max_i) specifying the minimum and maximum values in the search space for each dimension i.
hint=None
A suggested starting point. Must be within the given bounds. If no hint is given the center of the parameter space is used. A (uniform) random point can be selected by setting hint='random'.
n=None
Can be used to manually overrule the population size used in SNES. By default, this is set to be 4 + int(3 * np.log(d)), where d is the number of dimensions of the search space. When running in parallel, Myokit will round this default value up to the nearest multiple of the CPU count.
parallel=False
Set this to True to run a multi-process version of the search that utilizes all available cores. See EvaluatorProcess for the details of using multi-process parallelisation and the requirements this places on the function f.
target=1e-6
The value of f(x) that is acceptable for convergence.
max_iter=1000
The maximum number of iterations to perform.
callback=None
An optional function to be called after each iteration with arguments (pg, fg) where pg is the current best position and fg is the corresponding score.
verbose=False
Set to True to have progress information printed to the terminal. If set to any non-zero integer this will determine the number of iterations between updates.
args=None
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xbest, fbest) where xbest is the best position found and fbest = f(xbest).

References:

[1] Schaul, Glasmachers, Schmidhuber (2011) High dimensions and heavy tails for natural evolution strategies. Proceedings of the 13th annual conference on Genetic and evolutionary computation. ACM, 2011.

[2] PyBrain: The Python machine learning library (http://pybrain.org)
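
As a usage sketch (with a made-up four-dimensional objective; only arguments documented above are used):

import myokit.lib.fit as fit

# Illustrative objective with its optimum at (0.5, 0.5, 0.5, 0.5).
def score(x):
    return sum((xi - 0.5) ** 2 for xi in x)

bounds = [(0, 1)] * 4
xbest, fbest = fit.snes(score, bounds, hint='random', max_iter=1000)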

myokit.lib.fit.xnes(f, bounds, hint=None, n=None, parallel=False, target=1e-06, max_iter=1000, callback=None, verbose=False, args=None)

Global/local optimizer that minimizes a function f within a specified set of bounds using the xNES method described in [1, 2].

xNES stands for Exponential Natural Evolution Strategy, and is designed for non-linear derivative-free optimization problems [1].

A parallel (multiprocessing) version of the algorithm can be run by setting parallel to True. Please keep in mind that the objective function f cannot access any shared memory in this scenario. See ParallelEvaluator for details.

The method will stop when one of the following criteria is met:

  1. The value of f(x) is smaller than target.
  2. The maximum number of iterations is reached.

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
bounds
A list of m tuples (min_i, max_i) specifying the minimum and maximum values in the search space for each dimension i.
hint=None
A suggested starting point. Must be within the given bounds. If no hint is given the center of the parameter space is used. A (uniform) random point can be selected by setting hint='random'.
n=None
Can be used to manually overrule the population size used in xNES. By default, this is set to be 4 + int(3 * np.log(d)), where d is the number of dimensions of the search space. When running in parallel, Myokit will round this default value up to the nearest multiple of the CPU count.
parallel=False
Set this to True to run a multi-process version of the search that utilizes all available cores. See EvaluatorProcess for the details of using multi-process parallelisation and the requirements this places on the function f.
target=1e-6
The value of f(x) that is acceptable for convergence.
max_iter=1000
The maximum number of iterations to perform.
callback=None
An optional function to be called after each iteration with arguments (pg, fg) where pg is the current best position and fg is the corresponding score.
verbose=False
Set to True to have progress information printed to the terminal. If set to any non-zero integer this will determine the number of iterations between updates.
args=None
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xbest, fbest) where xbest is the best position found and fbest = f(xbest).

References:

[1] Glasmachers, Schaul, Schmidhuber et al. (2010) Exponential natural evolution strategies. Proceedings of the 12th annual conference on Genetic and evolutionary computation

[2] PyBrain: The Python machine learning library (http://pybrain.org)

Note: This method requires the `scipy` module to be installed.
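
A usage sketch with a progress callback is given below; the Rosenbrock-style score function is illustrative only.

import myokit.lib.fit as fit

# Illustrative objective (a Rosenbrock-like valley).
def score(x):
    return (1 - x[0]) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2

def report(pg, fg):
    # Called after each iteration with the current best position and score.
    print(pg, fg)

bounds = [(-2, 2), (-2, 2)]
xbest, fbest = fit.xnes(score, bounds, callback=report, target=1e-8)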

Local optimization

The local optimization methods presented here are interfaces to methods provided by SciPy (http://scipy.org).

myokit.lib.fit.bfgs(f, x, bounds, max_iter=500, args=None)

Local optimizer that minimizes a function f using the constrained quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno provided by SciPy [1,2,3].

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
x
An initial guess for the x with the lowest f(x).
bounds
A sequence of tuples (xmin, xmax) with the boundaries for each dimension of x.
max_iter
The maximum number of iterations to perform.
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xopt, fopt) where xopt is the best position found and fopt = f(xopt).

References:

[1] http://scipy.org

[2] A Limited Memory Algorithm for Bound Constrained Optimization. Byrd, R H and P Lu and J. Nocedal (1995) SIAM Journal on Scientific and Statistical Computing 16 (5): 1190-1208.

[3] L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. Zhu, C and R H Byrd and J Nocedal (1997) ACM Transactions on Mathematical Software 23 (4): 550-560.

Note: This method requires SciPy to be installed.
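
A common pattern is to refine the result of a global search with a local method. The sketch below re-uses the kind of illustrative score function shown in the examples above; only arguments documented here are used.

import myokit.lib.fit as fit

# Illustrative objective with a known optimum at (2, -1).
def score(x):
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2

bounds = [(-5, 5), (-5, 5)]
x0, f0 = fit.pso(score, bounds, n=8, max_iter=50)   # rough global search
xopt, fopt = fit.bfgs(score, x0, bounds)            # local refinement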

myokit.lib.fit.nelder_mead(f, x, xatol=0.0001, fatol=0.0001, max_iter=500, args=None)

Local optimizer that minimizes a function f using the Nelder-Mead simplex method provided by SciPy [1,2].

The method will stop when one of the following criteria is met:

  1. Absolute changes in x are smaller than xatol _and_ absolute changes in f(x) are smaller than fatol
  2. The maximum number of iterations is reached.

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
x
An initial guess for the x with the lowest f(x).
xatol
Sets the _absolute_ difference in x between iterations that is acceptable for convergence.
fatol
Sets the _absolute_ difference in f(x) between iterations that is acceptable for convergence.
max_iter
The maximum number of iterations to perform.
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xopt, fopt) where xopt is the best position found and fopt = f(xopt).

References:

[1] http://scipy.org

[2] A Simplex Method for Function Minimization. Nelder, J A, and R Mead (1965) The Computer Journal 7: 308-13.

Note: This method requires SciPy to be installed.
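
Unlike bfgs(), this method takes no bounds, only an initial guess. A minimal sketch with an illustrative (non-smooth) objective:

import myokit.lib.fit as fit

# Illustrative non-smooth objective with its minimum at (3, -2).
def score(x):
    return abs(x[0] - 3) + abs(x[1] + 2)

xopt, fopt = fit.nelder_mead(score, [0.0, 0.0], xatol=1e-6, fatol=1e-6)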

myokit.lib.fit.powell(f, x, xtol=0.0001, ftol=0.0001, max_iter=500, args=None)

Local optimizer that minimizes a function f using the implementation of Powell’s conjugate direction method provided by SciPy [1, 2].

Powell created a number of derivative free optimization methods. The method used here is one of the earliest, and is described in [2].

The method will stop when one of the following criteria is met:

  1. Relative changes in x are smaller than xtol
  2. Relative changes in f(x) are smaller than ftol
  3. The maximum number of iterations is reached.

Arguments:

f
A function to minimize. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
x
An initial guess for the x with the lowest f(x).
xtol
Sets the _relative_ difference in x between iterations that is acceptable for convergence.
ftol
Sets the _relative_ difference in f(x) between iterations that is acceptable for convergence.
max_iter
The maximum number of iterations to perform.
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

The method returns a tuple (xopt, fopt) where xopt is the best position found and fopt = f(xopt).

References:

[1] http://scipy.org

[2] An efficient method for finding the minimum of a function of several variables without calculating derivatives. Powell, M. J. D. (1964) Computer Journal 7 (2): 155-162. doi:10.1093/comjnl/7.2.155

Note: This method requires SciPy to be installed.
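
A minimal sketch, again with a purely illustrative objective:

import myokit.lib.fit as fit

# Illustrative objective with its minimum at (1, 2, -3).
def score(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2 + (x[2] + 3) ** 2

xopt, fopt = fit.powell(score, [0.0, 0.0, 0.0], xtol=1e-8, ftol=1e-8)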

Checking minima

The following functions can be used to investigate whether points in the parameter space are (local) minima. Unfortunately, they often fail in complex search spaces.

myokit.lib.fit.quadfit(x, y)

Calculates the unique quadratic polynomial through a set of points.

The argument y must be a sequence of m scalars, while x should contain m points of dimension n > 0.

The function calculates A, B and C such that

y[k] = A + B’ * x[k] + (1/2) * x[k]’ * C * x[k]

for k = 1,2,...,m. Here A is a scalar, B is a column vector of size n and C is a symmetric n by n matrix.

To get a solvable system, the number of points m must equal the number of unknowns in A, B and C, such that:

m = 1 + n + sum(1,2,...,n)
  = 1 + n + n * (n + 1) / 2
  = (n / 2 + 1) * (n + 1)
  = (n + 1) * (n + 2) / 2

For example, for the simplest case where n = 1 we get m = 3. For quadratic polynomials on a two-dimensional space we get n = 2 so we need m = 6 data points.

Arguments:

x
A sequence of m points, each of the same dimension n. Using numpy, this can also be given as an m by n matrix.
y
A sequence of m scalars.

Output is a tuple (A, B, C):

A
A scalar.
B
A vector of shape (n, )
C
A symmetrical matrix of shape (n, n).

Example 1:

from myokit.lib.fit import quadfit

def f(x):
    a = [7, -3, 2]
    return a[0] + a[1] * x + a[2] * x**2
x = [-2, 1, 6]
y = [f(i) for i in x]
A, B, C = quadfit(x, y)    # Returns 7, [-3] and [[4]]

Example 2:

def f(x, y):
    a = 5, 4, 3, 1, -2, -4
    return a[0] + a[1]*x + a[2]*y + a[3]*x**2 + a[4]*x*y + a[5]*y**2
x = [[-2, -1], [-1,3], [0,-1], [1,2], [2,2], [3,-4]]
y = [f(*i) for i in x]
A, B, C = quadfit(x, y)    # Returns 5, [4, 3] and [[2, -2], [-2, -8]]

Example 3:

def f(x, y, z):
    a = 3, 2, 1, -1, -6, 5, 4, 3, 2, 1
    return (a[0] + a[1]*x + a[2]*y + a[3]*z
        + a[4]*x**2 + a[5]*x*y + a[6]*x*z
        + a[7]*y**2 + a[8]*y*z
        + a[9]*z**2)
x = [[-2, -1, 0], [-1,2,3], [0,2,-1], [1,1,2], [2,2,2], [-1,3,-4],
     [4,2,-1], [4,1,2], [4,2,2], [1,2,3]]
y = [f(*i) for i in x]
A, B, C = quadfit(x, y)

myokit.lib.fit.quadfit_count(n)

Given the dimensionality n of the input space, this method returns the number of points m needed to construct a quadratic polynomial:

quadfit_count(n) = (n + 1) * (n + 2) / 2

myokit.lib.fit.quadfit_crit(A, B, C)

Given an A, B and C such as returned by quadfit(), this method calculates the critical point for the polynomial described by:

A + B' * x + (1/2) * x' * C * x

Arguments:

A
A scalar.
B
A vector of shape (n, )
C
A symmetric matrix of size (n x n).

The output is a point in n-dimensional space.

myokit.lib.fit.quadfit_minimum(A, B, C)

Given an A, B and C such as returned by quadfit(), this method tests if the polynomial is strictly convex by testing if the symmetrical matrix C is positive definite. This is done by attempting to calculate a Cholesky decomposition, which will only succeed if C is positive definite.

The method returns True if the quadratic polynomial is strictly convex.
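
Together, these functions can be used to check whether a candidate solution looks like a local minimum. The sketch below uses an illustrative quadratic score function and a hypothetical candidate xbest; the sampling strategy (small random perturbations around xbest) is an assumption, not a prescribed procedure.

import numpy as np
from myokit.lib.fit import quadfit, quadfit_count, quadfit_crit, quadfit_minimum

# Illustrative score function with a minimum at (1, -0.5).
def f(p):
    return (p[0] - 1) ** 2 + 2 * (p[1] + 0.5) ** 2

xbest = np.array([1.0, -0.5])            # hypothetical candidate solution
m = quadfit_count(2)                     # 6 points needed for n = 2
xs = [xbest + 0.01 * np.random.uniform(-1, 1, 2) for _ in range(m)]
ys = [f(x) for x in xs]
A, B, C = quadfit(xs, ys)
if quadfit_minimum(A, B, C):
    print('Locally convex, critical point at', quadfit_crit(A, B, C))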

Parameter space exploration

The following methods can be used for (parallelised) exploration of the parameter space.

myokit.lib.fit.map_grid(f, bounds, n, parallel=False, args=None)

Maps a parameter space by evaluating every point in a rectangular grid.

Arguments:

f
A function to map. The function f(x) must be callable with x a sequence of m coordinates and should return a single scalar value.
bounds
A list of m tuples (min_i, max_i) specifying the minimum and maximum values in the search space for each dimension i. The mapped space will be within these bounds.
n
The number of points to sample in each direction. If n is a scalar the function will map a grid of n points in each direction, so that the total number of points is n**m, where m is the dimensionality of the search space. Alternatively, the number of points in each dimension can be specified by passing in a length m sequence of sizes, so that the total number of points mapped is n[0] * n[1] * ... * n[m-1].
parallel
Set to True to run evaluations on all available cores.
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

Returns a tuple (x, fx) where x is a numpy array containing all the tested points and fx contains the calculated f(x) for each x.
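
For example, a 2d space could be mapped like this (the score function is illustrative):

import myokit.lib.fit as fit

# Illustrative objective on the unit square.
def score(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

bounds = [(0, 1), (0, 1)]
x, fx = fit.map_grid(score, bounds, n=[20, 30])   # 20 * 30 = 600 evaluations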

myokit.lib.fit.evaluate(f, x, parallel=False, args=None)

Evaluates the function f on every value present in x and returns a sequence of evaluations f(x[i]).

To run the evaluations on all available cores, set parallel=True. For details see ParallelEvaluator.

Extra arguments to pass to f can be given in the optional tuple args. If used, f will be called as f(x[i], *args).
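
A minimal sketch (the score function and points are illustrative):

import myokit.lib.fit as fit

# Illustrative objective.
def score(x):
    return sum(xi ** 2 for xi in x)

points = [[0, 0], [1, 2], [3, -1]]
fs = fit.evaluate(score, points)                  # sequential
# fs = fit.evaluate(score, points, parallel=True) # multi-process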

class myokit.lib.fit.Evaluator(function, args=None)

Abstract class

Interface for classes that take a function (or callable object) f(x) and evaluate it for a list of input values x. This interface is shared by a parallel and a sequential implementation, allowing easy switching between parallel or sequential implementations of the same algorithm.

Arguments:

function
A function or other callable object f that takes a value x and returns an evaluation f(x).
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

evaluate(positions)

Evaluate the function for every value in the sequence positions.

Returns a list with the returned evaluations.

class myokit.lib.fit.SequentialEvaluator(function, args=None)

Extends: Evaluator

Evaluates a function (or callable object) for a list of input values.

Runs sequentially, but shares an interface with the ParallelEvaluator, allowing parallelism to be switched on/off.

Arguments:

function
The function to evaluate.
args
An optional tuple containing extra arguments to f. If args is specified, f will be called as f(x, *args).

Returns a list containing the calculated function evaluations.

evaluate(positions)

Evaluate the function for every value in the sequence positions.

Returns a list with the returned evaluations.

class myokit.lib.fit.ParallelEvaluator(function, nworkers=None, max_tasks_per_worker=500, args=None)

Extends: Evaluator

Evaluates a single-valued function object for any set of input values given, using all available cores.

Shares an interface with the SequentialEvaluator, allowing parallelism to be switched on and off with minimal hassle. Parallelism takes a little time to set up, so as a general rule of thumb it's only worthwhile if the total run-time is at least ten seconds (as of 2015).

By default, the number of processes ("workers") used to evaluate the function is set equal to the number of CPU cores reported by Python's multiprocessing module. To override the number of workers used, set nworkers to some integer greater than 0.

There are two important caveats for using multiprocessing to evaluate functions:

  1. Processes don’t share memory. This means the function to be evaluated will be duplicated (via pickling) for each process (see Avoid shared state for details).
  2. On Windows systems your code should be within an if __name__ == '__main__': block (see Windows for details).

Arguments:

function
The function to evaluate
nworkers
The number of worker processes to use. If left at the default value nworkers=None the number of workers will equal the number of CPU cores in the machine this is run on. In many cases this will provide good performance.
max_tasks_per_worker
Python garbage collection does not seem to be optimized for multi-process function evaluation. In many cases, some time can be saved by refreshing the worker processes after every max_tasks_per_worker evaluations. This number can be tweaked for best performance on a given task / system.
args
An optional tuple containing extra arguments to the objective function.

The evaluator will keep its subprocesses alive and running until it is tidied up by garbage collection.

Note that while this class uses multiprocessing, it is not thread/process safe itself: It should not be used by more than a single thread/process at a time.

evaluate(positions)

Evaluate the function for every value in the sequence positions.

Returns a list with the returned evaluations.
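
The sketch below illustrates both caveats: the function is defined at module level so that it can be pickled for the worker processes, and the evaluation is wrapped in an if __name__ == '__main__': block for Windows compatibility. The score function itself is made up for this example.

import myokit.lib.fit as fit

# Must be defined at module level (not a lambda or closure), because it is
# pickled and sent to each worker process.
def score(x):
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

if __name__ == '__main__':
    e = fit.ParallelEvaluator(score, nworkers=4)
    fs = e.evaluate([[0, 0], [1, -2], [2, 2]])
    print(fs)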

Loss surface visualisation (2d)

The methods below can be used to visualise loss (score, penalty, fitness, etc.) surfaces in two dimensions. Because most search/exploration methods return unevenly spaced points, a Voronoi diagram representation is used.

myokit.lib.fit.loss_surface_colors(x, y, f, xlim=None, ylim=None, markers='+')

Takes irregularly spaced points (x, y, f) and creates a 2d colored Voronoi diagram.

This method is useful to visualize the output of an optimisation routine on a 2-dimensional parameter space.

Most 2d plotting methods (like matplotlib's contour and surf, or mayavi's default plotting options) accept only regularly spaced (x, y) grids as input. Unfortunately, the points sampled by an optimisation routine are typically dense in some areas and sparse in others. While a plot can still be created by interpolating these points onto a regular grid, this may add or remove vital detail. This method gets around the issue by creating a Voronoi diagram of the sampled points and shading the region around each point according to its loss-function value, giving a two-dimensional equivalent of a zero-order hold graph.

The method returns a matplotlib figure of the 2d loss surface, represented as a Voronoi diagram with colored regions.

Arguments:

x
A 1d array of x coordinates.
y
A 1d array of y coordinates.
f
A 1d array of loss function evaluations, calculated at (x, y).
xlim
A tuple (xmin, xmax) representing the possible range of x values. If set to None an automatic range will be computed.
ylim
A tuple (ymin, ymax) representing the possible range of y values. If set to None an automatic range will be computed.
markers
The markers to use to plot the sampled points. Set to None to disable.

Returns a matplotlib figure.

Note: This method requires Matplotlib to be installed.
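
As a sketch, irregularly spaced samples (here drawn at random, though in practice they might come from an optimisation run) can be plotted as follows; the score function is illustrative.

import numpy as np
import matplotlib.pyplot as plt
import myokit.lib.fit as fit

# Illustrative objective on the unit square.
def score(x):
    return (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2

p = np.random.uniform(0, 1, (200, 2))             # irregularly spaced samples
f = fit.evaluate(score, p)
fig = fit.loss_surface_colors(p[:, 0], p[:, 1], f, xlim=(0, 1), ylim=(0, 1))
plt.show()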

myokit.lib.fit.loss_surface_mesh(x, y, f)

Takes irregularly spaced points (x, y, f) and creates a mesh using the Mayavi package and Delaunay triangulation.

This method is useful to visualize the output of an optimisation routine on a 2-dimensional parameter space.

Note: this method requires Mayavi to be installed.

myokit.lib.fit.voronoi_regions(x, y, f, xlim, ylim)

Takes a set of (x, y, f) points and returns the edgepoints of the x-y Voronoi region around each point within the bounds specified by xlim and ylim.

Points and Voronoi regions entirely outside the specified bounds will be dropped. Voronoi regions partially outside the bounds will be truncated.

The third array f will be filtered the same way as x and y but is otherwise not used.

Returns a tuple (x, y, f, regions) where x, y and f are the coordinates of the accepted points and each regions[i] is a list of the vertices making up the Voronoi region for point (x[i], y[i]).

The code to extract the Voronoi regions was (heavily) adapted from: http://stackoverflow.com/a/20678647/423420

Note: This method requires SciPy to be installed.