Simulated Annealing

Last Updated : 08 Apr, 2024

Problem: Given a cost function f: R^n -> R, find an n-tuple that minimizes the value of f. Note that minimizing a function is algorithmically equivalent to maximizing it, since we can simply redefine the cost function as -f.

Many of you with a background in calculus/analysis are likely familiar with simple optimization for single-variable functions. For instance, the function f(x) = x^2 + 2x can be optimized by setting its first derivative equal to zero, obtaining the solution x = -1 and the minimum value f(-1) = -1. This technique suffices for simple functions with few variables. However, researchers are often interested in optimizing functions of several variables, in which case a solution can only be obtained computationally.
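
As a quick numerical sanity check (a minimal sketch, not part of the original problem statement), the same minimum can be recovered with a few steps of plain gradient descent; the starting point and step size below are arbitrary illustrative choices.

Python3
# Minimal sketch: numerically minimizing f(x) = x^2 + 2x with plain
# gradient descent. Starting point and step size are arbitrary choices.

def f(x):
    return x * x + 2 * x

def f_prime(x):
    return 2 * x + 2

x = 5.0       # arbitrary starting point
step = 0.1    # arbitrary step size
for _ in range(200):
    x -= step * f_prime(x)

print(x, f(x))  # converges to x = -1, f(-1) = -1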

One excellent example of a difficult optimization task is the chip floor planning problem. Imagine you're working at Intel and you're tasked with designing the layout for an integrated circuit. You have a set of modules of different shapes/sizes and a fixed area on which the modules can be placed. There are a number of objectives you want to achieve: maximizing the ability of wires to connect components, minimizing net area, minimizing chip cost, and so on. With these in mind, you create a cost function that takes a configuration of, say, 1000 variables and returns a single real value representing the 'cost' of that configuration. We call this the objective function, since the goal is to minimize its value.
A naive algorithm would be a complete space search: we search all possible configurations until we find the minimum. This may suffice for functions of a few variables, but for the problem we have in mind, such a brute-force algorithm would run in O(n!) time.
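
To make the brute-force idea concrete, here is a minimal sketch of a complete search over every ordering of n elements; the cost function is a throwaway placeholder, and even for n = 8 the search already visits 8! = 40,320 configurations.

Python3
# Minimal sketch of an exhaustive (complete space) search. The cost
# function is a placeholder assumption, standing in for a real objective.
from itertools import permutations


def cost(config):
    # Placeholder objective: weight each element by its position
    return sum(i * v for i, v in enumerate(config))


n = 8  # 8! = 40,320 configurations; grows factorially with n
best = min(permutations(range(n)), key=cost)
print(best, cost(best))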

Due to the computational intractability of problems like these, and other NP-hard problems, many optimization heuristics have been developed in an attempt to yield a good, albeit potentially suboptimal, value. In our case, we don’t necessarily need to find a strictly optimal value — finding a near-optimal value would satisfy our goal. One widely used technique is simulated annealing, by which we introduce a degree of stochasticity, potentially shifting from a better solution to a worse one, in an attempt to escape local minima and converge to a value closer to the global optimum. 

Simulated annealing is based on the metallurgical practice of heating a material to a high temperature and then cooling it slowly. At high temperatures, atoms shift unpredictably, and impurities are often eliminated as the material cools into a pure crystal. The simulated annealing optimization algorithm mimics this process, with the energy state corresponding to the current solution.
In this algorithm, we define an initial temperature, often set to 1, and a minimum temperature on the order of 10^-4. The current temperature is repeatedly multiplied by some fraction alpha, decreasing it until it reaches the minimum temperature. For each distinct temperature value, we run the core optimization routine a fixed number of times. The routine consists of generating a neighboring solution and accepting it with probability e^((f(c) - f(n))/T), where c is the current solution, n is the neighboring solution, and T is the current temperature; if the neighbor is at least as good, this quantity is 1 or greater and the move is always accepted. A neighboring solution is found by applying a slight perturbation to the current solution. This randomness helps escape the common pitfall of optimization heuristics: getting trapped in local minima. By occasionally accepting a worse solution than the current one, with a probability that shrinks as the increase in cost grows (and as the temperature falls), the algorithm is more likely to converge near the global optimum. Designing a neighbor function is quite tricky and must be done on a case-by-case basis, but below are some ideas for finding neighbors in locational optimization problems (a minimal sketch of one of them, together with the acceptance rule, follows the list).

  • Move all points 0 or 1 units in a random direction
  • Shift input elements randomly
  • Swap random elements in input sequence
  • Permute input sequence
  • Partition input sequence into a random number of segments and permute segments
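
As an illustration, here is a minimal sketch of one such neighbor function (a random element swap, the third idea above) together with the acceptance rule e^((f(c) - f(n))/T) described earlier; the cost function is assumed to be supplied by the specific problem.

Python3
# Minimal sketch: swap-based neighbor function and the Metropolis-style
# acceptance test. `cost` is assumed to be provided by the caller.
import math
import random


def neighbor(config):
    # Swap two random positions in a copy of the configuration
    new_config = list(config)
    i, j = random.sample(range(len(new_config)), 2)
    new_config[i], new_config[j] = new_config[j], new_config[i]
    return new_config


def accept(current, candidate, T, cost):
    # Improvements give a probability >= 1 and are always accepted;
    # worse candidates are accepted less often as the cost increase
    # grows or the temperature T falls.
    ap = math.exp((cost(current) - cost(candidate)) / T)
    return ap > random.random()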

One caveat is that we need to provide an initial solution so the algorithm knows where to start. This can be done in two ways: (1) using prior knowledge about the problem to choose a good starting point, or (2) generating a random solution. Although a random start is generally worse and can occasionally inhibit the success of the algorithm, it is the only option for problems where we know nothing about the cost landscape.
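
For the grid-placement flavor used in the implementations below, a random starting configuration can be as simple as sampling k distinct cell indices; this is a sketch only, using the same M, N, and k values as the code that follows.

Python3
# Minimal sketch: random initial configuration of k distinct cell indices
# on an M x N grid (the values mirror the constants used further below).
import random

M, N, k = 5, 5, 5
initial_config = random.sample(range(M * N), k)
print(initial_config)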

There are many other optimization techniques, but simulated annealing is a particularly useful stochastic heuristic for large, discrete search spaces in which solution quality is prioritized over running time. Below, I've included a basic framework for location-based simulated annealing (perhaps the flavor of problem to which it is most applicable). The cost function, candidate generation function, and neighbor function must be defined for the specific problem at hand, but the core optimization routine has already been implemented.

C++
// C++ code for the above approach
#include <bits/stdc++.h>
using namespace std;

// Class Solution, bundling a configuration with its error
class Solution {
    
    public:
    float CVRMSE;
    vector<int> config;
    Solution(float CVRMSE, vector<int> configuration) {
        this->CVRMSE = CVRMSE;
        config = configuration;
    }
};

// Function prototype 
Solution genRandSol();

// Global simulated annealing parameters
float T = 1;              // Initial temperature (floating point so cooling actually lowers it)
float Tmin = 0.0001;      // Temperature at which iteration terminates
float alpha = 0.9;        // Cooling factor applied after each batch of iterations
int numIterations = 100;  // Iterations per temperature value
int M = 5;                // Target plane is discretized as an M x N grid
int N = 5;
vector<vector<char>> sourceArray(M, vector<char>(N, 'X'));
vector<int> temp = {};
Solution mini = Solution((float)INT_MAX, temp);
Solution currentSol = genRandSol();

Solution genRandSol() {
    // Instantiating for the sake of compilation
    vector<int> a = {1, 2, 3, 4, 5};
    return Solution(-1.0, a);
}

Solution neighbor(Solution currentSol) {
    return currentSol;
}

float cost(vector<int> inputConfiguration) {
    return -1.0;
}

// Mapping from [0, M*N] --> [0,M]x[0,N]
vector<int> indexToPoints(int index) {
    vector<int> points = {index % M,index/M};
    return points;
}


// Runs the annealing loop and prints the best configuration found
int main(){

    // Seed the random number generator once; reseeding inside the loop
    // with the current time would produce highly correlated draws
    srand((unsigned)time(NULL));

    while (T > Tmin) {
        for (int i = 0; i < numIterations; i++) {
            // Reassigns global minimum accordingly
            if (currentSol.CVRMSE < mini.CVRMSE) {
                mini = currentSol;
            }
            Solution newSol = neighbor(currentSol);
            float ap = exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
            if (ap > (float) rand()/RAND_MAX) {
                currentSol = newSol;
            }
        }
        T *= alpha; // Decreases T, cooling phase
    }
    
    cout << mini.CVRMSE << "\n\n";
    
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            sourceArray[i][j] = 'X';
        }
    }   
    
    // Displays
    for(int index = 0; index < mini.config.size(); index++){
        int obj = mini.config[index];
        vector<int> coord = indexToPoints(obj);
        sourceArray[coord[0]][coord[1]] = '-';
    }

    // Displays optimal location
    for (int i = 0; i < M; i++) {
        string row = "";
        for (int j = 0; j < N; j++) {
            row = row + sourceArray[i][j] + " ";
        }
        cout << (row) << endl;
    }
}

// The code is contributed by Nidhi goel. 
Java
// Java program to implement Simulated Annealing
import java.util.*;

public class SimulatedAnnealing {

    // Initial and final temperature
    public static double T = 1;

    // Simulated Annealing parameters

    // Temperature at which iteration terminates
    static final double Tmin = .0001;

    // Decrease in temperature
    static final double alpha = 0.9;

    // Number of iterations of annealing
    // before decreasing temperature
    static final int numIterations = 100;

    // Locational parameters

    // Target array is discretized as M*N grid
    static final int M = 5, N = 5;

    // Number of objects desired
    static final int k = 5;


    public static void main(String[] args) {

        // Problem: place k objects in an MxN target
        // plane yielding minimal cost according to
        // defined objective function

        // Set of all possible candidate locations
        String[][] sourceArray = new String[M][N];

        // Global minimum
        Solution min = new Solution(Double.MAX_VALUE, null);

        // Generates random initial candidate solution
        // before annealing process
        Solution currentSol = genRandSol();

        // Continues annealing until reaching minimum
        // temperature
        while (T > Tmin) {
            for (int i=0;i<numIterations;i++){

                // Reassigns global minimum accordingly
                if (currentSol.CVRMSE < min.CVRMSE){
                    min = currentSol;
                }

                Solution newSol = neighbor(currentSol);
                double ap = Math.pow(Math.E,
                     (currentSol.CVRMSE - newSol.CVRMSE)/T);
                if (ap > Math.random())
                    currentSol = newSol;
            }

            T *= alpha; // Decreases T, cooling phase
        }

        //Returns minimum value based on optimization
        System.out.println(min.CVRMSE+"\n\n");

        for(String[] row:sourceArray) Arrays.fill(row, "X");

        // Displays
        for (int object:min.config) {
            int[] coord = indexToPoints(object);
            sourceArray[coord[0]][coord[1]] = "-";
        }

        // Displays optimal location
        for (String[] row:sourceArray)
            System.out.println(Arrays.toString(row));

    }

    // Given current configuration, returns "neighboring"
    // configuration (i.e. very similar)
    // integer of k points each in range [0, n)
    /* Different neighbor selection strategies:
        * Move all points 0 or 1 units in a random direction
        * Shift input elements randomly
        * Swap random elements in input sequence
        * Permute input sequence
        * Partition input sequence into a random number
          of segments and permute segments   */
    public static Solution neighbor(Solution currentSol){

        // Slight perturbation to the current solution
        // to avoid getting stuck in local minima

        // Returning for the sake of compilation
        return currentSol;

    }

    // Generates random solution via modified Fisher-Yates
    // shuffle for first k elements
    // Pseudorandomly selects k integers from the interval
    // [0, n-1]
    public static Solution genRandSol(){

        // Instantiating for the sake of compilation
        int[] a = {1, 2, 3, 4, 5};

        // Returning for the sake of compilation
        return new Solution(-1, a);
    }


    // Complexity is O(M*N*k), asymptotically tight
    public static double cost(int[] inputConfiguration){

        // Given specific configuration, return object
        // solution with assigned cost
        return -1; //Returning for the sake of compilation
    }

    // Mapping from [0, M*N] --> [0,M]x[0,N]
    public static int[] indexToPoints(int index){
        int[] points = {index%M, index/M};
        return points;
    }

    // Class solution, bundling configuration with error
    static class Solution {

        // function value of instance of solution;
        // using coefficient of variance root mean
        // squared error
        public double CVRMSE;

        public int[] config; // Configuration array
        public Solution(double CVRMSE, int[] configuration) {
            this.CVRMSE = CVRMSE;
            config = configuration;
        }
    }
}
Python3
# PYTHON CODE for the above approach
import random
import math


class Solution:
    def __init__(self, CVRMSE, configuration):
        self.CVRMSE = CVRMSE
        self.config = configuration


T = 1
Tmin = 0.0001
alpha = 0.9
numIterations = 100


def genRandSol():
    # Instantiating for the sake of compilation
    a = [1, 2, 3, 4, 5]
    return Solution(-1.0, a)


def neighbor(currentSol):
    return currentSol


def cost(inputConfiguration):
    return -1.0

# Mapping from [0, M*N] --> [0,M]x[0,N]


def indexToPoints(index):
    points = [index % M, index//M]
    return points


M = 5
N = 5
sourceArray = [['X' for i in range(N)] for j in range(M)]
mini = Solution(float('inf'), None)  # Global minimum (avoids shadowing the built-in min)
currentSol = genRandSol()

while T > Tmin:
    for i in range(numIterations):
        # Reassigns global minimum accordingly
        if currentSol.CVRMSE < mini.CVRMSE:
            mini = currentSol
        newSol = neighbor(currentSol)
        ap = math.exp((currentSol.CVRMSE - newSol.CVRMSE)/T)
        if ap > random.uniform(0, 1):
            currentSol = newSol
    T *= alpha  # Decreases T, cooling phase

# Returns minimum value based on optimization
print(mini.CVRMSE, "\n\n")

for i in range(M):
    for j in range(N):
        sourceArray[i][j] = "X"

# Displays
for obj in mini.config:
    coord = indexToPoints(obj)
    sourceArray[coord[0]][coord[1]] = "-"

# Displays optimal location
for i in range(M):
    row = ""
    for j in range(N):
        row += sourceArray[i][j] + " "
    print(row)
C#
// C# program to implement Simulated Annealing
using System;
using System.Text;

// Class solution, bundling configuration with error
public class Solution {

    // function value of instance of solution;
    // using coefficient of variance root mean
    // squared error
    public double CVRMSE;

    public int[] config; // Configuration array
    public Solution(double CVRMSE, int[] configuration) {
        this.CVRMSE = CVRMSE;
        config = configuration;
    }
}

public class GFG{
    
    // Initial and final temperature
    public static double T = 1;
 
    // Simulated Annealing parameters
 
    // Temperature at which iteration terminates
    static double Tmin = .0001;
 
    // Decrease in temperature
    static double alpha = 0.9;
 
    // Number of iterations of annealing
    // before decreasing temperature
    static int numIterations = 100;
 
    // Locational parameters
 
    // Target array is discretized as M*N grid
    static int M = 5, N = 5;
 
    // Number of objects desired
    //static int k = 5;
    
    // Generates random solution via modified Fisher-Yates
    // shuffle for first k elements
    // Pseudorandomly selects k integers from the interval
    // [0, n-1]
    public static Solution genRandSol(){
 
        // Instantiating for the sake of compilation
        int[] a = {1, 2, 3, 4, 5};
 
        // Returning for the sake of compilation
        return new Solution(-1.0, a);
    }
    
    // Given current configuration, returns "neighboring"
    // configuration (i.e. very similar)
    // integer of k points each in range [0, n)
    /* Different neighbor selection strategies:
        * Move all points 0 or 1 units in a random direction
        * Shift input elements randomly
        * Swap random elements in input sequence
        * Permute input sequence
        * Partition input sequence into a random number
          of segments and permute segments   */
    public static Solution neighbor(Solution currentSol){
 
        // Slight perturbation to the current solution
        // to avoid getting stuck in local minima
 
        // Returning for the sake of compilation
        return currentSol;
 
    }
    
    // Complexity is O(M*N*k), asymptotically tight
    public static double cost(int[] inputConfiguration){
 
        // Given specific configuration, return object
        // solution with assigned cost
        return -1.0; //Returning for the sake of compilation
    }
 
    // Mapping from [0, M*N] --> [0,M]x[0,N]
    public static int[] indexToPoints(int index){
        int[] points = {index%M, index/M};
        return points;
    }
    
    static public void Main (){
        // Problem: place k objects in an MxN target
        // plane yielding minimal cost according to
        // defined objective function
 
        // Set of all possible candidate locations
        String[,] sourceArray = new String[M,N];
 
        // Global minimum
        Solution min = new Solution(Double.MaxValue, null);
 
        // Generates random initial candidate solution
        // before annealing process
        Solution currentSol = genRandSol();
 
        // Random number generator, created once and reused
        Random rnd = new Random();
 
        // Continues annealing until reaching minimum
        // temperature
        while (T > Tmin) {
            for (int i=0;i<numIterations;i++){
 
                // Reassigns global minimum accordingly
                if (currentSol.CVRMSE < min.CVRMSE){
                    min = currentSol;
                }
 
                Solution newSol = neighbor(currentSol);
                double ap = Math.Pow(Math.E, (currentSol.CVRMSE - newSol.CVRMSE)/T);
                // NextDouble() returns a uniform value in [0, 1)
                if (ap > rnd.NextDouble()){
                    currentSol = newSol;
                }   
            }
 
            T *= alpha; // Decreases T, cooling phase
        }
        //Returns minimum value based on optimization
        Console.Write(min.CVRMSE+"\n\n");
 
        for(int i=0;i<M;i++){
            for(int j=0;j<N;j++){
                sourceArray[i,j]="X";
            }
        }
 
        // Displays
        for (int i=0;i<min.config.Length;i++) {
            int obj = min.config[i];
            int[] coord = indexToPoints(obj);
            sourceArray[coord[0],coord[1]] = "-";
        }
 
        // Displays optimal location
        for (int i=0;i<M;i++){
            StringBuilder row = new StringBuilder("");
            for(int j=0;j<N;j++){
                row.Append(sourceArray[i,j]+" ");
            }
            Console.Write(row.ToString()+"\n");
        }
            
    }
}
//This code is contributed by shruti456rawal
JavaScript
//Javascript code for the above approach
class Solution {
    constructor(CVRMSE, configuration) {
        this.CVRMSE = CVRMSE;
        this.config = configuration;
    }
}

let T = 1;
const Tmin = 0.0001;
const alpha = 0.9;
const numIterations = 100;

function genRandSol() {
    // Instantiating for the sake of compilation
    const a = [1, 2, 3, 4, 5];
    return new Solution(-1.0, a);
}

function neighbor(currentSol) {
    return currentSol;
}

function cost(inputConfiguration) {
    return -1.0;
}

// Mapping from [0, M*N] --> [0,M]x[0,N]
function indexToPoints(index) {
    const points = [index % M, Math.floor(index / M)];
    return points;
}

const M = 5;
const N = 5;
const sourceArray = Array.from(Array(M), () => new Array(N).fill('X'));
let min = new Solution(Number.POSITIVE_INFINITY, null);
let currentSol = genRandSol();

while (T > Tmin) {
    for (let i = 0; i < numIterations; i++) {
        // Reassigns global minimum accordingly
        if (currentSol.CVRMSE < min.CVRMSE) {
            min = currentSol;
        }
        const newSol = neighbor(currentSol);
        const ap = Math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
        if (ap > Math.random()) {
            currentSol = newSol;
        }
    }
    T *= alpha; // Decreases T, cooling phase
}

//Returns minimum value based on optimization
console.log(min.CVRMSE, "\n\n");

for (let i = 0; i < M; i++) {
    for (let j = 0; j < N; j++) {
        sourceArray[i][j] = "X";
    }
}

// Displays
for (const obj of min.config) {
    const coord = indexToPoints(obj);
    sourceArray[coord[0]][coord[1]] = "-";
}

// Displays optimal location
for (let i = 0; i < M; i++) {
    let row = "";
    for (let j = 0; j < N; j++) {
        row += sourceArray[i][j] + " ";
    }
    console.log(row);
}

Output
-1

X - X X X 
- X X X X 
- X X X X 
- X X X X 
- X X X X 

Time Complexity: O(C * numIterations) iterations of the core routine, where C = log(Tmin / T) / log(alpha) is the number of cooling steps (about 88 for T = 1, Tmin = 10^-4, alpha = 0.9), plus the cost of each neighbor and cost-function evaluation.
Auxiliary Space: O(M * N + k), where M and N are the dimensions of sourceArray and k is the number of objects in a configuration.
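
For completeness, here is a compact, self-contained sketch that plugs a concrete objective into the same annealing loop: it places k points on the M x N grid so that they spread out, minimizing the negative sum of pairwise Manhattan distances. The objective and the move used by the neighbor function are illustrative assumptions, not part of the implementations above.

Python3
# Self-contained sketch: the same annealing loop with an illustrative
# objective -- spread k points over an M x N grid by minimizing the
# negative sum of pairwise Manhattan distances.
import math
import random

M, N, k = 5, 5, 5


def to_point(index):
    # Map a cell index in [0, M*N) to (row, column) coordinates
    return index % M, index // M


def cost(config):
    pts = [to_point(i) for i in config]
    total = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                for idx, a in enumerate(pts) for b in pts[idx + 1:])
    return -total  # more spread out -> lower cost


def neighbor(config):
    # Move one randomly chosen object to a random unoccupied cell
    new_config = list(config)
    pos = random.randrange(k)
    free_cells = [c for c in range(M * N) if c not in new_config]
    new_config[pos] = random.choice(free_cells)
    return new_config


T, Tmin, alpha, numIterations = 1.0, 1e-4, 0.9, 100
currentSol = random.sample(range(M * N), k)
best = list(currentSol)

while T > Tmin:
    for _ in range(numIterations):
        if cost(currentSol) < cost(best):
            best = list(currentSol)
        newSol = neighbor(currentSol)
        if math.exp((cost(currentSol) - cost(newSol)) / T) > random.random():
            currentSol = newSol
    T *= alpha

print(cost(best), sorted(best))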