### Raw data of the EXPensive Optimisation benchmark library (EXPObench) ###
This archive contains the raw results of several surrogate-based optimisation algorithms applied to four real-life expensive optimisation problems from the EXPObench library [link](https://github.com/AlgTUDelft/ExpensiveOptimBenchmark). Included are the computation time used by each algorithm, the time spent evaluating the expensive objective function, and the values of the decision variables and the objective at every iteration and at the optimum.
A script to plot the results is also included, as well as the plots themselves.
### Authors ###
Laurens Bliek, Arthur Guijt, Rickard Karlsson, Sicco Verwer, Mathijs de Weerdt
June 2021
Contact: l.bliek@tue.nl
### Data description ###
Four different benchmark problems are included: wind farm layout optimisation (Windwake), pipe shape optimisation (Pitzdaily), electrostatic precipitator optimisation (ESP), and a hyperparameter optimisation problem (HPO).
The following files are present:
- `plot_utils_benchmarkpaper.py` plots the results
- `.png` files with plots of the results
- files ending in `_summ.csv` give a summary of each run
- files ending in `_iters.csv` contain information about every iteration of each run
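As a minimal sketch of how the files can be combined (assuming `pandas` is installed and the current working directory holds the extracted data; the file pattern follows the naming above):

```python
import glob

import pandas as pd

# Collect all per-run summary files in the current directory.
summ_files = sorted(glob.glob("*_summ.csv"))

if summ_files:
    # Concatenate every summary into one table for cross-problem analysis.
    summary = pd.concat((pd.read_csv(f) for f in summ_files), ignore_index=True)
    print(f"Loaded {len(summary)} runs from {len(summ_files)} files")
else:
    print("No *_summ.csv files found in the current directory")
```

The same pattern with `*_iters.csv` collects the per-iteration data.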
The .csv files contain the following columns:
- approach: optimisation algorithm that was used in this run
- problem: name of the benchmark problem
- exp_id: an identifier of the run
- iter_idx: the iteration number
- iter_eval_time: the time it took to evaluate the expensive objective in this iteration
- iter_model_time: the time it took for the algorithm to suggest a new point to evaluate (includes surrogate model learning and acquisition)
- iter_fitness: objective function value obtained at the current evaluated point
- iter_x: current evaluated point
- iter_best_fitness: best objective function value found by the algorithm so far
- iter_best_x: best point evaluated so far
- total_iters: total number of iterations of this run
- total_time: total time spent on this run by both the algorithm and the expensive objective
- total_model_time: total time spent on this run by the algorithm (includes surrogate model learning and acquisition)
- total_eval_time: total time spent on this run by evaluating the expensive objective
- best_fitness: best objective function value found in this run
- best_x: best point that was evaluated in this run
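For illustration, a minimal sketch of working with these columns (the rows below are synthetic stand-ins, not actual benchmark results; only the column names follow the files). It shows how `iter_best_fitness` relates to `iter_fitness` for a minimisation problem:

```python
import pandas as pd

# Synthetic stand-in for a *_iters.csv file; real files use the same columns.
df = pd.DataFrame({
    "approach": ["randomsearch", "randomsearch", "mvrsm", "mvrsm"],
    "problem": ["esp"] * 4,
    "exp_id": [0, 0, 0, 0],
    "iter_idx": [0, 1, 0, 1],
    "iter_eval_time": [1.2, 1.1, 1.3, 1.2],
    "iter_model_time": [0.0, 0.0, 0.5, 0.6],
    "iter_fitness": [10.0, 9.5, 9.8, 8.7],
})

# The objectives are minimised, so the running best per approach
# is the cumulative minimum of the observed objective values.
df["iter_best_fitness"] = df.groupby("approach")["iter_fitness"].cummin()

# Final best value per approach (corresponds to best_fitness in _summ.csv).
best = df.groupby("approach")["iter_best_fitness"].min()
print(best)
```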
### Example code to reproduce results ###
Using the [repository](https://github.com/AlgTUDelft/ExpensiveOptimBenchmark), the data can be reproduced by running the following commands (the absolute paths refer to the authors' machines and should be adapted):
- `singularity run --writable-tmpfs CFD.sif "python3.7 /home/openfoam/expensiveoptimbenchmark/run_experiment.py --repetitions=10 --out-path=./results/windwake --max-eval=1000 --rand-evals-all=20 windwake -n=5 --file=/home/laurensbliek/example_input.json randomsearch hyperopt smac --deterministic=y donejl mvrsm bayesianoptimization"`
- `sudo singularity run --writable-tmpfs CFD.sif "python3.7 /home/openfoam/expensiveoptimbenchmark/run_experiment.py --repetitions=5 --out-path=./results/pd --max-eval=1000 --rand-evals-all=20 pitzdaily randomsearch hyperopt smac --deterministic=y donejl mvrsm bayesianoptimization"`
- `sudo singularity run --writable-tmpfs CFD.sif "python3.7 /home/openfoam/expensiveoptimbenchmark/run_experiment.py --repetitions=7 --out-path=/root/results/esp --max-eval=1000 --rand-evals-all=24 esp idone randomsearch hyperopt smac --deterministic=y donejl mvrsm bayesianoptimization"`
- `singularity run --writable-tmpfs CFD.sif "python3.7 /home/openfoam/expensiveoptimbenchmark/run_experiment.py --repetitions=10 --out-path=./results/hpo --max-eval=1000 --rand-evals-all=300 hpo --folder=/home/laurensbliek/Laurens/benchmarksurvey/data/fault/ randomsearch hyperopt smac --deterministic=y donejl mvrsm bayesianoptimization"`