[Bbob-discuss] Request for BBOB algorithms.

Nikolaus Hansen Nikolaus.Hansen at lri.fr
Wed Feb 29 00:41:31 CET 2012


On Tue, 28 Feb 2012 00:21:30 +0100, Olivier Teytaud
<olivier.teytaud at gmail.com> wrote:

> Hi Daniel + thanks for this interesting link.
>
> In particular I see that this site includes higher dimensions than BBOB
> (without downloading, changing parameters and rerunning :-) ). For a
> starting project (using optimization, but not focusing on it) I must
> choose which algorithms to download/implement/test, and this test bed
> might be the best approximation for my needs (I do not consider 40-D as
> a good approximation of 40,000-D).

Interesting. May I ask what kind of 40,000-D problem you are trying to
address?


>
> Another (related) remark on BBOB is that computational costs are usually
> given in terms of number of iterations;

This is not correct: results are given in the number of function evaluations.

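To make the distinction concrete: for a population-based method, one
iteration typically costs lambda function evaluations, so the two counts
differ by exactly that factor, and reporting iterations would hide it. A
toy Python sketch (the sphere function and the naive update rule are made
up for illustration, nothing BBOB-specific):

    import numpy as np

    def sphere(x):
        return float(np.dot(x, x))

    dim, lam, budget = 10, 20, 10_000      # lam offspring per iteration
    rng = np.random.default_rng(42)
    mean, evals, iterations = np.zeros(dim), 0, 0

    while evals < budget:
        offspring = mean + 0.1 * rng.standard_normal((lam, dim))
        fitnesses = [sphere(x) for x in offspring]    # lam function evaluations
        evals += lam                                  # count evaluations, not iterations
        iterations += 1
        mean = offspring[int(np.argmin(fitnesses))]   # naive elitist update

    print(iterations, evals)                          # evals == lam * iterations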

> computation times are not given in
> the usual graphs, whereas this is an important issue; if you run 40-D only
> and don't show the evolution of the running time, you might not see that
> some algorithms can run in dimension 10,000 and some others cannot. This
> is certainly not a negligible issue.

Agreed, but I don't understand why you silently assume (or leave us with
the false impression) that computation times are not reported at all.

Let me add a general remark (as we are on the discussion list): I
think it is a great achievement of the EC community not to have adopted
wall-clock or CPU time as the relevant performance criterion, because

(1) it intermingles the algorithm with its implementation.

(2) it gives the (disastrous) incentive to spend a large amount of time
on implementation details, because they have a huge effect on this
performance measure.

(3) it renders most results incomparable (because they were obtained in
different environments).

(4) it is a highly unstable measure and very difficult to reproduce
independently.

In conclusion, it is not only of very limited use, but also has adverse
effects on how researchers spend their valuable time.
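
To illustrate points (1) and (3) concretely, here is a hypothetical Python
sketch: two implementations of the same objective function are evaluated
the same number of times; the evaluation count is identical, the CPU time
is not, so a wall-clock comparison would measure the implementation rather
than the algorithm.

    import time
    import numpy as np

    def sphere_loop(x):                     # naive pure-Python implementation
        s = 0.0
        for xi in x:
            s += xi * xi
        return s

    def sphere_vec(x):                      # vectorized implementation of the same function
        return float(np.dot(x, x))

    x = np.random.default_rng(0).standard_normal(1000)
    for f in (sphere_loop, sphere_vec):
        t0 = time.perf_counter()
        for _ in range(10_000):             # 10,000 evaluations in both cases
            f(x)
        print(f.__name__, time.perf_counter() - t0)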

However, within the setting of black-box optimization it is easy to
disregard CPU time as a performance measure; in other scenarios it might
not be as easy, or even possible.

I am entirely with you that, for a complete picture, CPU times over a large
range of dimensions should be reported, as well as the proven (or
conjectured) scaling of internal costs with the dimension. However, I don't
see a reason why this should be done on a large set of test functions.
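
How such a measurement could look, as a rough Python sketch: time the
internal cost per function evaluation over a range of dimensions, on a
single function, and read off the empirical scaling exponent. The O(dim^2)
"internal step" below is a made-up stand-in for a real algorithm's
per-evaluation bookkeeping.

    import time
    import numpy as np

    def internal_cost_per_eval(dim, n_steps=50):
        # Dummy internal step with O(dim^2) cost, standing in for the
        # (hypothetical) per-evaluation work of a real algorithm.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((dim, dim))
        x = rng.standard_normal(dim)
        t0 = time.perf_counter()
        for _ in range(n_steps):
            x = A @ x                      # matrix-vector product: O(dim^2)
            x /= np.linalg.norm(x)         # keep magnitude bounded
        return (time.perf_counter() - t0) / n_steps

    dims = [32, 128, 512, 2048]
    times = [internal_cost_per_eval(d) for d in dims]
    # Empirical scaling exponent from a log-log fit; about 2 for O(dim^2) cost.
    slope = np.polyfit(np.log(dims), np.log(times), 1)[0]
    print([f"{t:.2e}" for t in times], f"exponent ~ {slope:.2f}")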

Cheers,
Niko

-- 
Science is a way of trying not to fool yourself.

The first principle is that you must not fool yourself, and you
are the easiest person to fool. So you have to be very careful
about that. After you've not fooled yourself, it's easy not to
fool other[ scientist]s. You just have to be honest in a
conventional way after that.           -- Richard P. Feynman

Nikolaus Hansen
INRIA, Research Centre Saclay – Ile-de-France
Machine Learning and Optimization group (TAO)
University Paris-Sud (Orsay)
LRI (UMR 8623), building 490
91405 ORSAY Cedex, France
Phone: +33-1-691-56495, Fax: +33-1-691-54240
URL: http://www.lri.fr/~hansen

