[Bbob-discuss] Request for BBOB algorithms.

Nikolaus Hansen Nikolaus.Hansen at lri.fr
Wed Feb 29 00:12:50 CET 2012


On Tue, 28 Feb 2012 00:09:12 +0100, Olivier Teytaud
<olivier.teytaud at gmail.com> wrote:

> Dear Niko,
>
> No, it's not that I am not entirely satisfied, or more precisely it's
> just that I think there is always room for improvement :-)
>
>  In particular, when I downloaded raw data and made a few graphs, I got
> very different results from the graphs I've seen in BBOB, so I felt I

If I understand correctly (I am not sure, though), you are saying that the
raw data and the graphs produced by the BBOB/COCO post-processing do not
agree? Is this what you meant? This would obviously be a serious issue we
would need to address! Please clarify!


> should take the time to download the code and reproduce the experiments,
> varying the experimental conditions, to see the stability of the results.
>
[...]
> :-) . Maybe I was not aware of the introduction of these
> non-zero-noise-at-optimum because the link
> http://coco.gforge.inria.fr/lib/exe/fetch.php?id=bbob-2009-downloads&cache=cache&media=download2.0:bbobdocnoisyfunctions.pdf
> on http://www.lri.fr/~hansen/publications.html is dead :-)

Thanks for pointing out the dead link.


>
>
>  Yes, I know that we can change the dimension very quickly in the code,
> obviously. The point is just that usually you don't present this kind of
> result. This is precisely why I'd like to run this, but if I write the
> interfacing myself, it will not be very fair. This is why I'd find it
> interesting, for testing the stability of algorithms, to have the exact
> conditions in which the algorithms are run.
>
> I agree that it's not E log(fitness), I was imprecise; but log(ERT) vs
> log(running time) somehow emphasizes long-term results and very precise
> results, somewhat similarly to E log(fitness - fitness*) vs E(running time).

I don't understand what you mean by log(ERT) vs log(running time).

However, to say that the log emphasizes long-term results is wrong. The
log always emphasizes values closer to zero (i.e., small values). It
emphasizes small values for Delta-fitness, as you say correctly, and
it also emphasizes small values for running time, for ERT, for the number
of f-evaluations (which are the measures we use), and so on.
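
For reference, ERT here is the expected running time to reach a given target
value, estimated (in the usual BBOB way) as the number of f-evaluations summed
over all runs (for successful runs up to the point where the target is reached,
for unsuccessful runs over the whole run) divided by the number of successful
runs. A minimal Python sketch of that estimator (illustrative only, not taken
from the COCO code, and the example numbers are made up):

    def ert(evals, successes):
        # evals: f-evaluations per run (until the target for successful runs,
        #        until the run stopped for unsuccessful runs)
        # successes: True for each run that reached the target
        n_success = sum(successes)
        if n_success == 0:
            return float('inf')   # target never reached in any run
        return sum(evals) / n_success

    # Example: three runs, two of them reach the target
    # ert([120, 500, 500], [True, False, True]) == 560.0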

Here is why: by displaying the log you put the same emphasis on running
times between 1 and 10 as between 10,000 and 100,000. Without taking the
log, the running times between 1 and 10 would be completely invisible
compared to (or dominated by) those between 10,000 and 100,000.

The log makes it possible to display all results (long and short term) in
a single picture, precisely by taking emphasis away from long-term results.
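
To make this concrete, a tiny Python sketch (illustrative only): on a log
axis every decade gets the same width, so the interval [1, 10] occupies as
much space as [10,000, 100,000], while on a linear axis it would be ten
thousand times narrower.

    import math

    for lo, hi in [(1, 10), (10_000, 100_000)]:
        linear_width = hi - lo                       # width on a linear axis
        log_width = math.log10(hi) - math.log10(lo)  # width on a log axis
        print(lo, hi, linear_width, log_width)

    # Output (up to floating-point rounding): both log widths are 1.0 decade,
    # while the linear widths are 9 and 90,000.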


>
> For the 32 bits/64 bits: some years ago, one of the organizers told me
> that some of the algorithms used by the organizers were run in 32 bits,
> and some in 64 bits.

And so what?

Cheers,
Niko


-- 
Science is a way of trying not to fool yourself.

The first principle is that you must not fool yourself, and you
are the easiest person to fool. So you have to be very careful
about that. After you've not fooled yourself, it's easy not to
fool other[ scientist]s. You just have to be honest in a
conventional way after that.
                                                 -- Richard P. Feynman

Nikolaus Hansen
INRIA, Research Centre Saclay – Ile-de-France
Machine Learning and Optimization group (TAO)
University Paris-Sud (Orsay)
LRI (UMR 8623), building 490
91405 ORSAY Cedex, France
Phone: +33-1-691-56495, Fax: +33-1-691-54240
URL: http://www.lri.fr/~hansen

