[Bbob-discuss] Request for BBOB algorithms.
Nikolaus.Hansen at lri.fr
Mon Feb 27 18:31:21 CET 2012
thanks for your suggestion. Let me make a few comments and rectify some of
your claims, as you seem not to be very familiar with the COCO/BBOB framework.
On Fri, 24 Feb 2012 09:23:22 +0100, Olivier Teytaud
<olivier.teytaud at gmail.com> wrote:
> Hi all;
> incidentally, maybe it would be good that all implementations were
> made freely available,
> including the interfacing with BBOB ?
> (I know this is not what you ask for, Kevin; I just take this
> opportunity for discussing this)
> Implementations sometimes differ from published algorithms due to many
> small tricks;
> results would be by far easier to reproduce if it was a constraint of
> BBOB that a submitted entry was accompanied by a .tar.gz so that
> anyone can re-run the experiments.
indeed, this year we strongly encourage participants to also submit
code that can reproduce the experiments. It will, for the time
being, not be a hard constraint though. Maybe we should give certificate
labels: green if the code runs and reproduces the results, yellow
when it was submitted but not yet tested, etc. ;-)
> It would also be helpful for reproducing the tests in frameworks which
> are not part of BBOB, like
> - high-dimension (10 000, 100 000...)
Fortunately, BBOB results show the scaling of the algorithms with the
dimension (which makes it a bit distinct from most other frameworks).
This is IMHO the most useful display and we have so far seen results
up to 40-D.
If you want to check whether the observed scaling holds in larger
dimensions you can do so with COCO/BBOB by changing one line in the
calling script. The only (but serious) obstacle will be computational
resources. IMHO the best way to do so is to use dimensions like, e.g.,
2, 3, 5, 10, 20, 40, 80, 160, 320, 640, 1280, ..., 40960 (again a
one-line change). This will take less time than a single experiment in
100,000-D, but you collect much more useful data, even if you run out
of time or resources before the experiment is finished.
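The doubling sequence above is easy to generate programmatically; a minimal sketch (the variable name `dimensions` is mine for illustration, not the actual name in the COCO calling script):

```python
# Build the dimension list 2, 3, 5, 10, 20, 40, ..., 40960:
# start from the usual small BBOB dimensions and keep doubling
# the largest entry until the target maximum is reached.
dimensions = [2, 3, 5]
while dimensions[-1] < 40960:
    dimensions.append(dimensions[-1] * 2)

print(dimensions)  # 2, 3, 5, 10, 20, 40, 80, ..., 40960
```

Running the dimensions in increasing order also means that a time-out still leaves you with complete data for all smaller dimensions.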
Let me emphasize this: I discourage doing experimentation *only* in large
dimension, and I even believe it is a serious experimental mistake, unless
it is impossible to do otherwise (say, the function is not available in
smaller dimensions).
I don't think there are many non-separable functions in the BBOB
testbed you can solve in reasonable time in 100,000-D (and on separable
functions the exercise does not look terribly interesting).
The post-processing of such data would demand a few modifications
in the provided code (it's open source). If we get more than two
submissions with dimension up to at least 160, we will help
to make the necessary adaptations in the post-processing for these
ambitious submitters ASAP.
You also might want to check out the CEC special session on large-scale
global optimization, if this is your particular interest.
> - noisy optimization (without variance decreasing to zero)
we do have 30 noisy functions; ten of them have noise with constant
variance, i.e. the variance does not decrease to zero. The eight most
difficult functions have a variance that decreases to zero rather slowly.
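To illustrate the distinction (these are toy models of my own, not the actual BBOB noise models): constant-variance noise perturbs the function value by the same amount everywhere, while vanishing-variance noise shrinks as the optimum (here at f = 0) is approached.

```python
import random

def noisy_constant_variance(f_value, sigma=1.0):
    # Toy model: additive Gaussian noise with constant variance,
    # independent of how close f_value is to the optimum.
    return f_value + random.gauss(0.0, sigma)

def noisy_vanishing_variance(f_value, alpha=0.5):
    # Toy model: multiplicative Gaussian noise; its variance
    # shrinks to zero as f_value approaches an optimum at zero.
    return f_value * (1.0 + alpha * random.gauss(0.0, 1.0))
```

With vanishing variance, measurements become arbitrarily reliable near the optimum; with constant variance, the noise floor never disappears, which is what makes those ten functions harder in the late phase of a run.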
> - other criteria (e.g. expected fitness rather than expected
> log-fitness; this makes quite a
> big difference)
we use neither expected fitness nor expected log-fitness.
I feel that both are meaningless in general. We rather assume that
fitness is defined on an ordinal scale only (which implies that
taking the expectation is meaningless, as ordinal data cannot be
added). We use the expected number of function evaluations (which we
denote as expected running time, ERT) as our performance measure.
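For concreteness, a minimal sketch of how ERT is estimated from a set of trials (the function name is mine, not COCO's): the evaluations of all trials, successful or not, are summed and divided by the number of trials that reached the target.

```python
def expected_running_time(evals, successes):
    """Estimate the ERT from a set of independent trials.

    evals     -- function evaluations used in each trial
    successes -- whether each trial reached the target f-value
    """
    n_success = sum(successes)
    if n_success == 0:
        return float('inf')  # target never reached
    # Unsuccessful evaluations still count in the numerator.
    return sum(evals) / n_success

# Example: three trials, two reached the target.
# ERT = (100 + 200 + 300) / 2 = 300 evaluations.
print(expected_running_time([100, 200, 300], [True, True, False]))
```

Because only function-evaluation counts enter the measure, it stays meaningful under any monotone transformation of the fitness values.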
> - number of iterations reduced, for removing the problem of 32bits vs
> 64bits libraries
I do not know of such a problem with BBOB, so please let us know what
problem you are talking about!
In any case, the number of iterations is not regulated in
COCO/BBOB; it is entirely up to the algorithm designer and experimenter.
You can submit an algorithm that only does two iterations, if you wish.
This again differentiates the COCO/BBOB experimental setup from most
other frameworks.
If an algorithm is not designed for a large number of iterations,
we suggest doing several independent runs within each trial, unless
the function is solved. This increases the "overall number of runs"
and the amount of collected data to a comparable level, as well as the
chance to solve the more difficult functions. The downside is that
termination criteria become part of the algorithm design.
Arnold Neumaier suggested (AFAIR) that this should rather be
interpreted as an advantage, because it forces algorithm designers to
contemplate termination issues (which are of some practical
relevance).
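The independent-restarts-within-a-trial policy can be sketched as a thin wrapper around a single-run optimizer; `solve_once` and its return convention are hypothetical placeholders of mine, not part of the COCO API:

```python
def run_with_restarts(solve_once, budget, target):
    # Launch independent runs of the optimizer within one trial until
    # the target f-value is reached or the evaluation budget is spent.
    # solve_once(max_evals) is assumed to return (best_f, evals_used).
    total_evals = 0
    best_f = float('inf')
    while total_evals < budget and best_f > target:
        f, used = solve_once(budget - total_evals)
        total_evals += used
        best_f = min(best_f, f)
    return best_f, total_evals
```

The termination criterion of each inner run (when `solve_once` decides to give up and return) is exactly the design decision the restart setup forces into the open.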
> I know it's too late for BBOB 2012, but I guess it would increase by
> far the diffusion of BBOB if something like that was done.
I am not sure I understand what you mean by "something like that".
Your original point of reproducibility is well taken, yet not entirely
in the hands of the BBOBies.
I am sensing from your comments that you are, for some reason, not
entirely satisfied with the BBOB functions. In this case, I have good
news: we plan to make it easier to link other testbeds with the
COCO/BBOB framework (it is certainly possible already now).
> Best regards,
> 2012/2/24, Kevin Tierney <kevt at itu.dk>:
>> Dear all,
>> I am doing research into algorithm portfolios on continuous problems,
>> and am trying to assemble a portfolio of high-quality algorithms,
>> particularly evolutionary methods to complement algorithms such as those
>> in the NLopt library.
>> If anyone is willing to send me source code that they have used for past
>> competitions, I would be extremely grateful.
>> I will, of course, keep your source code confidential, and I also want
>> to make clear that I will not be entering the 2012 BBOB competition.
>> Thank you for your assistance,
>> Kevin Tierney
>> PhD Student
>> IT University of Copenhagen
>> bbob-discuss mailing list
>> bbob-discuss at lists.lri.fr
Science is a way of trying not to fool yourself.
The first principle is that you must not fool yourself, and you
are the easiest person to fool. So you have to be very careful
about that. After you've not fooled yourself, it's easy not to
fool other[ scientist]s. You just have to be honest in a
conventional way after that.  -- Richard Feynman
INRIA, Research Centre Saclay – Ile-de-France
Machine Learning and Optimization group (TAO)
University Paris-Sud (Orsay)
LRI (UMR 8623), building 490
91405 ORSAY Cedex, France
Phone: +33-1-691-56495, Fax: +33-1-691-54240