So I'm working on an idea I came up with that _seems_ to work: a way to quickly optimize results around a local optimum. It is no longer a genetic algorithm, but something that will converge on a *local* optimum faster, at the cost of taking much longer to locate solutions elsewhere in the search space.
First, I begin with a broad random search for an agent of a certain fitness; in my test case, the goal is to locate an agent that completes the course in under 10 seconds. Once such an agent is found, the algorithm optimizes within its local neighbourhood.
This is not done via crossovers; it is even simpler. The next generation of agents simply starts from the fittest agent of the last generation, with its control values jittered by a certain delta. This delta is increased whenever a generation fails to produce a fitter result, and is reset to a low value whenever a fitter agent is found.
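To make that concrete, here's a rough Python sketch of the whole two-phase idea. To be clear, this isn't my actual simulation code: the `evaluate` fitness stand-in, the control-vector size, and the delta constants are all made-up placeholders.

```python
import random

# All names and constants below are illustrative assumptions,
# not the actual simulation code.
NUM_CONTROLS = 20        # length of an agent's control vector
POPULATION = 50          # agents evaluated per generation
DELTA_MIN = 0.01         # delta is reset to this on improvement
DELTA_GROWTH = 1.5       # delta multiplier when a generation stalls

def evaluate(controls):
    """Hypothetical fitness stand-in: lower 'course time' is better."""
    return sum((c - 0.5) ** 2 for c in controls)

def random_agent():
    return [random.uniform(0.0, 1.0) for _ in range(NUM_CONTROLS)]

def jitter(controls, delta):
    """Perturb every control value by up to +/- delta."""
    return [c + random.uniform(-delta, delta) for c in controls]

# Phase 1: broad random search until some agent clears the fitness bar
# (in my test case: completing the course in under 10 seconds).
FITNESS_BAR = 1.0
best = random_agent()
best_time = evaluate(best)
while best_time >= FITNESS_BAR:
    candidate = random_agent()
    t = evaluate(candidate)
    if t < best_time:
        best, best_time = candidate, t

# Phase 2: local optimization. Each generation jitters the current best;
# delta grows when a generation finds nothing fitter, and resets when it does.
delta = DELTA_MIN
for generation in range(1000):
    improved = False
    for _ in range(POPULATION):
        candidate = jitter(best, delta)
        t = evaluate(candidate)
        if t < best_time:
            best, best_time = candidate, t
            improved = True
    delta = DELTA_MIN if improved else delta * DELTA_GROWTH

print(best_time)
```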
In my tests, the results are quite striking. Yesterday's modified genetic algorithm required over 360 generations to find a really fast result; the method described above located an even faster agent in under 100 generations.
While running the simulations, watching the delta spread got me thinking about the number of agents searching for a solution.
When the delta is small, the fixed number of agents covers the search region fairly densely; when the delta gets larger with the same number of agents, many "spots" go unexplored, because the volume of the problem space grows much faster than the number of agents sampling it.
Hence, my opinion is that for my particular problem space, with a limited number of agents, the delta should grow more slowly, so as to give the agents more time to search each region before casting the net wider.
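A quick back-of-the-envelope calculation shows why: if the jitter is applied independently to each of D control values, the volume being searched grows like (2 × delta)^D while the agent count stays fixed, so the sampling density collapses fast. A tiny sketch, with made-up numbers for the dimension and agent count:

```python
# Illustrative only: 50 agents jittering a 20-dimensional control vector.
AGENTS = 50
DIMS = 20

for delta in (0.01, 0.02, 0.05, 0.1):
    volume = (2 * delta) ** DIMS   # hypercube reachable by the jitter
    density = AGENTS / volume      # samples per unit of search volume
    print(f"delta={delta:5.2f}  samples/volume={density:.3e}")
```

With 20 controls, each doubling of the delta divides the sample density by 2^20, roughly a factor of a million, which matches the intuition that the delta should grow gently.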
On the other hand, because the algorithm searches such a local space, it will take *forever* to relocate to another area with a better solution space (if such a space exists).
I will probably spend another day exploring this, but I want to move on to neural networks next :)