For the last month I've been spending a short while each day coding up an exploration into Genetic Algorithms. Being the King of Procrastination, I've decided to blog about what I completed over the weekend, and my next steps.
So, basically, my problem is finding the optimal parameter values for agents to traverse a short course in the shortest time. Several parameters are available, such as maximum forward speed, maximum rotation speed, length of the lookahead sensors, spread of the sensors, etc.
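To make the setup concrete, here is a minimal sketch of what such a parameterized genome might look like. The parameter names and ranges below are my illustrative guesses, not the simulation's actual values:

```python
import random

# Hypothetical parameter ranges for an agent's genome; the real
# simulation's names, units, and bounds will differ.
PARAM_RANGES = {
    "max_forward_speed": (0.5, 5.0),      # distance units per second
    "max_rotation_speed": (10.0, 180.0),  # degrees per second
    "sensor_length": (1.0, 10.0),         # lookahead distance
    "sensor_spread": (5.0, 90.0),         # angle between sensors, degrees
}

def random_genome():
    """Create a random first-generation agent genome."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}
```

The first generation is then just a list of `random_genome()` calls, which is exactly why a lucky first generation can already contain a fast agent.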
We could probably figure out a good setting by hand, but the GA will explore this big problem space on its own, and perhaps evolve a solution that is not intuitive but works very well.
I got the GA working on Monday, and, depending on the run, it is able to seek out solutions to the problem. One of the lucky runs found a really fast solution, but the exploration of the space was not as focused as I'd like.
One of the reasons is the way I generate the next generation of agents: they are simply created by roulette wheel selection of the fittest. It seems I need to look more carefully at the roulette function, though, because the random function I am using _may_ follow a gaussian rather than a uniform distribution, which would bias which parents are chosen. More checking is needed.
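For reference, a standard roulette wheel (fitness-proportionate) selection looks something like this sketch. The key point is that the pick must come from a uniform source; Python's `random.uniform` is uniform, so no parent is favoured beyond its fitness share, whereas a gaussian source would over-select whichever parents sit near the middle of the wheel:

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate (roulette wheel) selection.

    Spin a uniform pick over the total fitness, then walk the
    population accumulating fitness until the pick is reached.
    """
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)  # must be uniform, not gaussian
    running = 0.0
    for agent, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return agent
    return population[-1]  # guard against floating-point drift
```

Over many spins, an agent holding 90% of the total fitness should be chosen roughly 90% of the time; that's an easy sanity check to run against whatever random source the GA currently uses.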
While pondering this, I have decided to try a more "directed" kind of GA:
First, I will let the GA run, then choose an "elite" agent, one that fits a certain set of criteria. In my case, my target is to have agents complete the course in under 5 seconds (the GA I have now evolved agents that did it in 4.6 seconds). So, I'll just evolve them as before, *until* I get an agent that can not only complete the course, but do it in under 10 seconds.
(Assuming the GA's run is not so lucky that it gets under 5 seconds on the first run. As the first generation is randomly generated, I've seen some really fast results in generation one... but I digress.)
This "elite" agent is then kept on, and used as a starting point for future generations. It does not directly enter the pool, but will be used as the one of the two agents to be mated for the next generation. Talk about some serious polygamy here.
The current elite is only replaced by one of its descendants when that descendant has a higher fitness score, i.e. is faster.
This of course leads to a more localized search of the solution space, since the population will evolve around the elite agent. To search further, mutation is probably the only way to jump to another, possibly more optimal, area.
At the moment, my mutation rate varies with the spread of the agents' scores: if the agents have similar scores, the mutation rate jumps higher in order to search further afield; if the scores are widely spread, a lower mutation rate is used.
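One way to implement that spread-based rule is to compare the coefficient of variation of the scores against a threshold. The rates and threshold below are illustrative numbers, not the values my GA actually uses:

```python
import statistics

def adaptive_mutation_rate(scores, low=0.01, high=0.20, threshold=0.1):
    """Pick a mutation rate from the spread of the population's scores.

    Similar scores (a converged population) -> high rate, to push
    the search further out; widely spread scores -> low rate, to
    let selection keep sorting the candidates.
    """
    mean = statistics.mean(scores)
    if mean == 0:
        return high
    spread = statistics.pstdev(scores) / abs(mean)  # coefficient of variation
    return high if spread < threshold else low
```

Normalizing by the mean keeps the rule independent of the absolute scale of the fitness scores, so it behaves the same whether fitness is measured in seconds or in points.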
The theory I have right now is to use a low mutation rate once an elite agent is found, so the GA explores the nearby solution space for better solutions. As time goes on, the mutation rate will increase if no improved solutions are found; this should, in theory, let the GA gradually search further afield. Perhaps, after X generations, the elite idea can even be dropped and the GA can go back to the original exploration of the space without being "led" by an elite agent.
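That stagnation-driven schedule could be as simple as the sketch below; every constant in it is a guess for illustration, and the caller decides what to do when the elite gets dropped:

```python
def mutation_schedule(base_rate, generations_since_improvement,
                      growth=0.01, max_rate=0.5, drop_elite_after=50):
    """Grow the mutation rate while the elite stagnates.

    Starts low right after an elite improves, climbs a little each
    generation without improvement (capped at max_rate), and after
    drop_elite_after stale generations signals the caller to discard
    the elite and return to free, undirected exploration.
    """
    rate = min(base_rate + growth * generations_since_improvement, max_rate)
    drop_elite = generations_since_improvement >= drop_elite_after
    return rate, drop_elite
```

Resetting `generations_since_improvement` to zero whenever a new elite takes over gives exactly the low-then-rising behaviour described above.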