Over the holidays I worked on implementing my algorithm in my code. Since most of the subroutines were closely linked and used large amounts of data, it was difficult to test anything until it was all done, so I wrote all the code first, then tested and bug-fixed it. Once it was working as intended, I realised the computer couldn't handle the amount of data I was pulling through (200 million numbers and growing exponentially). That's why I need a moderating function: the genome should encode a mapping from inputs to outputs, rather than storing an output for every input. This means far less data but much more math, specifically large matrix math that I don't have strong enough knowledge of. Luckily there is a library called NEAT-python that does a lot of it for me, so I'll be using its functions to better implement my neural network. That means moving away from my own algorithm a bit, but otherwise it won't work.
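Roughly, the NEAT-python workflow I'm aiming for looks like the sketch below: each genome is turned into a network, which is a function from inputs to outputs, so only the network itself gets stored rather than a table of results. This is just a rough sketch based on the library's standard API; the config file path, training data and fitness function are placeholders, not my actual code.

```python
import neat

# Placeholder training data: (inputs, expected output) pairs, not my real dataset.
training_data = [((0.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        # The genome encodes a network (a function from inputs to outputs),
        # so only the network's structure and weights are kept in memory,
        # not an output for every possible input.
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = 0.0
        for inputs, expected in training_data:
            output = net.activate(inputs)
            genome.fitness -= (output[0] - expected) ** 2

# 'neat-config.txt' is an assumed config file name; it would define the
# number of inputs/outputs, population size, mutation rates, etc.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'neat-config.txt')
population = neat.Population(config)
winner = population.run(eval_genomes, 50)  # evolve for up to 50 generations
```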
This past week I've been working on fixing issues with the program to make it run as smoothly as possible. I didn't think I would get the stats-for-nerds page done and was going to scrap it so I could focus on finishing the documentation and fixing the minor issues to make the program more polished, but I decided to do it anyway and spent the past day and a half working on it. The notable issues during its development were mainly around gathering the data; along the way I found some other errors in how I was collecting data, such as not updating the data files when quitting the program. But I managed to finish the page as well as the rest of the program. My final solution was reasonably successful compared with my proposal and design; the only things I wasn't able to achieve were customisation of genome numbers, generation size and the like, but that would have required a lot of digging through files in the libraries I was using to edit th...
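The sort of fix I mean for the quitting problem looks something like the sketch below: register a handler so the collected stats get written to disk whenever the program exits, not just after a normal run. The file name and stats dictionary here are placeholders, not my actual data structures.

```python
import atexit
import json

# Placeholder stats dictionary standing in for the data the stats page collects.
stats = {"generations": 0, "best_fitness": None}

def save_stats():
    # Flush the collected data to disk so quitting the program
    # doesn't lose or leave stale data in the file.
    with open("stats.json", "w") as f:
        json.dump(stats, f)

# Runs automatically when the program exits, including on quit.
atexit.register(save_stats)
```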