I've been spending a lot of time with the Walk Forward optimizer lately and was wondering how best to automate finding the optimal time periods for the walk forward.
Setting up the parameters in the start;end;step format is great, but having to manually step the optimization period from a small value to a larger one on every run means I need to keep records of each run myself... is there an easier way to try multiple optimization time periods, in the same way we sweep through parameters?
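In case it helps frame the question: if the platform exposed any scriptable hook for launching a run, the period sweep could be automated externally in the same start;end;step spirit. A minimal sketch in Python, where `run_walk_forward` is a hypothetical placeholder (here stubbed with a dummy formula just so the loop runs), not a real API:

```python
from itertools import product

def run_walk_forward(in_sample_days, out_of_sample_days):
    """Hypothetical hook into whatever actually drives the platform
    (command-line runner, exported project file, etc.).
    Returns a single performance metric, e.g. profit factor."""
    # Dummy formula so the sketch executes; replace with a real call.
    return 1.0 + in_sample_days / (in_sample_days + out_of_sample_days)

# Sweep in-sample / out-of-sample lengths in start;end;step style,
# just like a normal parameter sweep.
in_sample_grid = range(60, 241, 60)    # 60;240;60 days
out_sample_grid = range(20, 61, 20)    # 20;60;20 days

results = []
for is_len, oos_len in product(in_sample_grid, out_sample_grid):
    pf = run_walk_forward(is_len, oos_len)
    results.append((is_len, oos_len, pf))  # record every run automatically

best = max(results, key=lambda r: r[2])
print("best in-sample/out-of-sample:", best[:2], "metric:", best[2])
```

The point is just that the period lengths become one more axis of the grid, so the run records fall out of the loop for free instead of being maintained by hand.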
If not, then how best to "guess" what the best time periods are for the walk forward? Is there a study or white paper somewhere showing that data beyond a certain time frame is considered irrelevant? (I've read a lot of papers along these lines and haven't seen anything like that yet...)
Historically, when I've optimized, I took slices of the time period (e.g. each month in a year) and ran the backtests on each month, giving me a sample of 12 data points for the performance. Do this over 3 years and you have 36 data points: a legitimate statistical sample (i.e. avg PF, std dev PF, avg period profit, etc.). This has been a great way to generalize a model to historical data. Currently, though, it requires running 36 different backtests to get the data. This is where an open API would be very helpful - it would let people build their own flavor of optimization, which is a very broad academic topic!
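Once the 36 per-month results exist, the aggregation itself is trivial to script; here's a sketch of the kind of summary I mean, with made-up monthly profit factors and profits purely for illustration:

```python
import statistics

# Hypothetical per-month backtest results (made-up values for illustration).
monthly = [
    {"pf": 1.4, "profit": 1200.0},
    {"pf": 0.9, "profit": -300.0},
    {"pf": 1.7, "profit": 2100.0},
    # ... remaining months would go here ...
]

pfs = [m["pf"] for m in monthly]
profits = [m["profit"] for m in monthly]

summary = {
    "avg_pf": statistics.mean(pfs),
    "std_pf": statistics.stdev(pfs),          # sample std dev across months
    "avg_profit": statistics.mean(profits),
    "pct_profitable_months": sum(p > 0 for p in profits) / len(profits),
}
print(summary)
```

The pain point is producing the `monthly` list in the first place - that's the part that currently takes 36 manual backtests.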
Any help is appreciated and happy new year!