A federated learning algorithm, running on DCP, in which all workers return at the same time, regardless of how much they have trained.
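The synchronous, time-bounded scheme described above might be sketched as follows. This is a minimal illustration under stated assumptions, not the repo's actual code: the names `workerTrain`, `federatedAverage`, and `stepFn`, and the choice to weight the average by completed steps, are all hypothetical.

```javascript
// Sketch: every worker trains until a shared deadline and returns whatever
// it has, so all workers "return at the same time" regardless of progress.

function workerTrain(weights, stepFn, deadlineMs) {
  let w = weights.slice();
  let steps = 0;
  const start = Date.now();
  // Train until the wall-clock deadline; slow workers simply do fewer steps.
  while (Date.now() - start < deadlineMs) {
    w = stepFn(w); // one local training step (hypothetical)
    steps++;
  }
  return { weights: w, steps };
}

function federatedAverage(results) {
  // Average the returned weight vectors, weighting each worker by the
  // number of steps it completed (an assumption; plain averaging also works).
  const total = results.reduce((sum, r) => sum + r.steps, 0);
  const dim = results[0].weights.length;
  const avg = new Array(dim).fill(0);
  for (const r of results) {
    const wt = r.steps / total;
    for (let i = 0; i < dim; i++) avg[i] += wt * r.weights[i];
  }
  return avg;
}
```

With step-weighted averaging, a worker that finished three times as many steps contributes three times the weight to the aggregate.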
Stars: 1 · Forks: 0 · Watchers: 1 · Open Issues: 0
48 commits
48c3d80: V4 had issues transmitting parameters to workers; fixed the issue by using the shared input parameter rather than the slice parameter.
ed828ff: Changed main.js to give the training time and work function on completion. Switching testing to the V4 scheduler, highlighting the results collected on V3.
84a4f46: A third of the way through the first trial of the third work function; committing before I leave.
fe2c979: Fixed a memory leak issue that was crashing long-running workers. Completed the first trials for the first two work functions.
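The parameter-passing fix described in the commit log (shared input parameter instead of the slice parameter) can be illustrated with a toy dispatch loop. This is a local simulation, not DCP code: `dispatch`, `workFn`, and `sharedParams` are hypothetical names, and the real project presumably hands this to DCP's job API rather than a local `map`.

```javascript
// Toy illustration of the fix: model parameters travel in a shared argument
// delivered identically to every worker, while each slice carries only that
// worker's portion of the input.

function dispatch(slices, sharedParams, workFn) {
  // Every call receives its own slice plus the same sharedParams object,
  // mirroring the "shared input parameter" pattern from the commit message.
  return slices.map((slice) => workFn(slice, sharedParams));
}

// Hypothetical work function: scale the slice's sum by a shared learning rate.
const workFn = (slice, params) => slice.reduce((a, b) => a + b, 0) * params.lr;

const results = dispatch([[1, 2], [3, 4]], { lr: 0.1 }, workFn);
```

Keeping the parameters out of the slices means they are serialized once per job rather than once per slice, which is presumably why the shared-input route transmitted them reliably.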