Found 28 repositories (showing 28)
JJJerome
mbt_gym is a module which provides a suite of gym environments for training reinforcement learning (RL) agents to solve model-based high-frequency trading problems such as market-making and optimal execution. The module is set up in an extensible way to allow the combination of different aspects of different models. It supports highly efficient implementations of vectorized environments to allow faster training of RL agents.
FernandoDeMeer
No description available
No description available
raehyuns
No description available
Nik212
No description available
matteoga98
No description available
No description available
matvei-lukianov
No description available
Ethan0991
Master's Thesis by Ethan Chemla, University Paris Dauphine - PSL (MSc IASD), in collaboration with the Execution Algo Team at Natixis
DieterFishLi
Optimal execution with an RL approach. Test with China's stock data.
elSomewhere
No description available
eddiesung111
A tick-level Reinforcement Learning environment for optimal trade execution. Implements PyTorch DDQN and Tabular Q-Learning agents on real-world Limit Order Book (LOB) data to minimize empirical slippage and outperform TWAP/Almgren-Chriss baselines.
Ethancohenn
No description available
MMillward2012
BSc Mathematics dissertation: Deep Q-Learning for optimal execution in financial markets
suenot
No description available
Aysel71
Comparison of different algorithms for the optimal execution problem, plus a new RK algorithm
itspaspas
RL agents for minimizing implementation shortfall. Outperforms Almgren-Chriss by dynamically adapting to market drift, volatility, and microstructure impacts.
amirhossein-izadi
RL agents for minimizing implementation shortfall. Outperforms Almgren-Chriss by dynamically adapting to market drift, volatility, and microstructure impacts.
Alqama-svg
Deep Reinforcement Learning for Optimal Trade Execution using DQN and Baseline Strategy Comparison
No description available
artmjs
RL agent for optimal order execution
No description available
koriyoshi2041
Market Microstructure & Optimal Execution — Hawkes process simulation, L2 order book, PPO RL execution agent vs Almgren-Chriss
minhannscyberspace
Risk-aware RL system for optimal trade execution, minimizing implementation shortfall under market impact and microstructure constraints.
pablowilliams
Optimal Trade Execution Research Platform — Almgren-Chriss, Obizhaeva-Wang & PPO Deep RL with live React dashboard. UCL MSc Business Analytics. Calibrated to LOBSTER AAPL order book data.
riwaj666
SplitRL is an RL-based framework that learns optimal layer split points for running deep neural networks across edge devices. It models execution and network delays using lookup tables, compares learned vs optimal splits, and visualizes latency and throughput for multiple CNN architectures.
ssrhaso
Hierarchical RL framework for intraday optimal trade execution. Implements a two-layer architecture: a Strategic PPO agent selects the execution pace, while a Tactical DQN agent optimises order slicing and timing. All experiments are conducted on simulated market data with reproducible seeds for controlled benchmarking. Outperforms traditional algorithms.
XabiBlaz
Optimal algorithmic trade execution with RL: Almgren–Chriss, TWAP/VWAP, and a PPO agent that learns atop the Almgren–Chriss framework inside a stochastic impact environment with hard liquidation constraints; CVaR/mean costs benchmarked end-to-end, with Optuna tuning plus reproducible MLflow-logged metrics, artifacts, and pipeline.
All 28 repositories loaded
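Many of the repositories above benchmark their RL agents against the Almgren-Chriss model. As background, here is a minimal sketch of the Almgren-Chriss closed-form optimal liquidation trajectory; the function name and parameters are illustrative and not taken from any listed repository:

```python
import math

def almgren_chriss_trajectory(shares, horizon, n_steps, risk_aversion, sigma, eta):
    """Remaining-inventory schedule x(t) = X * sinh(kappa*(T-t)) / sinh(kappa*T).

    shares        : initial position X to liquidate
    horizon       : liquidation horizon T
    n_steps       : number of trading intervals
    risk_aversion : lambda, penalty on execution-cost variance
    sigma         : per-period price volatility
    eta           : temporary market-impact coefficient
    """
    # kappa controls urgency: higher risk aversion or volatility -> sell faster;
    # as kappa -> 0 the schedule approaches the linear TWAP trajectory.
    kappa = math.sqrt(risk_aversion * sigma ** 2 / eta)
    times = [horizon * k / n_steps for k in range(n_steps + 1)]
    return [
        shares * math.sinh(kappa * (horizon - t)) / math.sinh(kappa * horizon)
        for t in times
    ]
```

The trajectory starts at the full position, decays monotonically, and reaches zero at the horizon, which is the baseline behaviour the RL agents in these projects try to improve on.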