The Cowork Agent for Everything — trainable advertising AI + 14 platform MCP servers + agent skills. Based on minimind (42k stars). Train from zero in 2 hours.
Stars: 3 | Forks: 0 | Watchers: 3 | Open issues: 2
Repository health: no package.json found; likely not a Node.js project.
Recent commits:
ff170d2  fix: rename mcp/ to mcp_servers/ to resolve namespace collision (#3)
ad414f9  Add MiniAgent-494M (Qwen2.5 multilingual base), update README with all 3 models
aa9331b  Add continuous learning, RAG, blog training data, Qwen2.5-0.5B support
e4ac279  Update README with real trained models, training results, and honest quality assessment
2f1198c  Fix combined mode OOM: reduce batch_size to 8 and max_length to 320 for 104M model on 20GB
1ca8180  Add combined training mode: minimind base + advertising pretrain + SFT
110e360  Add practitioner expertise from skills, hub, and agent specs to training data
fcf6a39  Add 60+ platform training data: Search, Social, DSP, Retail, CTV, Mobile, Native, Analytics
a20c197  Add GGUF converter, update README with HuggingFace model link
d8c4476  Fix SFT OOM: reduce batch_size to 8 and max_length to 512 for 20GB GPUs
e6f978b  Move import math to top of pretrain.py (was only inside main())
9e2b0c3  Rewrite GQAttention.forward with consistent dimension ordering
098b876  Fix RoPE apply_rope: index by seq_len (dim 2) not n_heads (dim 1)
d5b5c1a  Fix device detection: send GPU info to stderr, only return device string
875fb21  Fix training pipeline: real tokenizer, expanded data, one-command GPU training
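The RoPE fix (098b876) is about which axis the rotation cache is indexed by: rotary embeddings rotate by sequence position, so the precomputed cos/sin table must be sliced along the seq_len axis (dim 2 of a `(batch, n_heads, seq_len, head_dim)` tensor), not the n_heads axis (dim 1). A minimal NumPy sketch of that indexing, using the common rotate-half formulation; function names and shapes are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

def rope_cache(max_seq_len, head_dim, base=10000.0):
    # One row of rotation angles per sequence position.
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    angles = np.outer(np.arange(max_seq_len), inv_freq)   # (seq, head_dim/2)
    angles = np.concatenate([angles, angles], axis=-1)    # (seq, head_dim)
    return np.cos(angles), np.sin(angles)

def apply_rope(x, cos, sin):
    # x: (batch, n_heads, seq_len, head_dim).
    # Slice the cache by seq_len (dim 2 of x), NOT by n_heads (dim 1).
    seq_len = x.shape[2]
    cos = cos[:seq_len][None, None, :, :]   # broadcast over batch and heads
    sin = sin[:seq_len][None, None, :, :]
    half = x.shape[-1] // 2
    rotated = np.concatenate([-x[..., half:], x[..., :half]], axis=-1)
    return x * cos + rotated * sin
```

Slicing by the wrong dimension "works" silently whenever n_heads happens to be <= max_seq_len, which is why this kind of bug survives until outputs are inspected: position 0 should be an identity rotation, and the rotation should preserve per-token norms.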
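The device-detection fix (098b876's neighbor, d5b5c1a) follows a standard CLI pattern: diagnostics go to stderr so stdout carries only the device string and stays safe to capture in a shell pipeline. A hypothetical sketch of that pattern; the script name and exact messages are assumptions:

```python
import sys

def detect_device():
    # Diagnostics to stderr; the return value is the only thing
    # a caller should parse.
    try:
        import torch
        if torch.cuda.is_available():
            print(f"GPU: {torch.cuda.get_device_name(0)}", file=sys.stderr)
            return "cuda"
        print("CUDA not available; using CPU", file=sys.stderr)
    except ImportError:
        print("torch not installed; using CPU", file=sys.stderr)
    return "cpu"

if __name__ == "__main__":
    print(detect_device())  # stdout: just "cuda" or "cpu"
```

With this split, something like `DEVICE=$(python detect_device.py)` captures a clean device string while the GPU info still shows up on the terminal.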