Now in early access — join the waitlist

Deploy your ML app
in 60 seconds.

No Docker. No AWS. No DevOps headaches.
For data scientists, ML engineers, and any Python developer who just wants to ship.

mlship deploy
$ mlship deploy my_fraud_detector/
────────────────────────────────────
Detecting environment... Python 3.11 ✓
Installing dependencies... done ✓
Building container... ready ✓
Deploying to edge... live ✓
────────────────────────────────────
✓ Live at: fraud-detector.mlship.dev

Free forever plan · No credit card · Ships in weeks

// the problem

You build in hours.
Deployment takes days.

01 /
🐳

Docker is a nightmare

You're a data scientist, not a DevOps engineer. Containerization, environment configs, port mappings — all just to share your model.

02 /
☁️

AWS is overkill

IAM roles, EC2 instances, load balancers — you just want a live URL. Instead you lose 3 days in AWS documentation.

03 /
🔗

GitHub links don't work

Most ML builders give up and share a repo nobody can run. Your brilliant model never reaches the people who need it.

04 /
😵

HuggingFace breaks endpoints

You deploy to HuggingFace Spaces and your connection link just doesn't appear. No clear URL, no clear endpoint. Pure guesswork.

// early feedback

Real people. Real pain.
Real validation.

💬
Developer
Discord — Tech Community
"I used HuggingFace to deploy my MCP server but it didn't give me the connection link. I had no idea how to share it with anyone."
What we built from this:
✓ Clear connection endpoints
✓ GitHub auto-deploy
✓ Python developers added as audience
📱
6 ML Builders
IIT WhatsApp Group — India
"Signed up immediately after seeing the landing page. Asked questions about GPU support, community templates, and student use cases."
What we built from this:
✓ Community templates added
✓ CPU/GPU selection
✓ Students & hackathon builders targeted
// the solutions

Everything a Python builder needs.
In one place.

01
🚀 Launching Soon

One-Click Deployment

You built a brilliant ML app in Streamlit or Gradio. Now what? Most data scientists spend 3 days fighting Docker, AWS, and servers just to share it. With MLShip, follow 3 simple steps and get a live URL in 60 seconds. No DevOps knowledge required. Ever.

📁
01
Upload
⚙️
02
Configure
🚀
03
Deploy
  • ✓ Auto-reads requirements.txt, pyproject.toml & environment.yml
  • ✓ Clear live URL & connection endpoint — no guessing ← Discord
  • ✓ Real-time deployment logs & build status ← LinkedIn
  • ✓ Community templates — deploy a starter app instantly ← WhatsApp + Kaggle
  • ✓ CPU/GPU selection before deploying ← WhatsApp + Kaggle
  • ✓ Clear error messages in plain English ← Kaggle + LinkedIn
  • ✓ Works with Streamlit, Gradio & any Python app
mlship deploy
$ mlship deploy ./fraud_detector
──────────────────────────────
Reading requirements.txt...
Installing packages...
Deploying to edge...
──────────────────────────────
✓ Live: fraud-detector.mlship.dev
✓ Endpoint: api.mlship.dev/fraud-detector
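Any plain Python entry point is enough to deploy. A toy fraud-scoring app might look like the sketch below — everything here is illustrative, and the scoring rule is a stand-in for a real trained model.

```python
# Toy stand-in for a trained fraud model; the rule and the 0.8
# threshold are illustrative, not a real detector.
def fraud_score(amount: float, recent_txns: int) -> float:
    """Score a transaction between 0.0 and 1.0."""
    return round(min(1.0, amount / 10_000 + 0.05 * recent_txns), 2)

def predict(amount: float, recent_txns: int) -> dict:
    """JSON-friendly prediction, ready to wrap in a Streamlit or Gradio UI."""
    score = fraud_score(amount, recent_txns)
    return {"fraud_score": score, "flagged": score >= 0.8}
```

Wrapped in a Streamlit or Gradio interface, this is the kind of app the terminal session above would ship.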
02
🔗 Launching Soon

GitHub Native Deployment

Connect your GitHub repo and deploy instantly. Every git push triggers an automatic redeploy — your app is always live and always up to date. Branch deployments let your team test on dev before shipping to production. Every commit is linked to your model version and experiment results.

  • ✓ Connect repo → select branch → deploy in seconds ← Discord feedback
  • ✓ Push to GitHub → MLShip auto-redeploys instantly ← Discord feedback
  • ✓ main → live production · dev → private testing
  • ✓ Every commit linked to model version automatically
  • ✓ No manual redeployment ever again
github auto-deploy
$ git push origin main
──────────────────────────────
MLShip detected push...
Rebuilding app...
Redeploying to edge...
──────────────────────────────
✓ main: app.mlship.dev (production)
✓ dev:   dev.mlship.dev (testing)
03
🤖 Coming Next

AI Support Widget

You deploy your ML app and users flood in. Then the questions start — "Why did the model give this result?", "How do I upload my data?", "What does this score mean?" MLShip's AI widget answers ML-specific questions 24/7, automatically, trained on your app's own context.

  • ✓ Add in 5 lines — pip install mlsistant
  • ✓ AI trained on your app's specific context
  • ✓ Escalates to you when it can't answer
  • ✓ Dashboard shows what users ask most
support widget
import mlsistant
──────────────────────────────
mlsistant.init(
  app_key="your_key",
  context="cancer detection"
)
──────────────────────────────
✓ AI support active — 24/7
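The escalate-when-unsure behavior boils down to a confidence threshold. The sketch below uses naive word overlap as a stand-in for the real AI; the function name, threshold, and FAQ structure are all hypothetical, not the mlsistant API.

```python
# Hypothetical sketch of the widget's escalation rule:
# answer when the retrieved match is confident, escalate otherwise.
def answer_or_escalate(question: str, faq: dict, threshold: float = 0.5) -> str:
    """Naive word-overlap retrieval standing in for the real model."""
    q_words = set(question.lower().split())
    best_answer, best_score = None, 0.0
    for known_q, answer in faq.items():
        k_words = set(known_q.lower().split())
        score = len(q_words & k_words) / max(len(k_words), 1)
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= threshold:
        return best_answer
    return "Escalated to the app owner."
```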
04
📊 On the Horizon

Experiment Tracker + GitHub Commit Logging

You run 50 model experiments over 2 weeks. Which performed best? MLShip auto-logs every training run and links each GitHub commit to model metrics and deployment version automatically. Compare results side by side and deploy your best model in one click. Tying GitHub commits directly to experiment results and deployments is the heart of MLShip.

  • ✓ Auto-logs every training run
  • ✓ Every GitHub commit linked to model metrics
  • ✓ Visual side-by-side comparison
  • ✓ One-click deploy your best model
  • ✓ Simpler than MLflow, cheaper than W&B
experiment tracker
commit a3f9c2 ← best
──────────────────────────────
accuracy  0.97
loss      0.03
epochs    120
lr        0.001
──────────────────────────────
→ Deploy run #47? [y/n]
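Linking commits to metrics, as in the tracker panel above, amounts to a lookup keyed by commit hash. A minimal sketch — `a3f9c2` mirrors the panel above, while the second run is invented purely for comparison:

```python
# Runs keyed by GitHub commit hash; b7d1e0 is an invented second
# run so there is something to compare against.
runs = {
    "a3f9c2": {"accuracy": 0.97, "loss": 0.03, "epochs": 120},
    "b7d1e0": {"accuracy": 0.94, "loss": 0.06, "epochs": 80},
}

def best_commit(runs: dict) -> str:
    """Pick the commit whose run has the highest accuracy."""
    return max(runs, key=lambda c: runs[c]["accuracy"])
```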
// the roadmap

Built to grow with you.

Phase 01

One-Click Deployment

Upload your Streamlit or Gradio project. MLShip auto-reads requirements.txt, pyproject.toml, and environment.yml — installs everything and gives you a live URL with a clear connection endpoint in 60 seconds. Community templates let you deploy a starter app instantly. CPU/GPU selection built in. Clear error messages in plain English when anything fails.

💬 Discord → clear endpoints 📱 WhatsApp → CPU/GPU + templates 🏆 Kaggle → error messages
Launching soon
Phase 02

GitHub Native Deployment + Auto Redeploy

Connect your GitHub repo and deploy instantly. Push to main → production redeploys automatically. Push to dev → private testing environment. Every commit is linked to your model version and deployment history.

💬 Discord → "auto-redeploy on git push"
Launching soon
Phase 03

AI Support Widget

Add a smart support layer to your deployed app in 5 lines of Python. The AI answers user questions 24/7 — why did the model predict this, how do I upload data, what does this output mean — and escalates only when it can't handle it.

Coming next
Phase 04

Experiment Tracker + GitHub Commit Logging

Auto-log every model run. Every GitHub commit links to model metrics and deployment version automatically. Compare results side by side. Deploy your best model in one click. Simpler than MLflow, cheaper than W&B — built for indie builders.

On the horizon
Phase 05

API + Dashboard

Full REST API to manage all your deployments programmatically. Beautiful unified dashboard showing every app, every version, every metric in one place. Team collaboration features, usage analytics per app, and role-based access control.

Coming later
Phase 06

Monitoring + Alerting

Uptime alerts the moment your app goes down. Error rate tracking, response time monitoring, traffic analytics, and model performance metrics over time. Know before your users do when something breaks.

Coming later
Phase 07

Community Showcase

A public gallery of every ML app deployed on MLShip. Your model reaches the world. Other builders discover your work. Students showcase their projects. Researchers share demos. MLShip becomes the place ML apps live — and the place people go to find them.

🎨 LinkedIn → "project showcase creates community layer"
The growth engine

Stop fighting DevOps.
Start shipping models.

Join data scientists, ML engineers, and Python developers who are done wasting days on deployment and want to get back to building.

60s
deploy time
0
DevOps knowledge needed
7
phases of growth