prokube: A Kubernetes-based MLOps and LLMOps platform with Kubeflow at its heart and support for cutting-edge open source tools for developing and hosting ML and LLM applications.
From experiment tracking to model serving, from standard ML to multimodal foundation models – the MLOps stack is complex and maintenance-intensive.
prokube bundles, configures, and manages best-of-breed open source tools, so you don’t have to.

prokube integrates and configures best-of-breed open source tools in a Kubernetes-based MLOps/LLMOps platform.
We offer:
- Kubeflow as a core component with numerous improvements and sane defaults
- Integrated MLflow for experiment tracking and model registry
- Gen-AI support with tools for fine-tuning, serving, and prompt management
- Complete CI/CD integration with your existing repositories and registries
- Automated deployment with GitOps
- 100% open source technology stack that you can maintain independently if needed
Discover how prokube can transform your ML workflow!
Services

Deployment & Customization
We automate your deployment with GitOps workflows and infrastructure as code. Our setup handles complex requirements including air-gapped environments, custom networking, and specialized hardware.

Maintenance & Support
We manage your entire MLOps stack. From bug fixes and tutorials to version updates and performance tuning – we keep your platform running smoothly so you don’t have to.

MLOps & AI Consulting
We built prokube because we felt the pain of complex ML infrastructure ourselves. Having battle-tested various MLOps setups, we help you implement efficient workflows and avoid common pitfalls.
How it works
Unlock the Full Potential of Your ML Projects with prokube!
Our Product
prokube
A complete, integrated MLOps platform with dedicated features for Gen-AI and LLMs.
AI Workbench
Core MLOps Platform
- Integrated development environments (Jupyter, VS Code)
- Workflow Orchestration with Kubeflow Pipelines (see the pipeline sketch after this list)
- Multi-GPU/Multi-Node training with Training Operator
- Distributed data processing with Dask clusters
- Dynamic resource allocation and sharing
- MLflow for experiment tracking and model registry
- Preconfigured KServe for fast and scalable model serving
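
To make the orchestration piece concrete, here is a minimal sketch of a Kubeflow Pipelines v2 pipeline as it could be written on the platform; the component logic, base image, and file name are illustrative placeholders, not prokube defaults.

```python
# Minimal Kubeflow Pipelines v2 sketch: one component, one pipeline, compiled to YAML.
# The component body, base image, and output path are illustrative placeholders.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def train_model(learning_rate: float) -> float:
    """Placeholder training step that returns a dummy validation metric."""
    # Real training code (data loading, model fitting, MLflow logging) would go here.
    return 0.93


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)


if __name__ == "__main__":
    # The compiled YAML can be uploaded in the Kubeflow Pipelines UI or submitted via its SDK.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```
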
Gen-AI & LLM Capabilities
- Latest Kubeflow Trainer and Katib integration for efficient fine-tuning
- vLLM-powered model serving via KServe
- LiteLLM gateway for unified model access and cost optimization (see the sketch after this list)
- Langfuse for comprehensive tracing, monitoring, and performance analytics
- Kagent for MCP-ready AI agents
- Vector stores (Milvus) for semantic search
- DeepEval for systematic model evaluation
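
As an illustration of unified model access, the following sketch calls a vLLM-served model through a LiteLLM gateway. Because the gateway speaks the OpenAI API, the standard OpenAI Python client works against it; the gateway URL, API key, and model name below are placeholders.

```python
# Minimal sketch of calling a vLLM/KServe-served model through a LiteLLM gateway.
# The LiteLLM proxy exposes an OpenAI-compatible API, so the standard OpenAI client works.
# Gateway URL, API key, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://litellm.mlops.svc.cluster.local:4000/v1",  # hypothetical in-cluster gateway
    api_key="sk-your-litellm-key",                               # issued by the gateway, not OpenAI
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # routed by LiteLLM to the backing vLLM deployment
    messages=[{"role": "user", "content": "Summarize yesterday's failed pipeline runs."}],
)
print(response.choices[0].message.content)
```
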
Data Storage & Streaming
- Object Storage: MinIO for S3-compatible, high-performance object storage (see the sketch after this list)
- Block Storage: OpenEBS and Mayastor for Kubernetes Persistent Volumes
- Streaming: Kafka integration for real-time data flow and processing
- Databases: PostgreSQL for structured data, MongoDB for unstructured data, Qdrant for vector embeddings
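
For a sense of how the object store is used day to day, here is a minimal sketch that uploads a dataset to MinIO over its S3-compatible API; the endpoint, credentials, bucket, and object keys are placeholders that would normally be injected from Kubernetes secrets.

```python
# Minimal sketch of using MinIO's S3-compatible API from a notebook or pipeline step.
# Endpoint, credentials, bucket, and object keys are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.mlops.svc.cluster.local:9000",  # hypothetical in-cluster endpoint
    aws_access_key_id="minio-user",
    aws_secret_access_key="minio-password",
)

s3.upload_file("train.parquet", "datasets", "churn/train.parquet")
objects = s3.list_objects_v2(Bucket="datasets", Prefix="churn/")
print(objects.get("KeyCount", 0), "objects under churn/")
```
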
IAM & Enterprise Integration
- Identity Management: Keycloak for secure, scalable access control
- Authentication: Dex identity service with LDAP & Active Directory support
- Single Sign-On: Comprehensive SSO capabilities across the platform
- User Federation: User management with role-based access control
Infrastructure as Code
- Reproducible Setup: Ansible playbooks for consistent environment provisioning
- GitOps Automation: Argo CD for declarative, Git-based infrastructure management
- Custom Environments: Support for specialized hardware configurations including GPUs
Repositories, Registries, & CI/CD
- Version Control: Integration with your existing Git repositories (optional GitLab CE included)
- Container Registry: Support for your existing container registries
- Pipeline Integration: Seamless connections to existing CI/CD systems
- GitOps Workflows: Optional integration with your existing Argo CD setup for GitOps automation
Monitoring & Logging
- Visualization: Grafana dashboards for interactive data visualization
- Metrics: Prometheus for comprehensive system and application monitoring (see the sketch after this list)
- Logging: Loki for centralized log aggregation and analysis
- Alerting: Configurable notification system for critical events
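
For teams that want to consume metrics programmatically, here is a minimal sketch that queries the Prometheus HTTP API; the Prometheus URL and the PromQL expression are illustrative placeholders.

```python
# Minimal sketch of querying the platform's Prometheus HTTP API for a metric.
# The Prometheus URL and the PromQL expression are illustrative placeholders.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc.cluster.local:9090"  # hypothetical endpoint
promql = 'sum(rate(container_cpu_usage_seconds_total{namespace="kubeflow"}[5m]))'

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"][1])  # value is a [timestamp, value] pair
```
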
About us
Born from a research venture with Helmut Schmidt University, prokube stands today as an innovative spin-off of JUST ADD AI. Our platform solves the real-world ML deployment challenges we’ve experienced ourselves. We don’t just sell our platform; we use it daily at JUST ADD AI, ensuring each component is rigorously validated in practical applications.

Dr. Christian Geier
Christian holds a PhD in Physics; his background includes maintaining distributed computing clusters and contributing to open-source projects, ensuring prokube is engineered for performance and scalability.

Henrik Steude
Henrik, our ML expert, is completing his PhD in ML for cyber-physical systems. His data science career includes AI development for the International Space Station, where he gained extensive experience with Kubeflow.

Martin Creutzenberg
Martin Creutzenberg is our Kubernetes and software engineering expert. His deep dive into MLOps tools and architectures ensures that prokube is not just cutting-edge but also user-friendly, allowing seamless integration into any production environment.