coturn TURN and STUN server
Stack file with TURN and STUN server
Comparing Yas and Maish for corporate internet
Choosing the right datacenter server for AI workloads requires balancing performance, GPU compatibility, total cost of ownership, and vendor lifecycle management. This guide compares enterprise servers from the three leading vendors—Lenovo, Dell, and HPE—with a focus on AI/ML capabilities.
Scaling PostgreSQL to 1 million concurrent clients is impossible with direct connections alone, but with the right architecture, connection pooling, and load balancing, it can be achieved safely.
This article explains a practical setup using:
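The core pooling idea can be sketched in a few lines: a small, fixed set of real database connections is shared among a much larger number of clients, each of which borrows a connection only for the duration of a query. This is a minimal stand-alone illustration of the principle, not the actual setup the article describes (tools like PgBouncer implement it at production scale):

```python
import queue
import threading


class ConnectionPool:
    """Minimal pool: a few real connections serve many more clients."""

    def __init__(self, size, connect):
        # Pre-open a fixed number of connections and park them in a queue.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def run(self, fn):
        conn = self._pool.get()      # blocks until a connection is free
        try:
            return fn(conn)          # client borrows the connection briefly
        finally:
            self._pool.put(conn)     # hand it back for the next client


# Simulate 100 concurrent clients multiplexed over 5 connections.
pool = ConnectionPool(5, connect=object)
results = []
lock = threading.Lock()

def client():
    r = pool.run(lambda conn: 1)     # placeholder for a real query
    with lock:
        results.append(r)

threads = [threading.Thread(target=client) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key property is that client count and connection count are decoupled: clients queue for a free connection instead of each holding one open, which is what keeps PostgreSQL's `max_connections` bounded.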
Running production LLM inference requires careful VRAM planning. Total VRAM consists of the model weights, the KV cache, which grows with concurrent users and context length, and system overhead.
This guide explains our GitOps-based deployment workflow using Docker Swarm, Portainer, and GitHub Actions. The workflow automatically builds, pushes, and deploys applications across multiple environments (dev, prod, infra, docs, obsv) using infrastructure as code principles.
We recently took on the challenge of taking EVE (our flagship chatbot product) offline, i.e. making it work over plain SMS chats. This post takes you through our research journey.
We've recently upgraded our CI/CD pipeline to improve build times, deployment times, and reliability.