our tech & tools
here's the toolkit that powers our ai-driven workflows—from data ingestion and realtime streams to seamless llm ops—complete with links to get you started in seconds.
primary technologies
- python (python.org) serves as our backbone for scripting, data processing, and glue code across services
- apache kafka (kafka.apache.org) powers our high-throughput event streams and decoupled pipelines
- postgresql (postgresql.org/docs) provides reliable relational storage and time-series snapshots for analytics
- supabase (supabase.com/docs) offers instant postgres APIs and real‑time subscriptions for our frontend
- next.js (nextjs.org) delivers our react‑based ui with server‑side rendering and static exports
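to give a flavor of the python glue code in that pipeline, here's a minimal sketch of an event payload on its way into a kafka topic (the ClickEvent schema and the "clicks" topic are hypothetical, and the actual producer call is only indicated in a comment):

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class ClickEvent:
    # hypothetical event schema; real payloads vary by pipeline
    user_id: str
    page: str
    ts: str

def serialize(event: ClickEvent) -> bytes:
    # kafka messages are raw bytes; json keeps them language-agnostic
    return json.dumps(asdict(event)).encode("utf-8")

event = ClickEvent(user_id="u-42", page="/pricing",
                   ts=datetime.now(timezone.utc).isoformat())
payload = serialize(event)
# a real producer would now do something like producer.send("clicks", payload)
```

keeping events as plain json keeps every downstream consumer (postgres sink, analytics job, supabase) free to deserialize them in its own language.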
ai toolkit
- langchain (langchain.com) orchestrates llm calls into pipelines, agents, and chains
- openai (openai.com) supplies the core llms behind chat, analysis, and embeddings
- gemini (gemini.google.com) adds google's multimodal ai, covering everything from chat to vision tasks
- langfuse (langfuse.com) delivers llm observability, prompt versioning, and evals in one platform
- litellm (see the open-source llmops stack below) acts as our unified llm proxy with model routing and failover
- langgraph (from the langchain team) powers our stateful agent workflows and agent deployment at scale
- the oss llmops stack (oss-llmops-stack.com) stitches these pieces together into a cohesive, self‑hosted setup
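as a concept sketch of what a gateway like litellm does for us, here's the routing-with-failover idea in plain python; the backend names and helper functions are our own illustration, not litellm's api:

```python
from typing import Callable

def complete_with_failover(prompt: str,
                           backends: list[tuple[str, Callable[[str], str]]]) -> str:
    """try each backend in priority order, falling through on errors."""
    last_error: Exception | None = None
    for name, call in backends:
        try:
            return call(prompt)
        except Exception as exc:  # a real gateway filters by error type
            last_error = exc
    raise RuntimeError("all backends failed") from last_error

# stand-in backends: the first always times out, the second answers
def flaky(prompt: str) -> str:
    raise TimeoutError("upstream down")

def stable(prompt: str) -> str:
    return f"echo: {prompt}"

result = complete_with_failover("hi", [("primary", flaky), ("fallback", stable)])
# falls through to the fallback, so result == "echo: hi"
```

the point of putting this behind one gateway is that application code only ever sees a single completion interface, while routing, retries, and failover live in one place.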
open-source llmops stack
for end-to-end llm operations, we rely on the open source llmops stack: a ready-made combo of litellm (llm gateway) and langfuse (observability + prompt management), both backed by active oss communities
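to illustrate the observability half, here's a toy python trace record of the kind langfuse captures per llm call; the Trace fields and the traced_call helper are our own simplification, not the langfuse sdk:

```python
import time
from dataclasses import dataclass

@dataclass
class Trace:
    # toy record; field names are our simplification, not the langfuse schema
    name: str
    input: str
    output: str = ""
    latency_ms: float = 0.0

def traced_call(name: str, fn, prompt: str) -> Trace:
    # wrap any llm call and record its input, output, and latency
    start = time.perf_counter()
    out = fn(prompt)
    return Trace(name=name, input=prompt, output=out,
                 latency_ms=(time.perf_counter() - start) * 1000)

trace = traced_call("greet", lambda p: p.upper(), "hello")
```

capturing input, output, and latency per call is what makes prompt versioning and evals possible later: you can replay and score real traffic instead of guessing.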
ready to dive in? follow the guides on oss-llmops-stack.com to deploy in minutes and take full control of your llm pipelines 🚀
want to collaborate or have a question? contact me.