# Introduction
Overview of the LegalEase cloud-native architecture and capabilities.
LegalEase is a cloud-native workspace for legal teams to organize case material, process documents, transcribe audio/video, and run AI-powered search. Built on Firebase and Google Cloud, it provides scalable document management with intelligent AI features powered by frontier models.

## What Ships Today
- Case-centric document intake - Upload files into cases, track processing status, and browse results in the built-in PDF viewer with bounding box highlights.
- AI transcription - Ingest audio/video and receive full transcripts with speaker diarization, timestamps, and inferred speaker names using Gemini 2.5 Flash or Google Speech-to-Text (Chirp 3).
- Intelligent summarization - Generate executive summaries, key moments, action items, and entity extraction using Gemini.
- Hybrid search - Combine semantic and keyword search across documents and transcripts using Qdrant vector database.
- Real-time collaboration - Firebase-powered real-time updates across all connected clients.
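Hybrid search merges a vector-similarity ranking with a keyword ranking into one result list. A minimal sketch of reciprocal rank fusion, a common way to combine the two; the function name and the `k` constant here are illustrative, not taken from the LegalEase codebase:

```typescript
// Reciprocal rank fusion (RRF): merge two ranked result lists into one.
// `semantic` and `keyword` are document IDs ordered best-first.
// k dampens the influence of top ranks (60 is a common default).
function fuseRankings(semantic: string[], keyword: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [semantic, keyword]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  // Sort by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// Example: "doc-2" ranks high in both lists, so it comes out on top.
const fused = fuseRankings(
  ["doc-1", "doc-2", "doc-3"], // semantic ranking
  ["doc-2", "doc-4", "doc-1"], // keyword ranking
);
console.log(fused[0]); // "doc-2"
```

In production the two input rankings would come from a Qdrant vector query and a keyword index respectively; RRF has the nice property of needing no score normalization between the two.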
## Architecture at a Glance
| Layer | Purpose |
|---|---|
| Nuxt 4 Dashboard | Frontend experience, search UI, transcript viewer |
| Firebase Cloud Functions | Serverless backend with Genkit AI flows |
| Cloud Firestore | Real-time document database |
| Firebase Storage | File storage for uploads |
| Firebase Auth | User authentication (Google, email/password) |
| Qdrant Cloud | Vector search for document chunks and transcript segments |
| Gemini 2.5 Flash | Transcription, summarization, embeddings |
| Google Speech-to-Text | Optional Chirp 3 provider for production transcription |
## Design Principles

### Cloud-Native, Not Cloud-Locked
While LegalEase currently runs on Firebase and Google Cloud, the architecture is designed with portability in mind:
- Provider abstraction - AI providers (Gemini, Chirp) are pluggable; adding OpenAI, Anthropic, or local models requires minimal code changes
- Kubernetes roadmap - Helm charts for self-hosted Kubernetes deployments are planned
- AWS alternatives - Future support for S3, DynamoDB, and Lambda equivalents
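The provider abstraction above can be sketched as a small interface plus a registry. All names here (`TranscriptionProvider`, `getProvider`, the stub implementation) are hypothetical, shown only to illustrate the pattern:

```typescript
// Hypothetical provider abstraction: each backend implements one interface,
// so swapping Gemini for Chirp (or a future OpenAI/local model) is a
// registry entry, not a rewrite.
interface TranscriptionProvider {
  readonly name: string;
  transcribe(audioUrl: string): Promise<{ text: string; speakers: string[] }>;
}

const providers = new Map<string, TranscriptionProvider>();

function registerProvider(p: TranscriptionProvider): void {
  providers.set(p.name, p);
}

function getProvider(name: string): TranscriptionProvider {
  const p = providers.get(name);
  if (!p) throw new Error(`Unknown transcription provider: ${name}`);
  return p;
}

// A stub standing in for a real Gemini-backed implementation.
registerProvider({
  name: "gemini",
  async transcribe(audioUrl) {
    return { text: `transcript of ${audioUrl}`, speakers: ["Speaker 1"] };
  },
});
```

Callers select a provider by a config string (`"gemini"`, `"chirp"`), so adding a new backend only touches the registration site.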
### Frontier Model Quality
We prioritize AI quality over infrastructure complexity:
- Gemini 2.5 Flash provides state-of-the-art transcription with speaker name inference
- Structured output schemas ensure reliable, parseable responses
- Multi-modal capabilities ready for future document analysis features
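To show what a structured output schema buys you, here is a sketch of a summary shape plus a guard that validates a model reply before it reaches the database. The type and field names are hypothetical, not LegalEase's actual schema:

```typescript
// Hypothetical shape for a structured summarization response. Prompting the
// model against a schema and validating the reply before storing it keeps
// downstream code free of "almost JSON" surprises.
interface CaseSummary {
  executiveSummary: string;
  keyMoments: { timestamp: string; description: string }[];
  actionItems: string[];
}

// Validate a raw model reply; return null rather than propagating bad data.
function parseSummary(raw: string): CaseSummary | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null;
  }
  const d = data as Partial<CaseSummary>;
  if (
    typeof d.executiveSummary !== "string" ||
    !Array.isArray(d.keyMoments) ||
    !Array.isArray(d.actionItems)
  ) {
    return null;
  }
  return d as CaseSummary;
}
```

In Genkit flows one would typically declare such a schema declaratively (e.g. with Zod) and let the framework enforce it; this hand-rolled guard just illustrates the idea.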
### Simple Local Development
Get started with minimal setup:
- Firebase emulators handle Auth, Firestore, Storage, and Functions locally
- Only requires a Gemini API key for AI features
- Single command (`mise run dev:local`) starts the full stack
## Status of Features
| Feature | Status |
|---|---|
| Case management | Production ready |
| Document upload & processing | Production ready |
| Transcription (Gemini) | Production ready |
| Transcription (Chirp 3) | Production ready |
| Summarization | Production ready |
| Vector search (Qdrant) | Production ready |
| Waveform audio player | In progress |
| Export (DOCX, SRT, VTT) | In progress |
| Multi-agent workflows | Planned |
Next up: follow the Installation Guide to set up your development environment.