Introduction

Overview of the LegalEase cloud-native architecture and capabilities.

LegalEase is a cloud-native workspace for legal teams to organize case material, process documents, transcribe audio/video, and run AI-powered search. Built on Firebase and Google Cloud, it provides scalable document management with intelligent AI features powered by frontier models.

What Ships Today

  • Case-centric document intake - Upload files into cases, track processing status, and browse results in the built-in PDF viewer with bounding box highlights.
  • AI transcription - Ingest audio/video and receive full transcripts with speaker diarization, timestamps, and inferred speaker names using Gemini 2.5 Flash or Google Speech-to-Text (Chirp 3).
  • Intelligent summarization - Generate executive summaries, key moments, action items, and entity extraction using Gemini.
  • Hybrid search - Combine semantic and keyword search across documents and transcripts using the Qdrant vector database (see the sketch after this list).
  • Real-time collaboration - Firebase-powered real-time updates across all connected clients.
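
As a rough illustration of the hybrid search item above, the sketch below pairs a Qdrant vector query with a full-text keyword filter. It is not the actual LegalEase code: the collection name (document_chunks), payload fields (caseId, text), and environment variables are assumptions, and the query embedding is expected to come from Gemini embeddings upstream.

```ts
// Hypothetical hybrid search: semantic ranking via a vector query, keyword
// narrowing via a full-text payload filter. Names are illustrative only.
import { QdrantClient } from "@qdrant/js-client-rest";

const qdrant = new QdrantClient({
  url: process.env.QDRANT_URL ?? "http://localhost:6333",
  apiKey: process.env.QDRANT_API_KEY,
});

export async function hybridSearch(
  queryVector: number[], // embedding of the user's query (e.g. from Gemini embeddings)
  keyword: string,
  caseId: string,
) {
  return qdrant.search("document_chunks", {
    vector: queryVector,
    limit: 10,
    // Keyword side of the hybrid query: only consider chunks from the current
    // case whose indexed text contains the keyword (requires a full-text index
    // on the "text" payload field).
    filter: {
      must: [
        { key: "caseId", match: { value: caseId } },
        { key: "text", match: { text: keyword } },
      ],
    },
    with_payload: true,
  });
}
```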

Architecture at a Glance

Layer                    | Purpose
Nuxt 4 Dashboard         | Frontend experience, search UI, transcript viewer
Firebase Cloud Functions | Serverless backend with Genkit AI flows
Cloud Firestore          | Real-time document database
Firebase Storage         | File storage for uploads
Firebase Auth            | User authentication (Google, email/password)
Qdrant Cloud             | Vector search for document chunks and transcript segments
Gemini 2.5 Flash         | Transcription, summarization, embeddings
Google Speech-to-Text    | Optional Chirp 3 provider for production transcription
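
To make the table concrete, here is a minimal sketch (not the actual LegalEase code) of how the layers connect: a Cloud Function fires when a file lands in Firebase Storage, records processing status in Firestore, and the Nuxt dashboard picks up the change through Firestore's real-time updates. The collection and field names are assumptions.

```ts
// Storage upload -> Cloud Function -> Firestore status document.
import { onObjectFinalized } from "firebase-functions/v2/storage";
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

export const onDocumentUploaded = onObjectFinalized(async (event) => {
  const filePath = event.data.name; // e.g. "cases/<caseId>/uploads/<file>"
  if (!filePath) return;

  // Track processing state; connected dashboards observe this document in real time.
  await db.collection("documents").doc(encodeURIComponent(filePath)).set({
    storagePath: filePath,
    contentType: event.data.contentType ?? null,
    status: "processing",
    uploadedAt: new Date(),
  });

  // ...hand off to a Genkit flow for transcription or summarization here.
});
```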

Design Principles

Cloud-Native, Not Cloud-Locked

While LegalEase currently runs on Firebase and Google Cloud, the architecture is designed with portability in mind:

  • Provider abstraction - AI providers (Gemini, Chirp) are pluggable; adding OpenAI, Anthropic, or local models requires minimal code changes (see the interface sketch after this list)
  • Kubernetes roadmap - Helm charts for self-hosted Kubernetes deployments are planned
  • AWS alternatives - Future support for S3, DynamoDB, and Lambda equivalents
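
The provider abstraction can be pictured as a small interface that each backend implements; the names below (TranscriptionProvider, GeminiTranscriptionProvider) are hypothetical and only illustrate the shape of the boundary, not the actual LegalEase API.

```ts
// Illustrative provider boundary: every transcription backend exposes the
// same method, so swapping providers is a change at the factory, not the callers.
export interface TranscriptSegment {
  speaker: string;
  startMs: number;
  endMs: number;
  text: string;
}

export interface TranscriptionProvider {
  transcribe(audioUri: string): Promise<TranscriptSegment[]>;
}

export class GeminiTranscriptionProvider implements TranscriptionProvider {
  async transcribe(audioUri: string): Promise<TranscriptSegment[]> {
    // Would call Gemini 2.5 Flash with a structured-output schema here.
    throw new Error("not implemented in this sketch");
  }
}

export function getTranscriptionProvider(name: "gemini" | "chirp"): TranscriptionProvider {
  switch (name) {
    case "gemini":
      return new GeminiTranscriptionProvider();
    case "chirp":
      // A ChirpTranscriptionProvider would wrap Google Speech-to-Text the same way.
      throw new Error("chirp provider omitted in this sketch");
  }
}
```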

Frontier Model Quality

We prioritize AI quality over infrastructure complexity:

  • Gemini 2.5 Flash provides state-of-the-art transcription with speaker name inference
  • Structured output schemas ensure reliable, parseable responses (see the flow sketch after this list)
  • Multi-modal capabilities ready for future document analysis features
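
A minimal sketch of a structured-output summarization flow, assuming Genkit with the @genkit-ai/googleai plugin; the flow name and schema fields are illustrative (they mirror the summary features listed under "What Ships Today"), not the production definitions.

```ts
import { genkit, z } from "genkit";
import { googleAI } from "@genkit-ai/googleai";

const ai = genkit({ plugins: [googleAI()] });

// The schema constrains Gemini to a parseable shape instead of free-form text.
const SummarySchema = z.object({
  executiveSummary: z.string(),
  keyMoments: z.array(z.string()),
  actionItems: z.array(z.string()),
  entities: z.array(z.object({ name: z.string(), type: z.string() })),
});

export const summarizeTranscript = ai.defineFlow(
  { name: "summarizeTranscript", inputSchema: z.string(), outputSchema: SummarySchema },
  async (transcript) => {
    const { output } = await ai.generate({
      model: googleAI.model("gemini-2.5-flash"),
      prompt: `Summarize this legal transcript:\n\n${transcript}`,
      output: { schema: SummarySchema },
    });
    return output!; // null only if the model failed to satisfy the schema
  },
);
```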

Simple Local Development

Get started with minimal setup:

  • Firebase emulators handle Auth, Firestore, Storage, and Functions locally (see the connection sketch after this list)
  • Only requires a Gemini API key for AI features
  • Single command (mise run dev:local) starts the full stack
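
For reference, this is roughly what pointing the Firebase web SDK at the local emulators looks like, using the emulators' default ports; the project ID is a placeholder and the actual dev task may wire this up differently.

```ts
import { initializeApp } from "firebase/app";
import { getAuth, connectAuthEmulator } from "firebase/auth";
import { getFirestore, connectFirestoreEmulator } from "firebase/firestore";
import { getStorage, connectStorageEmulator } from "firebase/storage";
import { getFunctions, connectFunctionsEmulator } from "firebase/functions";

// Placeholder config: the Auth emulator accepts any non-empty API key.
const app = initializeApp({ projectId: "demo-legalease", apiKey: "fake-api-key" });

const useEmulators = process.env.NODE_ENV !== "production";
if (useEmulators) {
  connectAuthEmulator(getAuth(app), "http://127.0.0.1:9099");
  connectFirestoreEmulator(getFirestore(app), "127.0.0.1", 8080);
  connectStorageEmulator(getStorage(app), "127.0.0.1", 9199);
  connectFunctionsEmulator(getFunctions(app), "127.0.0.1", 5001);
}
```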

Status of Features

Feature                      | Status
Case management              | Production ready
Document upload & processing | Production ready
Transcription (Gemini)       | Production ready
Transcription (Chirp 3)      | Production ready
Summarization                | Production ready
Vector search (Qdrant)       | Production ready
Waveform audio player        | In progress
Export (DOCX, SRT, VTT)      | In progress
Multi-agent workflows        | Planned

Next up: follow the Installation Guide to set up your development environment.
