Flashgpt

Generates flashcards for faster learning.

Enhancing learning outcomes

Flashcard Generator Powered by GPT

Overview

Generates study flashcards from arbitrary text input using an LLM backend, with RAG to ground cards in user-supplied material.

Technical Challenges

RAG Pipeline

User documents are chunked with a sliding window (with overlap to preserve sentence context), embedded via OpenAI embeddings, and stored in a vector index. At generation time, the top-k chunks are retrieved and injected into the prompt as grounding context, reducing hallucination rates for domain-specific content.
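The sliding-window chunking step can be sketched as follows. This is a minimal illustration, not the project's actual code; the window and overlap sizes are assumed values, and real token-based chunkers would measure in tokens rather than characters.

```typescript
// Sliding-window chunker sketch: fixed-size windows with overlap so that
// sentences straddling a chunk boundary appear intact in at least one chunk.
// windowSize/overlap defaults are illustrative assumptions.
function chunkText(text: string, windowSize = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = windowSize - overlap; // advance less than a full window
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + windowSize));
    if (start + windowSize >= text.length) break; // final window reached
  }
  return chunks;
}
```

Each chunk would then be embedded (e.g. via the OpenAI embeddings API) and stored in the vector index; at query time the top-k nearest chunks are concatenated into the prompt.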

Structured Output & Prompt Engineering

GPT doesn't reliably return well-formed flashcard arrays without explicit guidance. The prompt enforces a strict JSON schema (question/answer pairs with optional hints), and responses are validated with Zod before being persisted; malformed responses trigger a retry with an error-correction prompt.
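The validate-then-retry step can be sketched like this. The project uses Zod; to keep the sketch dependency-free, the equivalent check is written as a plain type guard enforcing the same shape (question/answer strings, optional hint). Names are illustrative.

```typescript
// Shape the prompt's JSON schema enforces.
interface Flashcard {
  question: string;
  answer: string;
  hint?: string;
}

// Hand-rolled guard equivalent to a Zod schema + safeParse (sketch).
// Returns null on any malformed response so the caller can retry
// with an error-correction prompt instead of persisting bad data.
function parseFlashcards(raw: string): Flashcard[] | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not even valid JSON
  }
  if (!Array.isArray(data)) return null;
  const valid = data.every(
    (c) =>
      typeof c === "object" && c !== null &&
      typeof (c as any).question === "string" &&
      typeof (c as any).answer === "string" &&
      ((c as any).hint === undefined || typeof (c as any).hint === "string"),
  );
  return valid ? (data as Flashcard[]) : null;
}
```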

Authentication & Data Isolation

Each user's card decks are scoped by session, with server-side auth checks on every API route to prevent cross-user data access.
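The per-route ownership check reduces to a small guard like the one below. This is a hypothetical sketch; the actual session and deck types are assumptions.

```typescript
// Hypothetical deck record; ownerId is set at creation time.
interface Deck {
  id: string;
  ownerId: string;
}

// Server-side guard run on every API route before returning deck data:
// no session means no access, and the deck must belong to the caller.
function canAccessDeck(sessionUserId: string | null, deck: Deck): boolean {
  return sessionUserId !== null && deck.ownerId === sessionUserId;
}
```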

Latency

Generation for large documents can exceed typical request timeouts. Addressed by streaming the LLM response and flushing cards to the client incrementally as each question/answer pair is parsed.
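The incremental flush depends on detecting each complete question/answer object as it arrives in the stream. A minimal sketch of that parsing step, under the assumption that the model streams a JSON array of card objects: accumulate tokens in a buffer, split off every top-level `{…}` whose closing brace has arrived, and keep the unfinished tail for the next read.

```typescript
// Split an accumulated stream buffer into complete top-level JSON objects
// plus the unfinished remainder. Sketch of the incremental-flush idea,
// not the project's actual parser. Tracks string/escape state so that
// braces inside string values don't confuse the depth counter.
function splitCompleteObjects(buffer: string): { objects: string[]; rest: string } {
  const objects: string[] = [];
  let depth = 0, start = -1, inString = false, escaped = false, consumed = 0;
  for (let i = 0; i < buffer.length; i++) {
    const ch = buffer[i];
    if (escaped) { escaped = false; continue; }
    if (inString) {
      if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') { inString = true; continue; }
    if (ch === "{") { if (depth === 0) start = i; depth++; }
    else if (ch === "}") {
      depth--;
      if (depth === 0 && start >= 0) {
        objects.push(buffer.slice(start, i + 1)); // one complete card
        consumed = i + 1;
        start = -1;
      }
    }
  }
  return { objects, rest: buffer.slice(consumed) };
}
```

Each returned object can be JSON-parsed, validated, and flushed to the client immediately, so the first cards render long before the full generation finishes.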

System Architecture

Interactive React Flow diagram (with mini map) grouping the system into: Frontend / Client, Backend Services, External APIs / AI, Databases / Storage, and Infrastructure.

RAG (Retrieval-Augmented Generation) pipeline. Documents are chunked, embedded into a vector store, and the most relevant chunks are retrieved at query time to ground the LLM's flashcard generation — reducing hallucination.

Made with

LLM · Next.js · Tailwind CSS · Prompt Engineering · Authentication · RAG