Penpal

Penpal: The Write Spot - Brand System & Hybrid Platform Architecture

Introduction

This guide documents the complete brand system and technical architecture for Penpal: The Write Spot, a welcoming yet professional AI writing companion. It pairs actionable branding guidance with production-ready implementation patterns spanning the Python MCP (Model Context Protocol) server, the Next.js 15 web application, and the Tauri 2.0 desktop distribution. Every recommendation follows 2025 standards, official documentation, and proven production practices.


Brand Identity Guidelines

Brand Promise & Personality

Voice & Tone Principles

Microcopy Patterns

Scenario | Voice Example | Notes
--- | --- | ---
Welcome modal | “Welcome back, Alex. Ready to pick up where we left off?” | Personalize when context is available.
AI suggestion | “Want a version with more suspense?” | Offer choices, avoid commands.
Error state | “Something snagged on our end. Let’s try that again in a few seconds.” | Apologize succinctly, give next step.
Success state | “Saved! I’ll keep everything safe while you keep the ideas flowing.” | Reassure permanence, reinforce partnership.
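For implementation, these strings can live in one shared, typed module so the web and desktop builds stay on-voice together. A minimal sketch in TypeScript; the module shape, key names, and personalization helper are illustrative, while the strings come from the table above:

// Sketch: centralize brand microcopy so every surface uses the same voice.
// Key names and the personalization helper are illustrative, not part of the brand spec.
export const microcopy = {
  welcome: (name: string) => `Welcome back, ${name}. Ready to pick up where we left off?`,
  aiSuggestion: 'Want a version with more suspense?',
  error: "Something snagged on our end. Let's try that again in a few seconds.",
  success: "Saved! I'll keep everything safe while you keep the ideas flowing.",
} as const;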

Visual Identity System

Color Palette

Token | Hex | Role | Usage Guidance | Accessible Pairings
--- | --- | --- | --- | ---
--color-ink-blue | #2563EB | Primary actions, interactive elements | Base buttons, focused states, links | On --color-soft-cream or white (AA large)
--color-warm-coral | #F97316 | Accent, highlights, creative cues | Progress indicators, empty states, badges | On --color-soft-cream; for text, pair with --color-ink-blue
--color-soft-cream | #FFFBF5 | Background, paper texture foundation | Page canvas, cards, modals | Use --color-ink-blue or --color-deep-plum for text
--color-deep-plum | #7C3AED | Premium features, pro-tier UI | Pricing badges, CTA upgrades, premium states | On --color-soft-cream or white (AA normal)
--color-ink-charcoal | #1F2933 | Primary text | Body copy, headers | On --color-soft-cream or white
--color-slate-mist | #CBD5F5 | Neutrals for borders/dividers | Panels, secondary buttons, table lines | Ensure overlay text meets AA

Color Tokens (CSS Custom Properties)

:root {
  --color-ink-blue: #2563EB;
  --color-warm-coral: #F97316;
  --color-soft-cream: #FFFBF5;
  --color-deep-plum: #7C3AED;
  --color-ink-charcoal: #1F2933;
  --color-slate-mist: #CBD5F5;
  --color-success: #22C55E;
  --color-error: #EF4444;
}

[data-theme="dark"] {
  --color-soft-cream: #0F172A;
  --color-ink-charcoal: #F8FAFC;
  --color-slate-mist: #1E293B;
}

Typography

Typographic Scale

Token | Usage | Desktop Size | Mobile Size | Font Family | Notes
--- | --- | --- | --- | --- | ---
--font-display | Hero headline | 48px / 56px leading | 36px / 44px | Merriweather Bold | Keep width <= 18ch
--font-h1 | Section headings | 36px / 44px | 28px / 36px | Inter SemiBold | Use sentence case
--font-h2 | Subheadings | 28px / 36px | 24px / 32px | Inter Medium | 
--font-body | Body copy | 18px / 28px | 17px / 26px | Merriweather Regular | Default reading size
--font-ui | Buttons, inputs | 16px / 24px | 16px / 24px | Inter Medium | Title case
--font-sm | Microcopy | 14px / 20px | 14px / 20px | Inter Regular | Maintain 4.5:1 contrast

Logo System

Imagery & Illustration Direction

UI Design Principles

Component Styling Cheatsheet

Motion & Interaction

Accessibility Standards

Sound & Haptics (Optional Enhancements)

Implementation References


1. MCP Server Architecture (Python - 2025 Standards)

Overview

Model Context Protocol (MCP), introduced by Anthropic, is an open standard for connecting AI applications to external tools and data sources. As of 2025, the Python SDK is production-ready at version 1.15.0+.

Official Resources

# Using UV package manager (10-100x faster than pip)
uv add "mcp[cli]"

FastMCP: Modern Implementation Pattern

FastMCP provides a decorator-based, Pythonic API with full async support:

from mcp.server.fastmcp import FastMCP, Context
from pydantic import BaseModel, Field
import httpx

# Create server instance
mcp = FastMCP("Content Writing Platform")

# Structured output with Pydantic
class ContentSuggestion(BaseModel):
    title: str = Field(description="Suggested title")
    outline: list[str] = Field(description="Content outline")
    word_count: int

# Async tool with structured output
@mcp.tool()
async def generate_outline(topic: str, content_type: str, ctx: Context) -> ContentSuggestion:
    """Generate content outline using LLM"""
    await ctx.info(f"Generating outline for: {topic}")
    
    # Call LLM via your multi-provider client
    result = await call_llm_api(topic, content_type)
    
    await ctx.report_progress(1.0, 1.0)
    return ContentSuggestion(
        title=result['title'],
        outline=result['outline'],
        word_count=result['estimated_words']
    )

# Database integration with lifespan
from contextlib import asynccontextmanager
from dataclasses import dataclass
import asyncpg

@dataclass
class AppContext:
    db_pool: asyncpg.Pool

@asynccontextmanager
async def app_lifespan(server: FastMCP):
    db_pool = await asyncpg.create_pool(
        host="localhost",
        database="content_db",
        user="user",
        password="password"
    )
    try:
        yield AppContext(db_pool=db_pool)
    finally:
        await db_pool.close()

mcp = FastMCP("Content Server", lifespan=app_lifespan)

@mcp.tool()
async def save_document(title: str, content: str, ctx: Context) -> dict:
    """Save document to database"""
    pool = ctx.request_context.lifespan_context.db_pool
    
    async with pool.acquire() as conn:
        doc_id = await conn.fetchval(
            "INSERT INTO documents (title, content) VALUES ($1, $2) RETURNING id",
            title, content
        )
    
    await ctx.info(f"Saved document ID: {doc_id}")
    return {"id": doc_id, "success": True}

if __name__ == "__main__":
    # For local development with Claude Desktop
    mcp.run(transport="stdio")
    
    # For production with HTTP access
    # mcp.run(transport="streamable-http", port=8000)

Transport Mechanisms

Transport | Use Case | Status (2025)
--- | --- | ---
stdio | Claude Desktop, local development | Standard
Streamable HTTP | Production, web services | Standard (recommended)
SSE | Legacy systems | Deprecated

Production HTTP Server Setup

from mcp.server.fastmcp import FastMCP

# Stateless HTTP server for production
mcp = FastMCP("Content API", stateless_http=True)

@mcp.tool()
async def export_document(doc_id: str, format: str) -> dict:
    """Export document to PDF, DOCX, etc."""
    # Implementation using export libraries (run_export_pipeline is a placeholder for your own helper)
    result_url = await run_export_pipeline(doc_id, format)
    return {"download_url": result_url}

if __name__ == "__main__":
    mcp.run(transport="streamable-http", port=8000)
    # Server runs on http://localhost:8000/mcp

Key Best Practices

  1. Use FastMCP for high-level API (production-ready)
  2. Structured Output with Pydantic models for type safety
  3. Async/Await for all I/O operations
  4. Context API for logging, progress, user interaction
  5. Lifespan Management for database connections
  6. Streamable HTTP for production deployments

2. Next.js 15 + Tauri Integration

Next.js 15 Key Features

Tauri 2.0 Advantages

Integration Setup

Step 1: Create Next.js Project

npx create-next-app@latest content-platform --typescript --app --tailwind
cd content-platform

Step 2: Configure for Static Export (Required for Tauri)

// next.config.mjs
const isProd = process.env.NODE_ENV === 'production';
const internalHost = process.env.TAURI_DEV_HOST || 'localhost';

/** @type {import('next').NextConfig} */
const nextConfig = {
  // REQUIRED: Use SSG instead of SSR
  output: 'export',
  
  // REQUIRED: Disable image optimization for static export
  images: {
    unoptimized: true,
  },
  
  // Configure asset prefix for dev
  assetPrefix: isProd ? undefined : `http://${internalHost}:3000`,
};

export default nextConfig;

Step 3: Initialize Tauri

npm install @tauri-apps/cli --save-dev
npm install @tauri-apps/api --save
npm run tauri init

During initialization, point the frontend dist at Next.js's static output (typically ../out relative to src-tauri), set the dev URL to http://localhost:3000, and use npm run dev / npm run build as the before-dev and before-build commands.

Step 4: Update package.json

{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "tauri": "tauri",
    "tauri:dev": "tauri dev",
    "tauri:build": "tauri build"
  }
}

Communication: Tauri ↔ Next.js

Tauri Commands (Rust Backend)

// src-tauri/src/main.rs
#[tauri::command]
async fn process_document(title: String, content: String) -> Result<String, String> {
    // Call Python MCP server or process directly
    match save_to_database(title, content).await {
        Ok(id) => Ok(format!("Document saved: {}", id)),
        Err(e) => Err(format!("Error: {}", e))
    }
}

#[tauri::command]
async fn export_to_pdf(doc_id: String) -> Result<Vec<u8>, String> {
    // Generate PDF and return bytes
    Ok(pdf_bytes)
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![
            process_document,
            export_to_pdf
        ])
        .run(tauri::generate_context!())
        .expect("error while running application");
}

Next.js Frontend (Client Component)

'use client'
import { invoke } from '@tauri-apps/api/core';
import { useEffect, useState } from 'react';

export default function DocumentEditor() {
  const [content, setContent] = useState('');
  
  const saveDocument = async () => {
    try {
      const result = await invoke<string>('process_document', {
        title: 'My Document',
        content: content
      });
      console.log(result);
    } catch (error) {
      console.error('Failed to save:', error);
    }
  };
  
  const exportPDF = async () => {
    const pdfBytes = await invoke<number[]>('export_to_pdf', {
      docId: '123'
    });
    // Turn the returned bytes into a downloadable file
    const blob = new Blob([new Uint8Array(pdfBytes)], { type: 'application/pdf' });
    const url = URL.createObjectURL(blob);
    const link = document.createElement('a');
    link.href = url;
    link.download = 'document.pdf';
    link.click();
    URL.revokeObjectURL(url);
  };
  
  return (
    <div>
      <textarea 
        value={content} 
        onChange={(e) => setContent(e.target.value)}
        className="w-full h-96 p-4"
      />
      <button onClick={saveDocument}>Save</button>
      <button onClick={exportPDF}>Export PDF</button>
    </div>
  );
}

Important: Tauri APIs require client-side execution. Always use the 'use client' directive.
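Because the same Next.js bundle can also be served as a plain web app, it helps to guard Tauri calls behind a runtime check. A minimal sketch, assuming the Tauri 2 @tauri-apps/api package (which exposes isTauri() from its core module); the /api/documents fallback endpoint is a hypothetical web route, not part of the template:

'use client'
// Sketch: call the Rust command inside the desktop shell, fall back to a web API route otherwise.
import { invoke, isTauri } from '@tauri-apps/api/core';

export async function saveDocumentHybrid(title: string, content: string): Promise<string> {
  if (isTauri()) {
    // Desktop build: invoke the process_document command defined in src-tauri
    return invoke<string>('process_document', { title, content });
  }
  // Browser build: post to a hypothetical web backend route instead
  const res = await fetch('/api/documents', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title, content }),
  });
  if (!res.ok) throw new Error(`Save failed: ${res.status}`);
  return res.text();
}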

Reference template: kvnxiao/tauri-nextjs-template (GitHub).


3. Rich Text Editor Selection

Comprehensive Comparison

Editor | Bundle Size | Features | Maintenance | Recommendation
--- | --- | --- | --- | ---
TipTap | ~120KB | 5/5 | Very active | Primary choice
Lexical | 150-200KB | 5/5 | Very active | Alternative
Novel | 150KB+ | 4/5 | Active | Notion-style UI
Slate | ~150KB | 4/5 | Active | Custom builds
Quill | ~180KB | 4/5 | Active | Simple use cases

Primary Recommendation: TipTap

Why TipTap?

Installation

npm install @tiptap/react @tiptap/starter-kit @tiptap/extension-image @tiptap/extension-table @tiptap/extension-table-row @tiptap/extension-table-cell @tiptap/extension-code-block-lowlight

Next.js 15 Implementation

// app/components/Editor.tsx
'use client'
import { useEditor, EditorContent } from '@tiptap/react'
import StarterKit from '@tiptap/starter-kit'
import Image from '@tiptap/extension-image'
import Table from '@tiptap/extension-table'
import TableRow from '@tiptap/extension-table-row'
import TableCell from '@tiptap/extension-table-cell'
import TableHeader from '@tiptap/extension-table-header'
import CodeBlockLowlight from '@tiptap/extension-code-block-lowlight'
import { common, createLowlight } from 'lowlight'

const lowlight = createLowlight(common)

export function Editor({ content, onChange }) {
  const editor = useEditor({
    extensions: [
      StarterKit.configure({
        codeBlock: false,
      }),
      Image.configure({
        inline: true,
        allowBase64: true,
      }),
      Table.configure({
        resizable: true,
      }),
      TableRow,
      TableCell,
      TableHeader,
      CodeBlockLowlight.configure({
        lowlight,
      }),
    ],
    content,
    onUpdate: ({ editor }) => {
      onChange(editor.getHTML())
    },
    immediatelyRender: false, // Important for Next.js SSR
  })

  if (!editor) {
    return null
  }

  return (
    <div className="border rounded-lg">
      <Toolbar editor={editor} />
      <EditorContent editor={editor} className="prose max-w-none p-4" />
    </div>
  )
}

Export Capabilities
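
The core editor instance already exposes the interchange formats the export pipeline needs; a minimal sketch of pulling them out before handing content to the backend (the helper name is illustrative):

'use client'
// Sketch: snapshot TipTap content in the formats the export pipeline consumes.
// getHTML(), getJSON(), and getText() are core TipTap editor methods.
import type { Editor } from '@tiptap/react'

export function snapshotForExport(editor: Editor) {
  return {
    html: editor.getHTML(),   // feed to the HTML/PDF exporters
    json: editor.getJSON(),   // lossless ProseMirror document, useful for versioning
    text: editor.getText(),   // plain text for word counts or Markdown pipelines
  }
}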

Alternative: Lexical (Meta)

For Meta ecosystem integration or bleeding-edge features:

'use client'
import { LexicalComposer } from '@lexical/react/LexicalComposer'
import { RichTextPlugin } from '@lexical/react/LexicalRichTextPlugin'
import { ContentEditable } from '@lexical/react/LexicalContentEditable'
import LexicalErrorBoundary from '@lexical/react/LexicalErrorBoundary'

export function LexicalEditor() {
  const initialConfig = {
    namespace: 'ContentEditor',
    theme: {},
    onError: (error) => console.error(error)
  }

  return (
    <LexicalComposer initialConfig={initialConfig}>
      <RichTextPlugin
        contentEditable={<ContentEditable className="editor-content" />}
        placeholder={<div>Start writing...</div>}
        ErrorBoundary={LexicalErrorBoundary}
      />
    </LexicalComposer>
  )
}

4. Database Architecture: SQLAlchemy 2.0 + SQLite

Modern Schema Design

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
from sqlalchemy import String, Text, DateTime, Integer, ForeignKey, JSON, Index
from datetime import datetime
from typing import Optional, List

class Base(DeclarativeBase):
    pass

class Document(Base):
    __tablename__ = "documents"
    __table_args__ = (
        Index('ix_documents_created_at', 'created_at'),
        Index('ix_documents_title', 'title'),
    )
    
    id: Mapped[int] = mapped_column(primary_key=True)
    uuid: Mapped[str] = mapped_column(String(36), unique=True)
    title: Mapped[str] = mapped_column(String(500))
    content: Mapped[str] = mapped_column(Text)
    content_type: Mapped[str] = mapped_column(String(50), default="markdown")
    
    # Timestamps
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)
    updated_at: Mapped[datetime] = mapped_column(
        default=datetime.utcnow,
        onupdate=datetime.utcnow
    )
    
    # Soft delete
    deleted_at: Mapped[Optional[datetime]] = mapped_column(default=None)
    is_deleted: Mapped[bool] = mapped_column(default=False)
    
    # Metadata
    metadata_: Mapped[Optional[dict]] = mapped_column(JSON, name="metadata")
    
    # Folder organization
    folder_id: Mapped[Optional[int]] = mapped_column(ForeignKey("folders.id"))
    folder: Mapped[Optional["Folder"]] = relationship(back_populates="documents")
    
    # Version control
    versions: Mapped[List["DocumentVersion"]] = relationship(
        back_populates="document",
        cascade="all, delete-orphan",
        order_by="DocumentVersion.version_number.desc()"
    )

class DocumentVersion(Base):
    __tablename__ = "document_versions"
    __table_args__ = (
        Index('ix_doc_versions_document_id', 'document_id'),
        Index('ix_doc_versions_created_at', 'created_at'),
    )
    
    id: Mapped[int] = mapped_column(primary_key=True)
    document_id: Mapped[int] = mapped_column(ForeignKey("documents.id"))
    version_number: Mapped[int]
    
    # Snapshot
    title: Mapped[str] = mapped_column(String(500))
    content: Mapped[str] = mapped_column(Text)
    
    # Metadata
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)
    change_description: Mapped[Optional[str]] = mapped_column(Text)
    changeset: Mapped[Optional[dict]] = mapped_column(JSON)
    
    document: Mapped["Document"] = relationship(back_populates="versions")

class Folder(Base):
    __tablename__ = "folders"
    
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(255))
    parent_id: Mapped[Optional[int]] = mapped_column(ForeignKey("folders.id"))
    path: Mapped[str] = mapped_column(String(1000))
    
    parent: Mapped[Optional["Folder"]] = relationship(
        remote_side=[id],
        back_populates="children"
    )
    children: Mapped[List["Folder"]] = relationship(back_populates="parent")
    documents: Mapped[List["Document"]] = relationship(back_populates="folder")

Async SQLAlchemy Setup

from sqlalchemy.ext.asyncio import (
    create_async_engine,
    AsyncSession,
    async_sessionmaker,
    AsyncAttrs
)
from sqlalchemy import event

DATABASE_URL = "sqlite+aiosqlite:///./content.db"

engine = create_async_engine(
    DATABASE_URL,
    echo=False,
    connect_args={"check_same_thread": False}
)

async_session_maker = async_sessionmaker(
    engine,
    class_=AsyncSession,
    expire_on_commit=False  # Critical for local-first apps
)

# SQLite optimizations
@event.listens_for(engine.sync_engine, "connect")
def set_sqlite_pragma(dbapi_conn, connection_record):
    cursor = dbapi_conn.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.execute("PRAGMA journal_mode=WAL")  # Write-Ahead Logging
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.execute("PRAGMA cache_size=-64000")  # 64MB cache
    cursor.execute("PRAGMA temp_store=MEMORY")
    cursor.close()

CRUD Operations

from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import selectinload

async def create_document(
    session: AsyncSession,
    title: str,
    content: str
) -> Document:
    document = Document(title=title, content=content)
    session.add(document)
    await session.commit()
    await session.refresh(document)
    return document

async def get_document_with_versions(
    session: AsyncSession,
    doc_id: int
) -> Document | None:
    stmt = select(Document).options(
        selectinload(Document.versions)
    ).where(Document.id == doc_id)
    result = await session.execute(stmt)
    return result.scalar_one_or_none()

async def update_document(
    session: AsyncSession,
    doc_id: int,
    title: str | None = None,
    content: str | None = None,
    create_version: bool = True
) -> Document | None:
    # Load versions eagerly so the version count below does not trigger lazy IO
    document = await get_document_with_versions(session, doc_id)
    if not document:
        return None
    
    # Create version before update
    if create_version:
        version_num = len(document.versions) + 1
        version = DocumentVersion(
            document_id=document.id,
            version_number=version_num,
            title=document.title,
            content=document.content
        )
        session.add(version)
    
    if title:
        document.title = title
    if content:
        document.content = content
    
    await session.commit()
    await session.refresh(document)
    return document

Migration Management with Alembic

pip install alembic
alembic init alembic
alembic revision --autogenerate -m "Initial schema"
alembic upgrade head

5. Document Export Pipeline

Format | Library | Purpose
--- | --- | ---
PDF | WeasyPrint, Playwright | HTML to PDF conversion
DOCX | python-docx, docxtpl | Word document generation
PPTX | python-pptx | Presentation creation
Markdown | Built-in | Text export
HTML | Jinja2 | Template rendering

Multi-Format Exporter

from dataclasses import dataclass
from enum import Enum
from typing import Dict, Any

class OutputFormat(Enum):
    PDF = "pdf"
    DOCX = "docx"
    PPTX = "pptx"
    HTML = "html"
    MARKDOWN = "md"

@dataclass
class DocumentData:
    title: str
    content: str
    metadata: Dict[str, Any]

class MultiFormatExporter:
    def __init__(self, template_dir="templates"):
        self.template_dir = template_dir
        from jinja2 import Environment, FileSystemLoader
        self.jinja_env = Environment(
            loader=FileSystemLoader(template_dir)
        )
    
    def export(self, data: DocumentData, format: OutputFormat, 
               output_path: str):
        exporters = {
            OutputFormat.PDF: self._export_pdf,
            OutputFormat.DOCX: self._export_docx,
            OutputFormat.PPTX: self._export_pptx,
            OutputFormat.HTML: self._export_html,
            OutputFormat.MARKDOWN: self._export_markdown
        }
        
        exporter = exporters.get(format)
        if not exporter:
            raise ValueError(f"Unsupported format: {format}")
        
        return exporter(data, output_path)
    
    def _export_pdf(self, data: DocumentData, output_path: str):
        """Export to PDF using WeasyPrint"""
        from weasyprint import HTML
        
        template = self.jinja_env.get_template('document.html')
        html_content = template.render(
            title=data.title,
            content=data.content,
            **data.metadata
        )
        
        HTML(string=html_content).write_pdf(output_path)
        return output_path
    
    def _export_docx(self, data: DocumentData, output_path: str):
        """Export to DOCX using python-docx"""
        from docx import Document
        
        doc = Document()
        doc.add_heading(data.title, 0)
        doc.add_paragraph(data.content)
        
        doc.core_properties.title = data.title
        doc.core_properties.author = data.metadata.get('author', '')
        
        doc.save(output_path)
        return output_path
    
    def _export_pptx(self, data: DocumentData, output_path: str):
        """Export to PPTX using python-pptx"""
        from pptx import Presentation
        
        prs = Presentation()
        
        # Title slide
        title_slide = prs.slides.add_slide(prs.slide_layouts[0])
        title_slide.shapes.title.text = data.title
        
        # Content slide
        content_slide = prs.slides.add_slide(prs.slide_layouts[1])
        content_slide.shapes.title.text = "Content"
        content_slide.placeholders[1].text = data.content
        
        prs.save(output_path)
        return output_path
    
    def _export_html(self, data: DocumentData, output_path: str):
        template = self.jinja_env.get_template('document.html')
        html_content = template.render(
            title=data.title,
            content=data.content,
            **data.metadata
        )
        
        with open(output_path, 'w', encoding='utf-8') as f:
            f.write(html_content)
        
        return output_path
    
    def _export_markdown(self, data: DocumentData, output_path: str):
        markdown_content = f"""# {data.title}

{data.content}

---
Author: {data.metadata.get('author', 'Unknown')}
Date: {data.metadata.get('date', 'N/A')}
"""
        
        with open(output_path, 'w', encoding='utf-8') as f:
            f.write(markdown_content)
        
        return output_path

MCP Tool for Export

@mcp.tool()
async def export_document(
    doc_id: str, 
    format: str,
    ctx: Context
) -> dict:
    """Export document to specified format"""
    
    exporter = MultiFormatExporter()
    
    # Fetch document from database
    document = await get_document_from_db(doc_id)
    
    data = DocumentData(
        title=document.title,
        content=document.content,
        metadata={"author": "User", "date": str(datetime.now())}
    )
    
    output_path = f"/tmp/{doc_id}.{format}"
    
    await ctx.info(f"Exporting to {format}")
    result = exporter.export(data, OutputFormat(format), output_path)
    await ctx.report_progress(1.0, 1.0)
    
    return {
        "success": True,
        "path": result,
        "format": format
    }

6. Multi-Provider LLM Integration

Primary Recommendation: LiteLLM

Why LiteLLM?

Installation

pip install litellm

Unified Interface

from litellm import completion, acompletion
import os

# Set API keys
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
os.environ["XAI_API_KEY"] = "xai-..."
os.environ["GOOGLE_API_KEY"] = "AIza..."

messages = [{"content": "Write an article about AI", "role": "user"}]

# Unified interface across all providers
response = completion(model="openai/gpt-4o", messages=messages)
response = completion(model="anthropic/claude-sonnet-4-5-20250929", messages=messages)
response = completion(model="xai/grok-2-latest", messages=messages)
response = completion(model="vertex_ai/gemini-2.5-flash", messages=messages)

Multi-Provider Manager

import os
from typing import Optional, Dict, Any
from litellm import completion, acompletion
from tenacity import retry, stop_after_attempt, wait_exponential
from enum import Enum
from dataclasses import dataclass

class Provider(Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GOOGLE = "google"
    XAI = "xai"

@dataclass
class ModelConfig:
    provider: Provider
    model_name: str
    max_tokens: int = 1024
    temperature: float = 0.7

class MultiProviderLLM:
    """Unified interface for multiple LLM providers"""
    
    def __init__(self, api_keys: Optional[Dict[str, str]] = None):
        self.api_keys = api_keys or {}
        self._setup_environment()
    
    def _setup_environment(self):
        key_mapping = {
            Provider.OPENAI: "OPENAI_API_KEY",
            Provider.ANTHROPIC: "ANTHROPIC_API_KEY",
            Provider.GOOGLE: "GOOGLE_API_KEY",
            Provider.XAI: "XAI_API_KEY"
        }
        
        for provider, env_var in key_mapping.items():
            if provider.value in self.api_keys:
                os.environ[env_var] = self.api_keys[provider.value]
    
    def _build_model_string(self, config: ModelConfig) -> str:
        provider_models = {
            Provider.OPENAI: f"openai/{config.model_name}",
            Provider.ANTHROPIC: f"anthropic/{config.model_name}",
            Provider.GOOGLE: f"vertex_ai/{config.model_name}",
            Provider.XAI: f"xai/{config.model_name}"
        }
        return provider_models[config.provider]
    
    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=2, max=60)
    )
    async def agenerate(self, config: ModelConfig, messages: list, stream: bool = False, **kwargs):
        model_string = self._build_model_string(config)
        return await acompletion(
            model=model_string,
            messages=messages,
            max_tokens=config.max_tokens,
            temperature=config.temperature,
            stream=stream,
            **kwargs
        )
    
    async def astream_generate(self, config: ModelConfig, messages: list, **kwargs):
        response = await self.agenerate(config, messages, stream=True, **kwargs)
        async for chunk in response:
            if chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content

MCP Integration

llm = MultiProviderLLM(api_keys={
    "openai": os.getenv("OPENAI_API_KEY"),
    "anthropic": os.getenv("ANTHROPIC_API_KEY")
})

@mcp.tool()
async def generate_content(
    prompt: str,
    provider: str,
    model: str,
    ctx: Context
) -> str:
    """Generate content using specified LLM provider"""
    
    config = ModelConfig(
        provider=Provider(provider),
        model_name=model,
        temperature=0.7
    )
    
    messages = [{"role": "user", "content": prompt}]
    
    await ctx.info(f"Generating with {provider}/{model}")
    response = await llm.agenerate(config, messages)
    
    return response.choices[0].message.content

Streaming to Client

// Next.js API Route for streaming
import { NextRequest } from 'next/server';

export async function POST(request: NextRequest) {
  const { prompt, provider, model } = await request.json();
  
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      try {
        const response = await fetch('http://localhost:8000/mcp/stream', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ prompt, provider, model })
        });
        
        const reader = response.body?.getReader();
        if (!reader) return;
        
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          controller.enqueue(value);
        }
        
        controller.close();
      } catch (error) {
        controller.error(error);
      }
    }
  });
  
  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' }
  });
}
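On the client, the stream can be read incrementally with the standard ReadableStream reader. A minimal sketch; the /api/generate path and the onChunk callback are assumptions about how the route above is mounted and consumed:

'use client'
// Sketch: consume the streaming route above and surface chunks as they arrive.
export async function streamCompletion(
  prompt: string,
  provider: string,
  model: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, provider, model }),
  });
  if (!res.body) throw new Error('No response body');

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));  // append to editor or preview state
  }
}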

7. UI/UX Architecture

Why shadcn/ui?

Installation

pnpm dlx shadcn@latest init

Add Essential Components

pnpm dlx shadcn@latest add button input textarea card dialog form dropdown-menu select

Mobile-First Layout

// components/responsive-layout.tsx
export function ResponsiveLayout({ children }: { children: React.ReactNode }) {
  return (
    <div className="min-h-screen bg-gray-50 dark:bg-gray-900">
      {/* Header */}
      <header className="sticky top-0 z-50 border-b bg-white dark:bg-gray-800">
        <div className="container mx-auto flex h-16 items-center justify-between px-4 sm:px-6 lg:px-8">
          <h1 className="text-lg sm:text-xl font-bold">Content Platform</h1>
          <nav className="flex gap-2">
            {/* Navigation items */}
          </nav>
        </div>
      </header>
      
      {/* Main Content */}
      <main className="container mx-auto p-4 sm:p-6 lg:p-8">
        {children}
      </main>
    </div>
  )
}

// Responsive grid
export function DocumentGrid({ documents }) {
  return (
    <div className="
      grid gap-4
      grid-cols-1
      sm:grid-cols-2 sm:gap-6
      lg:grid-cols-3 lg:gap-8
      xl:grid-cols-4
    ">
      {documents.map(doc => (
        <DocumentCard key={doc.id} document={doc} />
      ))}
    </div>
  )
}

Dark Mode Implementation

pnpm add next-themes

Provider Setup

// components/theme-provider.tsx
"use client"

import * as React from "react"
import { ThemeProvider as NextThemesProvider } from "next-themes"

export function ThemeProvider({
  children,
  ...props
}: React.ComponentProps<typeof NextThemesProvider>) {
  return <NextThemesProvider {...props}>{children}</NextThemesProvider>
}

Root Layout

// app/layout.tsx
import { ThemeProvider } from "@/components/theme-provider"

export default function RootLayout({ children }) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body>
        <ThemeProvider
          attribute="class"
          defaultTheme="system"
          enableSystem
          disableTransitionOnChange
        >
          {children}
        </ThemeProvider>
      </body>
    </html>
  )
}

Theme Toggle

"use client"

import { Moon, Sun } from "lucide-react"
import { useTheme } from "next-themes"
import { Button } from "@/components/ui/button"

export function ThemeToggle() {
  const { theme, setTheme } = useTheme()

  return (
    <Button
      variant="ghost"
      size="icon"
      onClick={() => setTheme(theme === "dark" ? "light" : "dark")}
    >
      <Sun className="h-5 w-5 rotate-0 scale-100 transition-all dark:-rotate-90 dark:scale-0" />
      <Moon className="absolute h-5 w-5 rotate-90 scale-0 transition-all dark:rotate-0 dark:scale-100" />
      <span className="sr-only">Toggle theme</span>
    </Button>
  )
}

Content Editor UI

// app/editor/[id]/page.tsx
import { Editor } from "@/components/editor"
import { Button } from "@/components/ui/button"
import { Card } from "@/components/ui/card"

export default async function EditorPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  return (
    <div className="min-h-screen">
      {/* Toolbar */}
      <div className="border-b bg-white dark:bg-gray-800 sticky top-0 z-40">
        <div className="container mx-auto flex items-center justify-between p-4">
          <div className="flex gap-2">
            <Button variant="outline" size="sm">
              <Save className="h-4 w-4 mr-2" />
              Save
            </Button>
            <Button variant="outline" size="sm">
              Export
            </Button>
          </div>
        </div>
      </div>
      
      {/* Editor */}
      <div className="container mx-auto max-w-4xl py-8">
        <Card className="p-6">
          <Editor documentId={id} />
        </Card>
      </div>
    </div>
  )
}

Tailwind Configuration

// tailwind.config.js
module.exports = {
  darkMode: 'selector',
  content: [
    './src/app/**/*.{js,ts,jsx,tsx,mdx}',
    './src/components/**/*.{js,ts,jsx,tsx,mdx}',
  ],
  theme: {
    extend: {
      colors: {
        brand: {
          50: '#f0f9ff',
          // ... custom palette
        },
      },
    },
  },
  plugins: [
    require('@tailwindcss/typography'),
    require('@tailwindcss/forms'),
  ],
}

8. Project Structure

Why Monorepo?

Structure

content-platform/
├── apps/
│   ├── web/                    # Next.js 15 frontend
│   │   ├── app/
│   │   ├── components/
│   │   ├── lib/
│   │   ├── package.json
│   │   └── next.config.mjs
│   └── desktop/                # Tauri wrapper
│       ├── src-tauri/
│       └── package.json
├── packages/
│   ├── python-mcp/             # Python MCP server
│   │   ├── src/
│   │   │   ├── server.py
│   │   │   ├── tools/
│   │   │   └── database.py
│   │   ├── pyproject.toml
│   │   └── uv.lock
│   ├── shared-types/           # Shared TypeScript types
│   └── ui/                     # Shared components
├── turbo.json
├── package.json
├── pnpm-workspace.yaml
└── README.md

Root package.json

{
  "name": "content-platform",
  "private": true,
  "scripts": {
    "dev": "turbo run dev --parallel",
    "build": "turbo run build",
    "lint": "turbo run lint"
  },
  "devDependencies": {
    "turbo": "^2.0.0"
  }
}

turbo.json

{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "dev": {
      "cache": false,
      "persistent": true
    },
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**", "out/**"]
    },
    "lint": {
      "dependsOn": ["^lint"]
    }
  }
}

Why UV?

Setup

curl -LsSf https://astral.sh/uv/install.sh | sh
cd packages/python-mcp
uv init
uv add "mcp[cli]" fastmcp python-dotenv sqlalchemy[asyncio] aiosqlite

pyproject.toml

[project]
name = "content-mcp-server"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "mcp[cli]>=1.4.0",
    "fastmcp>=2.0.0",
    "python-dotenv>=1.0.0",
    "sqlalchemy[asyncio]>=2.0.0",
    "aiosqlite>=0.20.0",
    "weasyprint>=62.0",
    "python-docx>=1.2.0",
    "litellm>=1.0.0"
]

[tool.uv]
dev-dependencies = [
    "pytest>=8.0.0",
    "ruff>=0.4.0"
]

9. Communication Between Frontend and MCP Server

Architecture Options

Option 1: Direct HTTP API (Recommended)

Next.js API Route → Python MCP Server

// app/api/mcp/route.ts
import { NextRequest, NextResponse } from 'next/server';

const MCP_SERVER_URL = process.env.MCP_SERVER_URL || 'http://localhost:8000';

export async function POST(request: NextRequest) {
  const body = await request.json();
  
  const response = await fetch(`${MCP_SERVER_URL}/mcp`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  
  return NextResponse.json(await response.json());
}

Option 2: WebSocket for Real-Time

For collaborative editing, real-time updates

# Python MCP server with WebSocket
from fastapi import FastAPI, WebSocket
from mcp.server.fastmcp import FastMCP

app = FastAPI()
mcp = FastMCP("Content Server")

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_json()
        # Process the message and reply (handle_realtime_message is a placeholder for your handler)
        response = await handle_realtime_message(data)
        await websocket.send_json(response)

# Mount MCP
app.mount("/mcp", mcp.streamable_http_app())
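
The matching browser side is a thin WebSocket client. A minimal sketch; the { type, payload } message shape and the applyRemoteChange handler are illustrative, not a fixed protocol:

'use client'
// Sketch: connect to the /ws endpoint above and hand incoming messages to the caller.
export function connectRealtime(onMessage: (msg: unknown) => void): WebSocket {
  const ws = new WebSocket('ws://localhost:8000/ws');
  ws.onmessage = (event) => onMessage(JSON.parse(event.data));
  ws.onclose = () => console.warn('Realtime connection closed');
  return ws;
}

// Example usage from a component effect:
// const ws = connectRealtime((msg) => applyRemoteChange(msg));  // applyRemoteChange is your handler
// ws.send(JSON.stringify({ type: 'cursor', payload: { line: 12, column: 4 } }));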

State Management: Zustand

Why Zustand?

// lib/store/document-store.ts
import { create } from 'zustand';
import { devtools, persist } from 'zustand/middleware';

interface Document {
  id: string;
  title: string;
  content: string;
  updatedAt: Date;
}

interface DocumentState {
  documents: Map<string, Document>;
  currentDocumentId: string | null;
  
  loadDocument: (id: string) => Promise<void>;
  updateDocument: (id: string, content: string) => Promise<void>;
  saveDocument: (id: string) => Promise<void>;
}

export const useDocumentStore = create<DocumentState>()(
  devtools(
    persist(
      (set, get) => ({
        documents: new Map(),
        currentDocumentId: null,
        
        loadDocument: async (id) => {
          const response = await fetch(`/api/documents/${id}`);
          const doc = await response.json();
          
          set((state) => ({
            documents: new Map(state.documents).set(id, doc),
            currentDocumentId: id
          }));
        },
        
        updateDocument: async (id, content) => {
          const doc = get().documents.get(id);
          if (!doc) return;
          
          const updated = { ...doc, content, updatedAt: new Date() };
          
          set((state) => ({
            documents: new Map(state.documents).set(id, updated)
          }));
        },
        
        saveDocument: async (id) => {
          const doc = get().documents.get(id);
          if (!doc) return;
          
          await fetch(`/api/documents/${id}`, {
            method: 'PUT',
            body: JSON.stringify(doc)
          });
        }
      }),
      { name: 'document-storage' }
    )
  )
);

File Upload Handling

Client Side

// components/image-uploader.tsx
'use client'
import { useState } from 'react';

export function ImageUploader({ onUpload }) {
  const [uploading, setUploading] = useState(false);
  
  const handleUpload = async (file: File) => {
    setUploading(true);
    
    const formData = new FormData();
    formData.append('file', file);
    
    try {
      const response = await fetch('/api/upload', {
        method: 'POST',
        body: formData
      });
      
      const { url } = await response.json();
      onUpload(url);
    } finally {
      setUploading(false);
    }
  };
  
  return (
    <input
      type="file"
      accept="image/*"
      onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])}
      disabled={uploading}
    />
  );
}

Server Side

// app/api/upload/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { writeFile } from 'fs/promises';
import { join } from 'path';

export async function POST(request: NextRequest) {
  const formData = await request.formData();
  const file = formData.get('file') as File;
  
  // Validate
  if (!file) {
    return NextResponse.json({ error: 'No file' }, { status: 400 });
  }
  
  if (file.size > 5 * 1024 * 1024) { // 5MB limit
    return NextResponse.json({ error: 'File too large' }, { status: 400 });
  }
  
  // Save
  const bytes = await file.arrayBuffer();
  const buffer = Buffer.from(bytes);
  
  const path = join(process.cwd(), 'public/uploads', file.name);
  await writeFile(path, buffer);
  
  const url = `/uploads/${file.name}`;
  
  return NextResponse.json({ url });
}

10. Setup & Deployment

Complete Initialization

# 1. Create monorepo
mkdir content-platform && cd content-platform
pnpm init
pnpm add -D turbo

# 2. Setup Next.js
mkdir -p apps/web
cd apps/web
npx create-next-app@latest . --typescript --app --tailwind --use-pnpm
cd ../..

# 3. Setup Python MCP
mkdir -p packages/python-mcp
cd packages/python-mcp
uv init
uv add "mcp[cli]" fastmcp python-dotenv sqlalchemy[asyncio] aiosqlite
cd ../..

# 4. Setup Tauri
cd apps
mkdir desktop && cd desktop
pnpm add -D @tauri-apps/cli
pnpm tauri init

# 5. Initialize shadcn/ui
cd ../web
pnpm dlx shadcn@latest init
pnpm dlx shadcn@latest add button input textarea card dialog form

# 6. Install all dependencies
cd ../../
pnpm install

Development Workflow

# Start all services
pnpm dev

# Or individually
pnpm dev --filter=web          # Next.js only
pnpm dev --filter=python-mcp   # MCP server only
pnpm dev --filter=desktop      # Tauri only

Building for Production

Next.js

cd apps/web
pnpm build

Tauri Desktop

macOS:

cd apps/desktop
pnpm tauri build -- --target universal-apple-darwin

Windows:

pnpm tauri build -- --target x86_64-pc-windows-msvc

Linux:

pnpm tauri build

Distribution Strategy

Desktop Apps:

Auto-Updates:

In Tauri 2 the updater ships as a plugin; enable updater artifacts and configure it in tauri.conf.json:

{
  "bundle": {
    "createUpdaterArtifacts": true
  },
  "plugins": {
    "updater": {
      "endpoints": ["https://your-update-server.com/releases"],
      "pubkey": "YOUR_PUBLIC_KEY"
    }
  }
}
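
On the frontend, the update check goes through the updater plugin's JavaScript API. A minimal sketch, assuming @tauri-apps/plugin-updater and @tauri-apps/plugin-process are installed and registered in src-tauri:

// Sketch: check for, install, and apply a pending update from the desktop build.
import { check } from '@tauri-apps/plugin-updater';
import { relaunch } from '@tauri-apps/plugin-process';

export async function applyPendingUpdate(): Promise<void> {
  const update = await check();        // resolves to null when already up to date
  if (!update) return;
  await update.downloadAndInstall();   // download the new bundle and install it
  await relaunch();                    // restart into the updated version
}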

GitHub Actions for Releases:

# .github/workflows/release.yml
name: Release
on:
  push:
    tags:
      - 'v*'

jobs:
  release:
    strategy:
      matrix:
        platform: [macos-latest, ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.platform }}
    
    steps:
      - uses: actions/checkout@v3
      - uses: pnpm/action-setup@v2
      - uses: actions/setup-node@v3
      - uses: dtolnay/rust-toolchain@stable
      
      - name: Install dependencies
        run: pnpm install
      
      - name: Build Tauri app
        uses: tauri-apps/tauri-action@v0
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tagName: ${{ github.ref_name }}
          releaseName: 'Release ${{ github.ref_name }}'

Complete Architecture Diagram

+--------------------------------------------------------------+
|                     User Interface Layer                     |
|                                                              |
|  Next.js 15 + React 19 (App Router)                          |
|    - TipTap rich text editor                                 |
|    - shadcn/ui component library                             |
|    - Tailwind CSS styling                                    |
|    - Zustand state management                                |
|    - Dark mode via next-themes                               |
|                                                              |
|  Tauri 2.0 desktop wrapper (optional)                        |
+--------------------------------------------------------------+
                          | HTTP/WebSocket
+--------------------------------------------------------------+
|                    Application Layer                         |
|                                                              |
|  Python MCP server                                           |
|    - FastMCP framework                                       |
|    - Async tool registry (generate, export, save)            |
|    - Streamable HTTP transport                               |
|                                                              |
|  Multi-provider LLM manager                                  |
|    - LiteLLM interface                                       |
|    - Retries, rate limiting, streaming                       |
+--------------------------------------------------------------+
                          |
+--------------------------------------------------------------+
|                      Data Layer                              |
|                                                              |
|  SQLAlchemy 2.0 + SQLite (async, WAL)                        |
|    - Documents, versions, folders                            |
|                                                              |
|  Document export pipeline                                    |
|    - PDF (WeasyPrint)                                        |
|    - DOCX (python-docx)                                      |
|    - PPTX (python-pptx)                                      |
+--------------------------------------------------------------+

Technology Stack Summary

Frontend

Backend

Desktop

Development


Key Recommendations

1. Project Structure

Use Turborepo monorepo for unified workflow and code sharing

2. Python Tooling

Use UV (10-100x faster than pip, built-in Python management)

3. Rich Text Editor

Use TipTap (best balance of features, performance, extensibility)

4. UI Components

Use shadcn/ui (full customization, excellent accessibility)

5. State Management

Use Zustand (minimal boilerplate, great performance)

6. LLM Integration

Use LiteLLM (unified interface, 100+ providers, built-in retry)

7. Database

SQLAlchemy 2.0 + SQLite with async, WAL mode, versioning

8. Document Export

Multi-format pipeline with WeasyPrint (PDF), python-docx (DOCX)

9. Desktop Distribution

Tauri 2.0 (smallest bundle, native performance)

10. Development Workflow

Turbo for parallel dev servers, GitHub Actions for releases


Timeline Estimate

Total: 4-7 weeks for MVP


MCP

Next.js & Tauri

UI/UX

Rich Text

Backend

Tooling


Implementation Snapshot

The repository now reflects the architecture above. Key components:

End-to-End Workflow

pnpm install                     # install all workspaces
pnpm --filter web dev            # run web client
pnpm --filter desktop dev        # launch Tauri shell (spawns web dev server)

# Python MCP server
cd packages/python-mcp
python -m venv .venv && source .venv/bin/activate   # Windows PowerShell: .\.venv\Scripts\Activate.ps1
pip install -e .[dev]
cp .env.example .env
python -m penpal_mcp.server

Exported documents follow the design system. To package the desktop app, run pnpm --filter web build, then pnpm --filter desktop build. Stored documents persist in sqlite+aiosqlite:///./penpal_write_spot.db by default, and export artifacts land in packages/python-mcp/exports/.

Container Orchestration

A Docker Compose workflow now launches the web client and MCP server together.

docker compose up --build

Named volumes cache pnpm’s store and node modules to keep rebuilds fast, while the repository is mounted read/write for live editing. The Tauri desktop wrapper remains outside of Docker because it requires a native windowing environment; build it locally after exporting the web app.


Conclusion

This architecture provides a production-ready foundation for a professional hybrid content writing platform. The stack combines a Next.js 15 front end, a Python MCP server for tools and data, and an optional Tauri 2.0 desktop shell.

All recommendations are based on 2025 standards, official documentation, and proven production patterns. The monorepo structure with UV, Next.js 15, TipTap, and Tauri 2.0 is a state-of-the-art stack that optimizes for developer experience, performance, and maintainability.

Start building with this architecture and you’ll have a scalable, professional content creation platform ready for production deployment.