Fathership AI chat interface
Ruby on Rails · OpenAI · Hotwire · PostgreSQL · Tailwind CSS

Fathership

An AI-powered emotional support chatbot that acts as a warm, grounded father-figure mentor — offering judgment-free guidance, encouragement, and practical next steps.

Tech Stack

Ruby on Rails 7.1
Ruby 3.3
PostgreSQL
Redis
Hotwire (Turbo + Stimulus)
Tailwind CSS
OpenAI API
Devise
Docker

Project Overview

Fathership was a personal project I built in early 2024 as a vehicle for learning full-stack Rails development end to end — authentication, real-time UI, AI integration, and production deployment. The concept: an AI mentor that fills the role of a steady, supportive father figure, giving users a judgment-free space to talk through problems and leave with practical next steps. The app was built from scratch, deployed to production, and is now archived as a learning artifact.

The Challenge

Building a real-time chat app with a meaningful AI persona required more than wiring up an API — it meant designing a system prompt with genuine tone guidance, managing conversation context and cost, and handling failure states gracefully without leaving users stranded.

The Solution

Rails 7 with Hotwire for SPA-like interactivity without heavy JavaScript, a carefully crafted dynamic system prompt personalised to each user, configurable token and context limits for cost control, and safety guardrails that detect self-harm mentions and surface crisis resources.

The Outcome

A fully deployed, production-grade Rails app with user auth, persistent multi-thread chat history, real-time Turbo Stream updates, and graceful degradation when the OpenAI API is unavailable — all built as a solo learning project over a few months.

Project Visuals

Fathership chat interface showing AI conversation

The main chat interface with suggested prompts, real-time character counter, and persistent conversation history in the sidebar.

Development Process

Auth, Data Model & Core Chat

Started with Devise for user authentication and a straightforward data model: users own many chats, and each chat stores its full conversation history as JSON. Built the core create and update flow for chats with Turbo Stream responses, giving instant UI feedback without full page reloads; Rails's Hotwire stack makes real-time updates feel effortless.
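The data model above can be sketched in plain Ruby. This is a minimal illustration stripped of ActiveRecord so it runs standalone; in the real app `Chat` would be an ActiveRecord model (`belongs_to :user`) with a JSON column, and the class and method names here are assumptions, not the project's actual code.

```ruby
require "json"

# Illustrative stand-in for the Chat model: each chat keeps its full
# conversation history serialized as JSON, as described above.
class Chat
  attr_reader :title

  def initialize(title:)
    @title   = title
    @history = "[]" # serialized JSON, as it would sit in the database column
  end

  # Append one message (role + content) and re-serialize the history.
  def append_message(role:, content:)
    messages = JSON.parse(@history)
    messages << { "role" => role, "content" => content }
    @history = JSON.generate(messages)
  end

  # Deserialized view of the conversation, oldest first.
  def messages
    JSON.parse(@history)
  end
end

chat = Chat.new(title: "First chat")
chat.append_message(role: "user", content: "Rough day at work.")
chat.append_message(role: "assistant", content: "Tell me what happened.")
```

In the Rails controller, each append would be followed by a `format.turbo_stream` response that renders the new message partial into the chat frame, which is what gives the instant feedback without a page reload.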

AI Integration & Persona Design

Integrated the OpenAI API via the ruby-openai gem with a dynamic system prompt that personalises the mentor's voice to each user by name. Tuned the prompt with explicit tone rules — reflect, practical next step, gentle question — and capped responses at 70–150 words to keep conversations grounded. Added configurable context window depth, token limits, and retry logic with exponential backoff to keep costs predictable and the app resilient.
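The call path with retry and backoff might look like the sketch below. The request hash matches the ruby-openai gem's `client.chat(parameters: ...)` interface; the helper names, retry count, and backoff constants are illustrative assumptions rather than the project's actual code.

```ruby
MAX_RETRIES = 3 # assumed limit, not the app's real setting

# Retry a block with exponential backoff on any error, re-raising
# once the retry budget is exhausted.
def with_retries(max: MAX_RETRIES)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    raise if attempts >= max
    sleep(2**attempts * 0.1) # backoff: 0.2s, then 0.4s, ...
    retry
  end
end

# Send the trimmed conversation to the chat completions endpoint and
# return just the assistant's reply text.
def chat_completion(client, messages, model: "gpt-3.5-turbo", max_tokens: 200)
  with_retries do
    response = client.chat(
      parameters: { model: model, messages: messages, max_tokens: max_tokens }
    )
    response.dig("choices", 0, "message", "content")
  end
end

# In the real app the client would be built once:
#   client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
```

Keeping the retry logic in a generic `with_retries` helper means the same backoff behaviour covers any transient failure, not just this one endpoint.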

Safety, Polish & Deployment

Added pattern-matched self-harm detection that surfaces the 988 crisis hotline instead of a chatbot response. Built a Stimulus controller to handle auto-resizing text input, character counter feedback, keyboard shortcuts, and loading states. Containerised with Docker, added Redis for Action Cable in production, and deployed — including graceful degradation to canned supportive responses if the API key is missing or calls fail.
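The pattern-matched safety check can be sketched as a pure function that runs before any API call. The patterns and message wording below are illustrative assumptions; a real list would be broader and carefully reviewed.

```ruby
# Illustrative patterns only; the production list would be more thorough.
CRISIS_PATTERNS = [
  /\bkill myself\b/i,
  /\bsuicid/i,
  /\bself[- ]harm\b/i,
  /\bend my life\b/i
].freeze

CRISIS_RESPONSE =
  "I'm really glad you told me. You deserve support from a real person " \
  "right now: please call or text 988 (Suicide & Crisis Lifeline)."

# Returns the crisis resources message for sensitive input, nil otherwise.
# A nil result means the message is safe to forward to the AI.
def crisis_response_for(message)
  CRISIS_PATTERNS.any? { |p| p.match?(message) } ? CRISIS_RESPONSE : nil
end
```

Because the check runs first and short-circuits the OpenAI call entirely, a sensitive message never reaches the model, and the user sees the crisis resources immediately.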

Key Features

Personalised AI Persona

A dynamic system prompt builds a mentor voice tailored to each user by name, with explicit rules for tone, response length, and conversational structure — producing warm, consistent replies rather than generic AI responses.
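A personalised system prompt of this shape might be built as below. The exact wording is an assumption for illustration, not the production prompt; it mirrors the tone, length, and structure rules described above.

```ruby
# Build the mentor persona prompt, personalised to the user by name.
# The wording here is a hypothetical sketch of such a prompt.
def system_prompt_for(user_name)
  <<~PROMPT
    You are Fathership, a warm, grounded father-figure mentor talking with #{user_name}.
    Rules:
    - Reflect back what #{user_name} said before offering anything.
    - Offer one practical next step, never a lecture.
    - End with a gentle question.
    - Keep replies between 70 and 150 words. Never judge or moralise.
  PROMPT
end
```

The prompt is rebuilt per request, so a name change or future per-user settings flow through automatically without any stored prompt state.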

Real-Time Hotwire UI

Turbo Streams deliver message responses, chat creation, and deletion updates instantly without a full page reload. A Stimulus controller handles auto-resize, character limits, loading states, and keyboard shortcuts — all without reaching for a frontend framework.

Safety Guardrails

Pattern-matched self-harm detection intercepts sensitive messages and responds with crisis resources (988 hotline) instead of AI-generated content. The app never surfaces harmful, abusive, or manipulative guidance, and encourages professional support for medical, legal, or financial topics.

Cost-Conscious Architecture

Configurable context window depth, token limits, and model selection keep API costs predictable. Graceful degradation to canned supportive responses means the app remains functional even without an OpenAI key — and retry logic with exponential backoff handles transient failures silently.
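Those knobs can be sketched as environment-overridable settings with sane defaults. The variable names and default values below are assumptions; a Rails app might equally keep these in `config/` or encrypted credentials.

```ruby
# Cost-control settings, overridable via ENV. Defaults are illustrative.
AI_CONFIG = {
  model:         ENV.fetch("AI_MODEL", "gpt-3.5-turbo"),
  max_tokens:    Integer(ENV.fetch("AI_MAX_TOKENS", "300")),
  context_depth: Integer(ENV.fetch("AI_CONTEXT_DEPTH", "10")) # messages per request
}.freeze

# Trim the stored conversation to the configured context window
# before each API call, keeping only the most recent messages.
def context_window(messages, depth: AI_CONFIG[:context_depth])
  messages.last(depth)
end
```

Sending only the last N messages bounds the token count of every request, which is what makes the per-conversation cost predictable regardless of how long a chat runs.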

Interested in working together?

Feel free to reach out — I'm always open to discussing new projects and opportunities.