A modern learning platform that uses AI-driven Socratic questioning to deepen understanding and facilitate learning through structured dialogue.
- Runtime: Node.js 18.x
- Language: TypeScript
- Framework: Serverless Framework v4
- Database: PostgreSQL 15
- ORM: TypeORM
- AI: LangChain with OpenAI
- Search: SerpApi for web search
- Web Interface: Express with EJS templates
- Queue: ElasticMQ (SQS compatible)
- Monitoring: Custom metrics dashboard
- Type Safety: Zod schemas
- Storage: AWS S3 for search results
├── src/
│ ├── entities/ # TypeORM entities
│ │ ├── Topic.ts
│ │ ├── Question.ts
│ │ ├── Reflection.ts
│ │ ├── Clarification.ts
│ │ ├── QueryPreparation.ts
│ │ ├── SearchResult.ts
│ │ └── CrawlRequest.ts
│ ├── handlers/ # Queue handlers
│ │ ├── TopicHandler.ts
│ │ ├── QuestionHandler.ts
│ │ ├── ReflectionHandler.ts
│ │ ├── ClarificationHandler.ts
│ │ ├── QueryPreparationHandler.ts
│ │ └── SearchHandler.ts
│ ├── services/ # Core services
│ │ ├── OpenAIService.ts
│ │ ├── QueueService.ts
│ │ ├── LoggerService.ts
│ │ ├── MonitoringService.ts
│ │ ├── SerpApiService.ts
│ │ └── FireCrawlService.ts
│ ├── web/ # Web interface
│ │ ├── routes/
│ │ ├── views/
│ │ └── public/
│ ├── config/ # Configuration
│ ├── types/ # TypeScript types
│ └── utils/ # Utility functions
├── queue-config/ # Queue configuration
├── serverless.yml # Serverless config
└── docker-compose.yml # Docker services
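For orientation, here is a minimal sketch of what one of the TypeORM entities might look like; the column names and the relation to Question are assumptions, not the actual schema:

```typescript
// src/entities/Topic.ts - illustrative only; column names and relations are assumptions
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn, OneToMany } from 'typeorm';
import { Question } from './Question';

@Entity()
export class Topic {
  @PrimaryGeneratedColumn('uuid')
  id!: string;

  // Short title of the learning topic, e.g. "Bayesian inference"
  @Column()
  title!: string;

  // Free-form description used to seed the first round of Socratic questions
  @Column({ type: 'text', nullable: true })
  description!: string | null;

  // Questions generated for this topic (hypothetical relation)
  @OneToMany(() => Question, (question) => question.topic)
  questions!: Question[];

  @CreateDateColumn()
  createdAt!: Date;
}
```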
- Structured Socratic questioning using OpenAI
- Type-safe AI responses with Zod schemas
- Progressive learning paths
- Automated follow-up questions
- Intelligent web search using SerpApi
- Content crawling and analysis
- S3 storage for search results
- Webhook integration for async processing
- Real-time metrics dashboard
- Queue monitoring
- Learning progress visualization
- Interactive learning sessions
- Topic Creation: Initial learning topics
- Question Generation: AI-driven Socratic questions
- Reflection Analysis: Understanding assessment
- Clarification: Targeted follow-up questions
- Query Preparation: Research guidance
- Search: Web content discovery
- Crawl: Deep content analysis
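Each stage in this flow is handed off through a queue message consumed by the matching handler. As a rough sketch (the message shape and the handler's internals are assumptions), a stage handler running behind the SQS-compatible queue might look like this:

```typescript
// Illustrative stage handler; the handler name and message shape are assumptions.
import type { SQSEvent } from 'aws-lambda';

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // Each message carries the id of the entity produced by the previous stage.
    const { topicId } = JSON.parse(record.body);

    // Generate Socratic questions for the topic, persist them via TypeORM,
    // then enqueue the next stage (reflection analysis) on its queue.
    console.log(`Generating questions for topic ${topicId}`);
  }
};
```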
- Zod schemas for AI responses
- TypeScript throughout
- Runtime validation
- Structured data flow
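To make the type-safety bullets concrete, the sketch below pairs a Zod schema with LangChain's structured-output helper so the model's reply is validated at runtime; the schema fields, prompt, and model name are illustrative assumptions:

```typescript
// Sketch of a type-safe Socratic question call; schema fields, prompt,
// and model name are illustrative, not the project's actual definitions.
import { z } from 'zod';
import { ChatOpenAI } from '@langchain/openai';

// Runtime-validated shape for the model's reply.
const SocraticQuestionsSchema = z.object({
  questions: z.array(
    z.object({
      question: z.string(),
      intent: z.string(), // which gap in understanding the question probes
    })
  ),
});
type SocraticQuestions = z.infer<typeof SocraticQuestionsSchema>;

const model = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0.7 });

export async function generateQuestions(topic: string): Promise<SocraticQuestions> {
  // withStructuredOutput requests JSON matching the schema and parses/validates it.
  const structured = model.withStructuredOutput(SocraticQuestionsSchema);
  return structured.invoke(
    `Generate three Socratic questions that deepen understanding of: ${topic}`
  );
}
```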
The platform includes a custom automated evaluation framework for Socratic question generation using OpenAI Evals. This framework ensures reproducible, version-controlled evaluations of AI-generated questions.
The evaluation system is located in the /evals directory:
├── evals/
│ ├── evaluations.json # Defines evaluations (criteria, schema, test data, prompts)
│ ├── evaluation_hashes.json # Tracks changes to evaluation components
│ ├── evaluations_metadata.json # Stores metadata (UUIDs, file IDs, run IDs)
│ ├── index.ts # Entry point
│ ├── MetaDataConfigManager.ts # Manages evaluation metadata and hashes
│ ├── EvaluationManager.ts # Handles evaluation creation and runs
│ ├── evaluator.ts # Runs evaluations
│ ├── EvaluationSyncer.ts # Syncs changes and triggers updates
│ ├── JSONFileStorage.ts # Manages file storage
│ ├── EvaluationSystem.ts # Orchestrates the evaluation process
│ └── EvaluationLogger.ts # Logs evaluation events
- Define Evaluations: Edit `evaluations.json` to define your evaluation criteria, schema, test data, and prompts.
- Track Changes: The framework hashes each component (criteria, test data, schema, prompts) and tracks changes in `evaluation_hashes.json` (see the sketch after this list).
- Upload Test Data: Test data is converted to JSONL and uploaded to OpenAI as a file. The file ID is stored in `evaluations_metadata.json`.
- Create/Update Evaluations: The framework calls the OpenAI Evals API to create or update evaluations when changes are detected.
- Run Evaluations: Evaluation runs are triggered, and results are tracked in `evaluations_metadata.json`.
- Version Control: All UUIDs, file IDs, and hashes are versioned for reproducibility. Use DVC for large files.
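The change-tracking step boils down to hashing each serialized component and comparing it with the previously stored hash. A minimal sketch of that idea, assuming a flat JSON layout for `evaluation_hashes.json`:

```typescript
// Sketch of the change-tracking idea behind evaluation_hashes.json;
// the file layout and key names are assumptions.
import { createHash } from 'crypto';
import { readFileSync, writeFileSync } from 'fs';

const HASHES_PATH = './evals/evaluation_hashes.json';

function hashComponent(component: unknown): string {
  return createHash('sha256').update(JSON.stringify(component)).digest('hex');
}

// Returns true when a component (criteria, schema, testData, targetPrompt) has
// changed since the last sync, and records the new hash if it has.
export function hasChanged(evalName: string, key: string, component: unknown): boolean {
  const hashes = JSON.parse(readFileSync(HASHES_PATH, 'utf-8'));
  const next = hashComponent(component);
  const changed = hashes[evalName]?.[key] !== next;
  if (changed) {
    hashes[evalName] = { ...hashes[evalName], [key]: next };
    writeFileSync(HASHES_PATH, JSON.stringify(hashes, null, 2));
  }
  return changed;
}
```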
- Node.js 18.x or 20.x (required for file uploads)
- OpenAI API key
Add the following to your .env file:
OPENAI_API_KEY=your_openai_api_key

- Define Evaluations: Edit `evals/evaluations.json` to define your evaluations. Example:
  { "question_generation": { "criteria": [...], "schema": {...}, "testData": [...], "targetPrompt": {...} } }
- Run the Evaluation System: Execute the following command to sync and run evaluations:
  npx ts-node ./evals/index.ts
- Monitor Results: Check `evaluations_metadata.json` for evaluation results and logs.
- DVC is available as an option for large test data files.
- Commit `evaluations.json`, `evaluation_hashes.json`, and `evaluations_metadata.json` to version control.
- Clone and Install
git clone <repository-url>
cd <project-directory>
npm install
- Environment Setup
Create a .env file:
# API Keys
OPENAI_API_KEY=your_openai_api_key
SERP_API_KEY=your_serpapi_key
FIRECRAWL_API_KEY=your_firecrawl_key
# Database
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=myapp
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
# AWS Configuration
AWS_REGION=us-east-1
S3_BUCKET=your-bucket-name
# Queue Configuration
QUEUE_ENDPOINT=http://localhost:9324
QUEUE_REGION=elasticmq
QUEUE_ACCESS_KEY_ID=root
QUEUE_SECRET_ACCESS_KEY=root
# Webhook Configuration
FC_WEBHOOK=your_webhook_url
- Start Services
# Start core services
npm run dev
# Expose webhook endpoint (in a separate terminal)
npm run expose:webhook
- Run Evaluations
npx ts-node ./evals/index.ts
- Core Commands
  - `npm run dev` - Start all services
  - `npm run build` - Build TypeScript
  - `npm run start` - Start Serverless offline
- Service Management
  - `npm run services:up` - Start Docker services
  - `npm run services:down` - Stop services
  - `npm run services:clean` - Clean volumes
- Webhook Development
  - `npm run expose:webhook` - Expose the local webhook endpoint via ngrok
The platform uses webhooks for asynchronous processing of crawled content. To set up webhooks:
- Start your local server:
npm run dev
- In a separate terminal, expose your webhook endpoint:
npm run expose:webhook
- Use the generated ngrok URL as your webhook endpoint in the SerpApi dashboard
The webhook endpoint will receive crawl results and process them automatically.
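A minimal sketch of such a receiver, assuming an Express app and a hypothetical route and payload shape:

```typescript
// Sketch of a crawl-result webhook receiver; the route and payload shape are assumptions.
import express from 'express';

const app = express();
app.use(express.json({ limit: '5mb' })); // crawled pages can be large

app.post('/webhooks/crawl', (req, res) => {
  // Acknowledge quickly so the sender does not retry; do the heavy work asynchronously.
  res.status(202).send('accepted');

  const { requestId, content } = req.body ?? {};
  // Hand the content to the rest of the pipeline (e.g. enqueue it for analysis).
  console.log(`Received crawl result ${requestId}: ${content?.length ?? 0} chars`);
});

app.listen(3000);
```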
The platform includes a web-based monitoring dashboard at /metrics showing:
- Queue depths and processing rates
- Error rates and types
- Processing times
- Learning progress metrics
- Search and crawl statistics
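Queue depths, for example, can be read straight from the SQS-compatible endpoint; a sketch using the AWS SDK (how the number is wired into the dashboard is omitted):

```typescript
// Sketch: reading queue depth from the SQS-compatible endpoint (ElasticMQ locally).
import { SQSClient, GetQueueAttributesCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({
  region: process.env.QUEUE_REGION,
  endpoint: process.env.QUEUE_ENDPOINT,
});

export async function queueDepth(queueUrl: string): Promise<number> {
  const { Attributes } = await sqs.send(
    new GetQueueAttributesCommand({
      QueueUrl: queueUrl,
      AttributeNames: ['ApproximateNumberOfMessages'],
    })
  );
  return Number(Attributes?.ApproximateNumberOfMessages ?? 0);
}
```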
Before deploying:
- Configure proper AWS credentials
- Set up production database
- Configure API keys (OpenAI, SerpApi, FireCrawl)
- Review resource allocations
- Set up monitoring alerts
- Configure production webhook endpoints
- Set up evaluations for all system prompts
- Templatise all prompts sent to OpenAI
- Create schemas for each test data set
- Choose testing criteria for each eval
- Send a request to the Evals API to create the eval
- Save the returned UUID for the created eval
- Create the test data set as a JSON file according to the defined schema
- Upload files via the Files API
- Save the returned UUIDs for the uploaded files
- Use the file UUID and eval UUID to create an eval run via the Evals API
- Save the returned UUID for the eval run
- View results on OpenAI
- Use a metadata file to save all the UUIDs
- Version-control the metadata file
- Use DVC to store the test data set (do not commit test data to git)
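A rough sketch of the upload-and-run steps above, using the Files API and the Evals runs endpoint; the run payload fields are assumptions to verify against the OpenAI Evals API reference:

```typescript
// Sketch of the upload-and-run steps: upload JSONL test data, then start a run
// for an existing eval. The run payload fields are assumptions; check them
// against the OpenAI Evals API reference before relying on this.
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

export async function uploadAndRun(evalId: string, jsonlPath: string) {
  // 1. Upload the JSONL test data via the Files API and keep the returned ID.
  const file = await openai.files.create({
    file: fs.createReadStream(jsonlPath),
    purpose: 'evals',
  });

  // 2. Trigger a run for the existing eval, pointing it at the uploaded file.
  const response = await fetch(`https://api.openai.com/v1/evals/${evalId}/runs`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'socratic-question-generation',
      data_source: { type: 'jsonl', source: { type: 'file_id', id: file.id } },
    }),
  });
  const run = await response.json();

  // 3. Persist the returned IDs so the run is reproducible (evaluations_metadata.json).
  return { fileId: file.id, runId: run.id };
}
```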
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
ISC License




