An AI-First Educational Infrastructure by Oakdale Systems

Rubric-aligned AI assessment for language schools

Turn writing, speaking, listening, and reading submissions into standardized scores, teacher-reviewable feedback, and audit-ready logs — instantly and at scale.

Active pilot · 12k+ exams processed · Built on Google Cloud
LIVE_INFERENCE_CONSOLE
Engine Online

The sandbox above runs on synthetic demo data. The dashboard logs below reflect anonymized production traffic from our active pilot.

1 Active Pilot
12k+ Exams Processed
860 Enrolled Students
18 min → 3 min Scoring Turnaround
-78% Moderation Workload
SG/KR/EU Regional Data Options
console.skyonomy.io/live-ops
Pilot Status: Active
Willing Academy, Sindorim, Seoul
Traction
Enrolled Students: 800
Exams Processed: 12k+
Pilot Outcome

"SKYONOMY reduced our moderation review time from 45 minutes per class down to 10 minutes, allowing our teachers to focus on instruction rather than grading."

— Head of Academics, Willing Academy

Live Pilot Inference Log

Pilot metrics shown with customer approval. No student names or identifiers are displayed in logs.

No live traffic at this moment.

Google Cloud services used

Cloud Run

Auto-scales stateless inference APIs for bursty exam-time workloads

Vertex AI

Runs scoring and feedback generation

BigQuery

Stores Cloud Audit Logs exported from Cloud Logging for analytics and usage reporting

Cloud Storage

Region-bound document storage

Supporting controls: IAM least privilege, CMEK where required, Secret Manager, Cloud Logging, and proactive budget alerts.

Why credits matter

Credits let us expand from 1 pilot to multiple institutions by funding AI inference, serverless compute during peak exam cycles, audit-log analytics, and region-specific deployments for schools with data residency requirements.

How credits will be used (6-12 months)

  • Expand from 1 pilot to 5+ partner institutions
  • Process 50,000+ submissions per month
  • Fund inference tokens, scoring logs, and analytics
  • Deploy regional environments (Singapore, Korea, EU)

Security and Data Handling

Data Stored: Pseudonymized submissions (direct identifiers replaced with institution-scoped IDs), rubric scores, feedback text, and audio files.
Excluded from Logs: Structured logs only; no raw submissions appear in logs. Model providers may retain prompts for abuse monitoring for up to 90 days (opt-out available).
Retention: Default 90 days, configurable per institution.
Deletion Process: Automated via Cloud Storage lifecycle rules and BigQuery table expiration; some platform recovery windows (soft delete, time travel) apply.
Regions Supported: APAC deployment in Seoul (asia-northeast3) and Singapore (asia-southeast1). Vietnam customers are served from Singapore unless in-country processing becomes available.
Usage Control (Minors): Student-facing access is restricted per applicable model terms; deployments for minors are teacher-mediated unless a compliant alternative is used.
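As an illustration of the institution-scoped pseudonymization described above, one common approach is a keyed hash that maps a student identifier to a stable ID that cannot be linked across institutions. This is a minimal sketch under that assumption, not SKYONOMY's production implementation; the function name, key handling, and "stu-" prefix are all illustrative.

```python
import hashlib
import hmac

def pseudonymize(student_id: str, institution_key: bytes) -> str:
    """Derive a stable, institution-scoped pseudonym from a student ID.

    The same student ID yields the same pseudonym within one institution
    (so scores stay linkable across submissions), but different pseudonyms
    across institutions, because each institution holds its own key.
    """
    digest = hmac.new(institution_key, student_id.encode("utf-8"), hashlib.sha256)
    return "stu-" + digest.hexdigest()[:16]

# Two institutions produce unlinkable IDs for the same student:
key_a = b"institution-a-secret-key"
key_b = b"institution-b-secret-key"
pid_a = pseudonymize("jane.doe@example.com", key_a)
pid_b = pseudonymize("jane.doe@example.com", key_b)
assert pid_a != pid_b
assert pid_a == pseudonymize("jane.doe@example.com", key_a)  # stable
```

In practice the per-institution key would live in Secret Manager (listed under supporting controls above) rather than in source code.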

Cloud Credits and Usage Transparency

We use cloud credits as prepaid budget to cover usage of specific Google Cloud services that power scoring and feedback. Credits are applied to our Google Cloud billing account and are consumed as these services are used (they are not cash and do not change list prices).

What credits cover

  • AI scoring and feedback inference on Vertex AI (metered by input and output tokens).
  • Serverless request processing on Cloud Run (metered by vCPU-seconds, memory-seconds, and request count).
  • Storage of submitted artifacts in Cloud Storage (metered by GB-month and operations).
  • Logging and analytics (Cloud Logging ingestion/retention, and analytics queries).
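To make the metering concrete, the sketch below estimates one month's credit burn from the four meters listed above. All unit rates are hypothetical placeholders, not Google Cloud list prices, and the function is illustrative only.

```python
# HYPOTHETICAL unit rates for illustration; substitute current Google
# Cloud pricing before using real numbers.
RATE_PER_1K_TOKENS = 0.002       # Vertex AI inference (input + output)
RATE_PER_VCPU_SECOND = 0.000024  # Cloud Run compute
RATE_PER_GB_MONTH = 0.020        # Cloud Storage
RATE_PER_GB_LOGGED = 0.50        # Cloud Logging ingestion

def estimate_monthly_cost(submissions: int,
                          avg_tokens: int,
                          avg_vcpu_seconds: float,
                          storage_gb: float,
                          log_gb: float) -> float:
    """Sum the four metered components for one month of traffic."""
    inference = submissions * avg_tokens / 1000 * RATE_PER_1K_TOKENS
    compute = submissions * avg_vcpu_seconds * RATE_PER_VCPU_SECOND
    storage = storage_gb * RATE_PER_GB_MONTH
    logging = log_gb * RATE_PER_GB_LOGGED
    return round(inference + compute + storage + logging, 2)

# At the 6-12 month target of 50,000 submissions/month, assuming
# ~3k tokens and ~2 vCPU-seconds per submission:
print(estimate_monthly_cost(50_000, 3_000, 2.0, 500, 40))  # → 332.4
```

A model like this is also the basis for the monthly usage reports described under Allocation & Metering.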

Allocation & Metering

Each institution receives an agreed credit pool of [X credits] for [period]. Credit pools are tracked per institution environment and reported monthly.

What counts as usage: We count usage based on underlying cloud meters (tokens, compute seconds, storage). We provide a monthly usage report detailing submissions processed, average tokens per submission, peak traffic, and retention footprint.

Leadership Team

Our team combines technical expertise in scalable AI infrastructure with deep operational experience in Asian language-education markets. We know what school administrators require to deploy AI safely in high-stakes testing environments, and we've built our stack to deliver it.

Christopher West

Co-Founder & CTO

Educator and full-stack developer building AI-powered assessment systems for schools. Lead architect for the platform's multi-step forensic evaluation pipeline.

LinkedIn Profile
David Prendergast

Co-Founder & Head of Strategy

Experience in Southeast Asian education markets and school partnerships. Focused on curriculum alignment and institutional expansion in Korea and Vietnam.

LinkedIn Profile