Turn writing, speaking, listening, and reading submissions into standardized scores, teacher-reviewable feedback, and audit-ready logs — instantly and at scale.
The sandbox above runs on synthetic demo data. The dashboard logs below reflect anonymized production traffic from our active pilot.
"SKYONOMY reduced our moderation review time from 45 minutes per class down to 10 minutes, allowing our teachers to focus on instruction rather than grading."
— Head of Academics, Willing Academy
Pilot metrics shown with customer approval. No student names or identifiers are displayed in logs.
Auto-scales stateless inference APIs for bursty exam-time workloads
Runs scoring and feedback generation
Cloud Audit Logs (Cloud Logging) → exported to BigQuery for analytics
Region-bound document storage
Supporting controls: IAM least privilege, CMEK where required, Secret Manager, Cloud Logging, and proactive budget alerts.
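Because the logging path above exports structured entries to BigQuery while keeping raw submissions out of the log stream, it may help to sketch what such an entry could look like. This is a minimal illustration, not the production schema; the field names and `build_audit_record` helper are assumptions for the example.

```python
import json
from datetime import datetime, timezone

def build_audit_record(institution_id, submission_id, rubric_scores, latency_ms):
    """Assemble a structured audit-log entry (hypothetical schema).

    Only institution-scoped IDs and numeric metadata are included;
    raw submission text and feedback never enter the log stream.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "institution_id": institution_id,   # institution-scoped ID, not a student identifier
        "submission_id": submission_id,
        "rubric_scores": rubric_scores,     # numeric scores only
        "latency_ms": latency_ms,
    }

# Example entry as it might appear before export to BigQuery
record = build_audit_record("inst-042", "sub-981f", {"grammar": 4, "coherence": 5}, 830)
serialized = json.dumps(record)
```

Keeping entries to IDs and numbers means the BigQuery analytics layer never needs access to student writing, which simplifies both residency and deletion guarantees.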
Credits let us expand from one pilot to multiple institutions by funding AI inference, serverless compute during peak exam cycles, audit-log analytics, and region-specific deployments for schools with data residency requirements.
| Policy Area | Implementation |
|---|---|
| Data Stored | Pseudonymized submissions (direct identifiers replaced with institution-scoped IDs), rubric scores, feedback text, audio files |
| Excluded from Logs | Raw submissions and student text; logs contain structured metadata only. Model providers may retain prompts for abuse monitoring for up to 90 days (opt-out available). |
| Retention Default | 90 days (configurable per institution) |
| Deletion Process | Automated via Storage lifecycle and BigQuery expiration; some platform recovery windows (soft delete/time travel) apply. |
| Regions Supported | APAC deployment: Seoul (asia-northeast3) and Singapore (asia-southeast1). Vietnam customers are served from Singapore unless in-country processing becomes available. |
| Usage Control (Minors) | Student-facing access is restricted per applicable model terms; deployments for minors are teacher-mediated unless a compliant alternative is used. |
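The retention and deletion defaults in the table above map onto two platform mechanisms: a Cloud Storage lifecycle rule and a BigQuery table expiration. A minimal sketch of how those defaults could be expressed (the constant names are illustrative, not our deployment code):

```python
RETENTION_DAYS = 90  # per-institution default; configurable

# Cloud Storage lifecycle rule: delete objects older than the retention window
gcs_lifecycle_rule = {
    "action": {"type": "Delete"},
    "condition": {"age": RETENTION_DAYS},  # age is measured in days
}

# BigQuery expresses table expiration in milliseconds
bq_table_expiration_ms = RETENTION_DAYS * 24 * 60 * 60 * 1000
```

Note that platform recovery windows (Cloud Storage soft delete, BigQuery time travel) can hold data briefly past these thresholds, which is why the deletion row above calls them out.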
We use cloud credits as prepaid budget to cover usage of specific Google Cloud services that power scoring and feedback. Credits are applied to our Google Cloud billing account and are consumed as these services are used (they are not cash and do not change list prices).
Each institution receives an agreed credit pool of [X credits] for [period]. Credit pools are tracked per institution environment and reported monthly.
What counts as usage: We count usage based on underlying cloud meters (tokens, compute seconds, storage). We provide a monthly usage report detailing submissions processed, average tokens per submission, peak traffic, and retention footprint.
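To make the monthly usage report concrete, here is a small sketch of how meter records could be rolled up into the figures we report (submissions processed, average tokens per submission, peak traffic). The record layout and `monthly_report` function are hypothetical, standing in for the actual billing-export pipeline.

```python
from collections import defaultdict

# Hypothetical meter records: (institution, tokens, compute_seconds, hour_bucket)
meters = [
    ("inst-A", 1200, 3.1, "2025-06-10T09"),
    ("inst-A", 900,  2.4, "2025-06-10T09"),
    ("inst-A", 1500, 3.8, "2025-06-10T10"),
]

def monthly_report(records):
    """Aggregate raw meter records into the monthly report figures."""
    per_hour = defaultdict(int)
    total_tokens = 0
    total_seconds = 0.0
    for _, tokens, secs, hour in records:
        per_hour[hour] += 1
        total_tokens += tokens
        total_seconds += secs
    count = len(records)
    return {
        "submissions_processed": count,
        "avg_tokens_per_submission": total_tokens / count,
        "peak_hourly_submissions": max(per_hour.values()),
        "compute_seconds": total_seconds,
    }

report = monthly_report(meters)
```

Because the rollup is driven by the same underlying cloud meters that consume credits, the report and the credit-pool drawdown stay reconcilable line by line.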
Our team combines technical expertise in scalable AI infrastructure with deep operational experience in Asian language markets. We know what school administrators require to deploy AI safely in high-stakes testing environments, and we've built the stack to deliver it.
Co-Founder & CTO
Educator and full-stack developer building AI-powered assessment systems for schools. Lead architect for the platform's multi-step forensic evaluation pipeline.
LinkedIn Profile
Co-Founder & Head of Strategy
Experience in Southeast Asian education markets and school partnerships. Focused on curriculum alignment and institutional expansion in Korea and Vietnam.
LinkedIn Profile