
Changelog

New features, improvements, and fixes across AINative Studio, ZeroDB, and the AI Kit ecosystem.

v2.9.1 · Reliability

Platform Stability & Account Management Improvements

Improved account provisioning reliability, API gateway stability, and expanded sandbox limits for enterprise users.

  • Improved new account onboarding: subscriptions and roles now provisioned automatically on all signup flows
  • API gateway routing improvements for faster, more reliable endpoint resolution
  • Sandbox environment limits expanded for Pro and Enterprise plans
  • Enterprise dashboard now displays accurate storage usage metrics
  • Content Security Policy updates for improved third-party integration support
v2.9.0 · Platform

Dashboard Enhancements & Developer Tools

Comprehensive dashboard improvements across Sessions, Storage, AI Usage, Earnings, and MCP Hosting pages with real-time data integration.

  • Sessions page: view and manage AI conversation sessions with memory context and statistics
  • Storage page: streamlined project selection and improved file upload experience
  • AI Usage dashboard: enhanced charts for model usage, daily trends, and cost breakdown
  • Developer Earnings: full earnings overview, transaction history, and payout scheduling
  • MCP Hosting: improved server deployment flow and instance management
  • ZeroMemory API: new endpoints for remember, recall, reflect, and knowledge graph operations
  • Documentation: updated API Reference links and Getting Started guide
  • Events page: added conversion tracking for marketing campaigns
v2.8.5 · Knowledge Graph

Context Graph: GraphRAG & Knowledge Graph for ZeroDB

New knowledge graph capabilities for ZeroDB with ontology-aware entity resolution, edge versioning, and graph templates for common use cases.

  • Context Graph API: create entities, edges, and traverse relationship chains
  • GraphRAG: hybrid vector + graph search for more accurate retrieval
  • Knowledge Graph visualization in the ZeroDB Console
  • Graph templates for common patterns: user profiles, project dependencies, agent memory
  • Ontology support with edge versioning for structured knowledge representation
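A rough sketch of what entity and versioned-edge payloads for the Context Graph API might look like. The field names, helper functions, and entity IDs below are illustrative assumptions, not the official SDK or request schema:

```python
# Hypothetical payload shapes for the Context Graph API. Field names and
# IDs are assumptions for illustration; consult the API Reference for the
# actual request schema.
def make_entity(entity_type: str, name: str, properties: dict) -> dict:
    """Build an entity payload (type, name, and free-form properties)."""
    return {"type": entity_type, "name": name, "properties": properties}

def make_edge(source_id: str, target_id: str, relation: str, version: int = 1) -> dict:
    """Build a versioned edge payload linking two entities."""
    return {"source": source_id, "target": target_id,
            "relation": relation, "version": version}

user = make_entity("user", "alice", {"team": "platform"})
project = make_entity("project", "zerodb-console", {"status": "active"})
edge = make_edge("ent_user_alice", "ent_proj_console", "OWNS")
```

Edge versioning means a relation like `OWNS` can be superseded later without losing the earlier version of the fact.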
v2.8.2 · ZeroMemory

ZeroMemory Recall Improvements & Benchmarks

Enhanced vector similarity search in ZeroMemory with improved scoring accuracy and richer recall results for AI agents.

  • Vector similarity search integrated into ZeroMemory remember and recall flows
  • LongMemEval benchmark suite for measuring long-term memory retrieval quality
  • Improved memory scoring accuracy for temporal decay and recency weighting
  • Recall results now include full metadata for richer agent context
  • ZeroDB namespace management improvements and auto-provisioning
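For context, vector similarity search of this kind is typically built on cosine similarity between embeddings. The standard formula is sketched below; ZeroMemory's actual scoring pipeline (which also layers in temporal decay and recency weighting) is not shown here:

```python
import math

# Standard cosine similarity between two embedding vectors -- the common
# basis for vector recall. This illustrates the formula only, not
# ZeroMemory's internal implementation.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
```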
v2.8.0 · Enterprise

Enterprise Security Suite & Audit Improvements

Major enterprise-grade release with expanded security controls, audit logging enhancements, and compliance reporting.

  • AX audit logging now captures full request context, including IP, user agent, and session metadata
  • New compliance dashboard for SOC 2 and GDPR audit trails in the admin panel
  • Role-based access control (RBAC) expanded with fine-grained scopes for API key management
  • Organization-level SSO improvements: SAML 2.0 and OIDC providers now support JIT provisioning
  • Audit export API added — export structured logs in CSV or JSON for external SIEM systems
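As a minimal sketch of what consuming the CSV export might involve on the SIEM side, the helper below flattens audit entries into CSV rows. The field names are assumptions based on the captured request context described above (IP, user agent, session metadata), not the documented export schema:

```python
import csv
import io

# Hedged sketch: flatten audit-log entries into CSV for SIEM ingestion.
# Field names are assumed from the request context the changelog mentions.
def audit_entries_to_csv(entries: list[dict]) -> str:
    fields = ["timestamp", "actor", "action", "ip", "user_agent", "session_id"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()
```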
v2.7.5 · ZeroMemory

ZeroMemory v2: Cognitive Memory for AI Agents

ZeroMemory now supports semantic recall, temporal decay scoring, and graph-based memory relationships — giving agents persistent, intelligent memory across sessions.

  • New `/memory/v2/recall` endpoint with hybrid vector + keyword retrieval
  • Temporal decay scoring: memories naturally deprioritize over time unless reinforced
  • Memory graph API: relate memories to entities and traverse relationship chains
  • Reflection endpoint: agents can summarize and compress past memories into condensed knowledge
  • Profile endpoint for per-user and per-agent preference storage
  • Improved cross-collection memory lookups for multi-agent environments
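Temporal decay of the kind described above is commonly modeled as an exponential half-life applied to the similarity score, with reinforcement offsetting the decay. The half-life, reinforcement boost, and combination formula below are illustrative assumptions, not ZeroMemory's actual scoring function:

```python
# Illustrative temporal decay: memories deprioritize over time unless
# reinforced. All constants and the formula itself are assumptions.
def decayed_score(similarity: float, age_days: float,
                  half_life_days: float = 30.0, reinforcements: int = 0) -> float:
    decay = 0.5 ** (age_days / half_life_days)   # exponential half-life decay
    boost = 1.0 + 0.1 * reinforcements           # reinforcement slows effective decay
    return similarity * min(1.0, decay * boost)

fresh = decayed_score(0.9, age_days=0)    # no decay yet
stale = decayed_score(0.9, age_days=90)   # three half-lives old
```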
v2.7.0 · New

Webhook Event Dispatcher

Platform-wide webhook delivery system for real-time event notifications on agent runs, memory updates, and billing events.

  • Webhook endpoints configurable per organization from the developer settings dashboard
  • Delivery retries with exponential backoff (up to 5 attempts)
  • Event types: agent.run.completed, memory.stored, billing.credit.low, api.key.created
  • HMAC-SHA256 payload signing for secure delivery verification
  • Webhook delivery logs available in the developer dashboard for the past 30 days
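Verifying an HMAC-SHA256 payload signature on the receiving end generally looks like the sketch below. The header name, hex encoding, and secret format are assumptions; check the developer dashboard documentation for the exact signing scheme:

```python
import hashlib
import hmac

# Constant-time verification of an HMAC-SHA256 webhook signature.
# Encoding details (hex digest, raw body as the signed message) are assumed.
def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_example"
body = b'{"type": "agent.run.completed"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
assert verify_webhook(body, sig, secret)
```

Using `hmac.compare_digest` instead of `==` avoids leaking signature information through timing differences.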
v2.6.8 · SEO

Discovery & SEO Alignment

Improved discoverability across the platform and ecosystem documentation.

  • Updated llms.txt and agents.txt with ZeroMemory capabilities for AI crawler indexing
  • Added security.txt following RFC 9116 for responsible disclosure
  • Sitemap regeneration with priority weighting for product pages
  • Structured data (JSON-LD) expanded across product and documentation pages
v2.6.5 · Developer Program

Echo Developer Program: Usage-Based Revenue

Developers can now monetize their apps built on AINative APIs through the Echo Developer Program.

  • Set markup (0–40%) on API usage by your end users
  • AINative takes a flat 5% platform fee on all developer earnings
  • Stripe Connect integration for weekly automated payouts (minimum $10)
  • Earnings dashboard with per-app and per-user usage breakdowns
  • New React SDK hooks: useChat and useCredits for easy integration
  • Next.js SDK with server client and auth middleware for SSR apps
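A worked example of the revenue split described above: the developer's gross earnings are the markup applied to end-user usage, and the flat 5% platform fee is deducted from those earnings. Function and variable names are illustrative; the markup bounds and fee come from the changelog:

```python
# Echo revenue-split arithmetic: 0-40% developer markup on usage,
# minus AINative's flat 5% fee on developer earnings.
def developer_earnings(base_usage_cost: float, markup_pct: float) -> float:
    if not 0.0 <= markup_pct <= 40.0:
        raise ValueError("markup must be between 0% and 40%")
    gross = base_usage_cost * (markup_pct / 100.0)  # developer's markup revenue
    return gross * 0.95                              # after the 5% platform fee

# $200 of base usage at a 20% markup -> $40 gross, $38 after the platform fee
print(developer_earnings(200.0, 20.0))
```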
v2.6.0 · ZeroDB

ZeroDB File Storage (S3-Compatible API)

ZeroDB now includes an S3-compatible object storage layer for agent-accessible file management.

  • PUT, GET, DELETE file operations via REST API
  • Presigned URL generation for direct browser uploads
  • Per-user and per-organization storage quotas
  • Files indexed for semantic search alongside vector embeddings
  • MCP tool integration: agents can read and write files via the ZeroDB MCP server
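A minimal sketch of the kind of quota check the storage layer enforces before accepting an upload. The function, limit values, and byte-level accounting are assumptions, not the service's documented behavior:

```python
# Hedged sketch of a per-user storage quota check before an upload.
def can_upload(current_usage_bytes: int, file_size_bytes: int,
               quota_bytes: int) -> bool:
    """Return True if the upload fits within the remaining quota."""
    return current_usage_bytes + file_size_bytes <= quota_bytes

GIB = 1024 ** 3
assert can_upload(9 * GIB, GIB // 2, 10 * GIB)       # fits in remaining 1 GiB
assert not can_upload(9 * GIB, 2 * GIB, 10 * GIB)    # would exceed the quota
```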
v2.5.3 · Performance

Embedding Service: 16ms Inference

The platform's text embedding service has been re-architected around TEI (Text Embeddings Inference), drastically reducing latency.

  • Average embedding inference latency reduced from ~120ms to 16ms
  • Batch embedding endpoint now supports up to 512 texts per request
  • New model: nomic-embed-text-v2-moe for higher-quality retrieval
  • Automatic model warm-up on cold starts to eliminate first-request latency spikes
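The 512-text limit per batch request comes from the changelog; splitting a larger corpus to stay under it might look like this illustrative helper:

```python
# Split a corpus into maximally sized batches for the batch embedding
# endpoint. The 512 limit is from the changelog; the helper is illustrative.
def batch_texts(texts: list[str], max_batch: int = 512) -> list[list[str]]:
    return [texts[i:i + max_batch] for i in range(0, len(texts), max_batch)]

batches = batch_texts([f"doc {i}" for i in range(1200)])
print([len(b) for b in batches])  # -> [512, 512, 176]
```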
v2.5.0 · Security

API Key Scoping & Expiry Controls

API keys can now be scoped to specific permissions and configured with automatic expiry dates.

  • Scoped keys: limit keys to specific endpoints (e.g., memory:read, embeddings:write)
  • Key expiry: set keys to expire after a defined number of days or on a specific date
  • Last-used tracking: see when each key was last authenticated
  • Key rotation endpoint: rotate a key without downtime by generating a successor key
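Conceptually, a scoped key check is a membership test against the key's granted scopes. The scope strings follow the `resource:action` examples above; exact-match semantics (no wildcards) are an assumption about how the gateway evaluates them:

```python
# Sketch of the scope check a gateway might apply to a scoped API key.
# Exact-match semantics are assumed; real scope grammars often add wildcards.
def key_allows(key_scopes: set[str], required_scope: str) -> bool:
    return required_scope in key_scopes

scopes = {"memory:read", "embeddings:write"}
assert key_allows(scopes, "memory:read")
assert not key_allows(scopes, "memory:write")
```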

Older entries are available in the developer documentation.