Edit the description of the AI Evaluations category:
This subcategory evaluates AI models (e.g., Claude, Grok, GPT series) for their fit with CORTEX, a personalized AI that internalizes our codebase, methodology, and thinking. We assess capabilities such as context handling, exportability for LOGOS imports, alignment with NEXUS principles (e.g., event-modeling support), limitations (e.g., refusals, token-window size), and integration potential.
Use wiki topics for overviews and comparisons, and replies for discoveries and tests (treat them as events that project into wiki updates). Cross-link to REQUIREMENTS for pipeline refinements, or to Insights for broader learnings.
Linked from: