Troubleshooting
AI Model Configuration Issues
API Key Configuration
✅ Verification Checklist:
- Ensure API keys are:
  - Entered in the correct input fields
  - Currently valid (not expired or revoked)
  - Created through official provider portals
- Confirm keys match the intended service:
  - OpenAI keys for OpenAI models
  - Anthropic keys for Claude models
  - Service-specific keys for other integrations
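As a hypothetical illustration of the "keys match the intended service" check above, a small shell helper can guess a key's provider from its prefix. The `sk-` and `sk-ant-` prefixes reflect current public key formats and may change; the helper name is an assumption, not part of the product.

```shell
# Hypothetical helper: guess which provider an API key belongs to by prefix.
# Anthropic keys currently start with "sk-ant-", OpenAI keys with "sk-".
key_provider() {
  case "$1" in
    sk-ant-*) echo "anthropic" ;;   # check the longer prefix first
    sk-*)     echo "openai" ;;
    *)        echo "unknown" ;;
  esac
}

key_provider "sk-ant-example"   # prints "anthropic"
```

Note the ordering: `sk-ant-*` must be matched before `sk-*`, since every Anthropic-style key also matches the shorter OpenAI-style pattern.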
🔄 Refresh Requirement:
Some model endpoints require a full page reload (Ctrl+F5 on Windows/Linux, Cmd+Shift+R on macOS) after key entry to initialize connections.
Model Availability
❗ Important Note:
Nexalexica operates as an AI interface platform: it does not host proprietary models. All model capabilities depend on valid third-party API credentials.
Local Model Configuration
LM Studio Integration
For local models via LM Studio:
- Ensure LM Studio is running with:
  - An active model loaded
  - The local server enabled (default endpoint: `http://localhost:1234`)
- Consult our LM Studio API Documentation
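To confirm the server endpoint above is actually up, a quick reachability check can be run from a terminal. This is a sketch assuming LM Studio's default OpenAI-compatible `/v1/models` listing on port 1234; the helper name is illustrative.

```shell
# Hypothetical reachability check for an OpenAI-compatible local server.
# LM Studio's local server exposes /v1/models on port 1234 by default.
check_endpoint() {
  if curl -sf --max-time 2 "$1" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

check_endpoint "http://localhost:1234/v1/models"
```

If this prints `unreachable`, start (or restart) LM Studio's local server before retrying from the app.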
Ollama via Docker (Recommended)
For Ollama deployments:

- Start the container:

```shell
# Recommended Docker setup
docker run -d -p 11434:11434 --name ollama ollama/ollama
```

- Verify the container is running:

```shell
docker ps --filter "name=ollama"
```

- Pull the required models inside the container:

```shell
docker exec -it ollama ollama pull llama2
```

- Review our Ollama API Guide
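Once the container is up, Ollama's REST API can confirm which models were pulled. This is a sketch using Ollama's `/api/tags` model-listing endpoint on the default port 11434; the helper name and the empty-list fallback are assumptions for illustration.

```shell
# Hypothetical smoke test: list locally pulled models via Ollama's REST API.
# /api/tags returns the models available on this server; the base URL is a
# parameter so the check also works against non-default hosts or ports.
ollama_models() {
  curl -sf --max-time 2 "${1:-http://localhost:11434}/api/tags" \
    || echo '{"models":[]}'    # fall back to an empty list when unreachable
}

ollama_models
```

A healthy server returns JSON whose `models` array includes any model you pulled (e.g. `llama2`); an empty array after a successful pull usually means you are querying the wrong port or container.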
Common Error Resolution
| Symptom | Possible Cause | Resolution |
|---|---|---|
| "Invalid API Key" | Expired or revoked key; incorrect key format | Regenerate the key; verify the key prefix |
| "Endpoint Unavailable" | Local service not running; firewall blocking the port | Check service status; verify port access |
| Slow Responses | Model not loaded; hardware limitations | Confirm model status; check system resources |
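For "Endpoint Unavailable" errors, a single loop can probe both local servers at once. This sketch assumes the default endpoints used earlier in this page (LM Studio on port 1234, Ollama on port 11434); adjust the URLs if you changed either port.

```shell
# Probe the default endpoints of both local model servers.
# 1234 = LM Studio default port, 11434 = Ollama default port (assumptions
# based on the setup steps above; edit the URLs for custom configurations).
for url in http://localhost:1234/v1/models http://localhost:11434/api/tags; do
  if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
    echo "$url: responding"
  else
    echo "$url: no response"
  fi
done
```

A `no response` line narrows the fault to that one service (not started, crashed, or blocked by a firewall) rather than to your API configuration.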