

Codex CLI is OpenAI’s official command-line AI coding agent. It handles code generation, refactoring, debugging, file editing, and system command execution directly from your terminal using natural language. Compared with other AI coding tools, Codex CLI stands out for full sandbox isolation and fine-grained permission control.

Installation

1. Check Node.js Version

node -v

Requires Node.js 22+.

2. Install Codex CLI

npm install -g @openai/codex

Configure ePhone AI

Codex CLI uses model_providers to define custom API providers, and the top-level model_provider key to specify which provider to use by default.
1. Set the API Key Environment Variable

# zsh (macOS default)
echo 'export OPENAI_API_KEY="sk-your-api-key"' >> ~/.zshrc
source ~/.zshrc

# bash
echo 'export OPENAI_API_KEY="sk-your-api-key"' >> ~/.bash_profile
source ~/.bash_profile

# PowerShell (Windows)
[Environment]::SetEnvironmentVariable("OPENAI_API_KEY", "sk-your-api-key", "User")
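After exporting the variable, it helps to confirm it is actually visible to new processes. A minimal check (Python used here purely for illustration; the `sk-` prefix check is an assumption about this provider's key format, not something Codex enforces):

```python
import os

def api_key_status(env: dict) -> str:
    """Classify the OPENAI_API_KEY entry in an environment mapping."""
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        return "missing"
    if not key.startswith("sk-"):
        return "unexpected format"
    return "ok"

# Check the current process environment
print(api_key_status(os.environ))
```

If this prints "missing" in a fresh terminal, the export line landed in the wrong shell startup file.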
2. Create the Configuration File

mkdir -p ~/.codex
Add the following to ~/.codex/config.toml:
# ~/.codex/config.toml

model = "gpt-4o"
model_provider = "ephone"       # Required: set the default provider
disable_response_storage = true

[model_providers.ephone]
name = "ePhone AI"
base_url = "https://api.ephone.ai/v1"
env_key = "OPENAI_API_KEY"      # Name of the env var (not the key itself)
wire_api = "responses"          # Responses API protocol (only supported value)
Three critical settings — all are required:
  • model_provider = "ephone" — without this, Codex routes known model names like gpt-4o directly to OpenAI’s servers
  • wire_api = "responses" — the only protocol supported by Codex, using the Responses API
  • env_key = "OPENAI_API_KEY" — this is the name of the environment variable, not the API key value itself

Getting Started & Approval Modes

codex
On first launch, choose an approval mode that controls how Codex handles permissions:
| Mode | Config Value | Description |
|---|---|---|
| Suggest | untrusted | Default. Codex can only read files; all writes and commands require your confirmation |
| Auto Execute | on-request | Safe sandbox operations run automatically; only privilege escalation requires confirmation |
| Full Auto | never | No confirmation needed. Codex acts autonomously; best for trusted dev environments |
Set the default approval mode in config.toml:
approval_policy = "on-request"
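The table above boils down to a simple decision rule. The sketch below encodes it for clarity; the action labels ("read", "sandboxed-write", "escalated") are hypothetical names for illustration, not Codex's internal categories:

```python
def needs_confirmation(policy: str, action: str) -> bool:
    """Approximate the approval-mode table: does this action
    require user confirmation under the given policy?
    action: "read", "sandboxed-write", or "escalated" (illustrative labels).
    """
    if policy == "never":
        return False              # Full Auto: nothing asks
    if policy == "untrusted":
        return action != "read"   # Suggest: only reads are free
    if policy == "on-request":
        return action == "escalated"  # Auto Execute: only escalation asks
    raise ValueError(f"unknown policy: {policy}")
```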

Common Commands

CLI Launch Options

| Command | Description |
|---|---|
| codex | Start an interactive session |
| codex "task description" | Run a one-off task directly |
| codex --model gpt-4o "task" | Run a task with a specific model |
| codex --image screenshot.png "fix this bug" | Attach an image as context |
| codex --search "task description" | Enable web search for the task |
| codex --profile fast "quick task" | Use a preset configuration profile |

In-Session Commands

Use these slash commands during an interactive session:
| Command | Description |
|---|---|
| /status | View current model, token usage, and config info |
| /model | Switch the active model |
| /approvals | Adjust the approval policy for the current session |
| /compact | Compress conversation history to free context window |
| /clear | Clear the current conversation history |
| /init | Create an AGENTS.md template in the current directory |
| /feedback | Submit feedback |
| Use Case | Recommended Model | Notes |
|---|---|---|
| Everyday coding | gpt-4o | Good balance of speed and capability |
| Complex refactoring | o4-mini | Strong reasoning for deep analysis tasks |
| Quick lightweight tasks | gpt-4o-mini | Fast response, lower cost |
Use the /model command to switch models mid-session without restarting.

Project Configuration: AGENTS.md

Codex automatically reads AGENTS.md files on startup as project context instructions, similar to Claude Code’s CLAUDE.md. Through layered configuration, you can set different rules for different projects and directories.

Discovery Order

  1. Global: ~/.codex/AGENTS.md — shared defaults for all projects
  2. Project root: <project-root>/AGENTS.md — project-level rules
  3. Subdirectories: <subdir>/AGENTS.md — subdirectory overrides
Each level also supports AGENTS.override.md, which takes priority over AGENTS.md in the same directory.
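The discovery order can be expressed as a short resolution routine. This is an illustrative sketch of the behavior described above, not Codex's actual implementation; in particular, it assumes an override file fully replaces AGENTS.md in its directory:

```python
from pathlib import PurePosixPath

def agents_files(home: str, project_root: str, cwd: str,
                 existing: set[str]) -> list[str]:
    """Return AGENTS files in read order: the global file first, then one
    per directory from the project root down to cwd. AGENTS.override.md
    wins over AGENTS.md in the same directory (assumed replacement)."""
    found = []

    def pick(directory: PurePosixPath) -> None:
        for name in ("AGENTS.override.md", "AGENTS.md"):
            path = str(directory / name)
            if path in existing:
                found.append(path)
                return  # override shadows the plain file

    pick(PurePosixPath(home) / ".codex")           # 1. global
    root = PurePosixPath(project_root)
    pick(root)                                     # 2. project root
    for part in PurePosixPath(cwd).relative_to(root).parts:
        root = root / part
        pick(root)                                 # 3. each subdirectory
    return found
```

Later files in the returned list apply on top of earlier ones, which is what makes subdirectory overrides work.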

Global Configuration Example

# ~/.codex/AGENTS.md

## Working Agreements
- Prefer pnpm when installing dependencies
- Run relevant tests after modifying code
- Follow Conventional Commits for commit messages
- Ask for confirmation before adding production dependencies

Project Configuration Example

# AGENTS.md

## Project Info
This is a full-stack project using Go + React.

## Development Standards
- Backend uses Go 1.22+, follow standard project layout
- Frontend uses TypeScript, no any types allowed
- All user-facing strings must use i18n
- Run make build after backend changes
- Run npm run lint after frontend changes

## Testing Requirements
- New APIs must include unit tests
- Test command: go test ./... -v

Subdirectory Override Example

# services/payments/AGENTS.override.md

## Payment Service Rules
- Use make test-payments instead of the default test command
- All monetary calculations must use decimal types
- Never operate directly on the production database

Configuration Profiles

Profiles let you quickly switch between different configuration sets — ideal for different projects or scenarios:
# ~/.codex/config.toml

# Use the fast profile by default
profile = "fast"

[profiles.fast]
model_provider = "ephone"
model = "gpt-4o-mini"
approval_policy = "never"

[profiles.careful]
model_provider = "ephone"
model = "o4-mini"
approval_policy = "on-request"
model_reasoning_effort = "high"
Switch profiles at launch with --profile:
codex --profile careful "refactor the error handling in this module"
Each profile needs model_provider = "ephone" — otherwise switching profiles may fall back to the default OpenAI provider.
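The warning above suggests that a key omitted from a profile falls back to the built-in default rather than inheriting the top-level value. A sketch of that assumed merge behavior (the `"openai"` built-in default and the merge semantics are both assumptions drawn from the warning, not documented internals):

```python
BUILTIN_DEFAULTS = {"model_provider": "openai"}  # assumed built-in fallback

def effective_profile(config: dict, name: str) -> dict:
    """Resolve a named profile: keys the profile omits fall back to the
    built-in defaults, NOT the top level (assumption per the note above).
    This is why every profile should repeat model_provider = "ephone"."""
    profile = config.get("profiles", {}).get(name, {})
    return {**BUILTIN_DEFAULTS, **profile}
```

With this model, a `fast` profile that forgets `model_provider` silently resolves to the OpenAI provider even though the top level says `"ephone"`.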

MCP Server Integration

Codex CLI supports MCP (Model Context Protocol) to connect external tools, significantly extending its Agent capabilities.
# ~/.codex/config.toml

[mcp_servers.context7]
command = "npx"
args = ["-y", "@context7/mcp"]

[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_PERSONAL_ACCESS_TOKEN = "ghp_xxx" }
Once configured, Codex can query documentation, interact with GitHub, and more — without manually providing context.
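Under the hood, each `[mcp_servers.*]` entry tells Codex to spawn that command as a subprocess and speak JSON-RPC 2.0 with it over stdio. As an illustration of the protocol's shape (not Codex's actual handshake code; the client name/version values are placeholders), the first message a client sends looks like this:

```python
import json

def initialize_request(client_name: str, client_version: str) -> str:
    """Build an MCP 'initialize' request as a JSON-RPC 2.0 message.
    Field values here are illustrative defaults, not Codex internals."""
    msg = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }
    return json.dumps(msg)
```

The server replies with its own capabilities, after which the client can list and call the server's tools.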

Reasoning Control

For models that support reasoning (e.g., o4-mini), you can control reasoning depth:
model_reasoning_effort = "high"    # minimal | low | medium | high
model_reasoning_summary = "auto"   # auto | concise | detailed | none
Higher reasoning effort means deeper thinking but higher token usage. Use medium for daily tasks and high for complex refactoring.

Web Search

Codex supports web search during conversations to fetch the latest documentation and information:
web_search = "cached"   # disabled | cached | live
  • cached: Uses an OpenAI-maintained index (default, fast)
  • live: Fetches live web pages (most current info)
  • disabled: Turns off search entirely
You can also enable search at launch with --search:
codex --search "find the latest React 19 API changes"

Sandbox & Security

Codex provides three sandbox levels to protect your system:
| Mode | Description |
|---|---|
| read-only | Read-only mode; cannot modify files or run write commands |
| workspace-write | Can modify files within the workspace directory (default) |
| danger-full-access | Full access with no restrictions (use with caution) |
sandbox_mode = "workspace-write"
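The core of `workspace-write` is a path-containment rule: writes are allowed only inside the workspace directory. A simplified sketch of that rule (the real enforcement is OS-level sandboxing, not a path check like this):

```python
from pathlib import PurePosixPath

def write_allowed(sandbox_mode: str, workspace: str, target: str) -> bool:
    """Approximate the sandbox table: may `target` be written?"""
    if sandbox_mode == "danger-full-access":
        return True
    if sandbox_mode == "read-only":
        return False
    if sandbox_mode == "workspace-write":
        ws = PurePosixPath(workspace)
        t = PurePosixPath(target)
        return ws == t or ws in t.parents  # target inside the workspace
    raise ValueError(f"unknown sandbox_mode: {sandbox_mode}")
```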

Usage Examples

Code Refactoring

codex "convert all error handling in utils/helpers.go to use wrapped errors"

Bug Fixing with Screenshots

codex --image bug-screenshot.png "this button doesn't respond when clicked, help me locate and fix the issue"

Project Scaffolding

codex "create a Go + Gin REST API project scaffold with CRUD, middleware, and Docker config"

Git Operations

codex "review recent commit history, find the commit that introduced this bug, then fix it"

Code Review

codex "review recently modified files in src/services/, identify potential security issues and performance concerns"

Full Configuration Reference

# ~/.codex/config.toml

# Default model and provider (required)
model = "gpt-4o"
model_provider = "ephone"

# Approval policy
approval_policy = "on-request"

# Sandbox mode
sandbox_mode = "workspace-write"

# Reasoning control
model_reasoning_effort = "medium"

# Web search
web_search = "cached"

# Disable API response storage (required for third-party providers)
disable_response_storage = true

# Conversation history
[history]
persistence = "save-all"

# ePhone AI Provider
[model_providers.ephone]
name = "ePhone AI"
base_url = "https://api.ephone.ai/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"

# Fast profile
[profiles.fast]
model_provider = "ephone"
model = "gpt-4o-mini"
approval_policy = "never"
model_reasoning_effort = "minimal"

# Deep thinking profile
[profiles.think]
model_provider = "ephone"
model = "o4-mini"
approval_policy = "on-request"
model_reasoning_effort = "high"

Troubleshooting

Requests are routed to OpenAI instead of ePhone AI

Cause: model_provider = "ephone" is missing from the config, so Codex routes known model names (like gpt-4o) to the built-in OpenAI provider.
Fix: Ensure model_provider = "ephone" is set at the top level of config.toml. If you previously logged into Codex via ChatGPT, also clear cached credentials:

rm -f ~/.codex/auth.json

Connections still go to OpenAI’s servers

Cause: Missing model_provider configuration, causing Codex to route WebSocket connections to OpenAI’s servers instead of ePhone AI.
Fix: Ensure model_provider = "ephone" is set at the top level of config.toml. This routes connections to wss://api.ephone.ai/v1/responses.

Authentication fails despite a valid key

Cause: env_key was set to the API key value itself; it should contain the name of the environment variable (e.g., "OPENAI_API_KEY").
Fix: Set env_key = "OPENAI_API_KEY" and make sure the variable is exported via export OPENAI_API_KEY="sk-...".

Switching profiles falls back to OpenAI

Cause: The profile doesn’t include model_provider, so it reverts to the default OpenAI provider.
Fix: Add model_provider = "ephone" to every profile section.

Important Notes

Codex CLI’s autonomous mode calls the model frequently and executes system commands — token usage is much higher than regular chat. We recommend using a Tier 2 account to avoid hitting rate limits.
  • Set disable_response_storage = true to disable API response storage — required for third-party providers
  • Use the /compact command to compress overly long conversation history and prevent context window limits
  • Log files are located at ~/.codex/log/ — check them for troubleshooting

  • Official Codex Docs: OpenAI developer documentation
  • GitHub Repository: Source code & issues
  • AGENTS.md Spec: Project configuration spec