# OpenAI-compatible API server for Apple on-device models
A SwiftUI application that creates an OpenAI-compatible API server using Apple’s on-device Foundation Models. This allows you to use Apple Intelligence models locally through familiar OpenAI API endpoints.
Use it in any OpenAI-compatible app. To build and run, clone the repository and open the project in Xcode:

```bash
git clone https://github.com/gety-ai/apple-on-device-openai.git
cd apple-on-device-openai
open AppleOnDeviceOpenAI.xcodeproj
```
This project is implemented as a GUI application rather than a command-line tool due to Apple’s rate limiting policies for Foundation Models:
> “An app that has UI and runs in the foreground doesn’t have a rate limit when using the models; a macOS command line tool, which doesn’t have UI, does.”
>
> — Apple DTS Engineer (Source)
⚠️ Important Note: You may still encounter rate limits due to current limitations in Apple FoundationModels. If you experience rate limiting, please restart the server.
Start the server from the app (default address: `127.0.0.1:11535`). Once the server is running, you can access these OpenAI-compatible endpoints:
- `GET /health` - Health check
- `GET /status` - Model availability and status
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat completions (streaming and non-streaming)

Example request:

```bash
curl -X POST http://127.0.0.1:11535/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "apple-on-device",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "temperature": 0.7,
    "stream": false
  }'
```
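If you prefer not to install the SDK, the same request can be built with only the Python standard library. This is a minimal sketch: `build_chat_request` and `chat_completion` are hypothetical helpers, and sending the request assumes the server is running at the default address.

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:11535"  # default server address

def build_chat_request(messages, model="apple-on-device",
                       temperature=0.7, stream=False):
    """Build a POST request for the chat completions endpoint."""
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "stream": stream,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat_completion(messages, **kwargs):
    """Send a non-streaming request and return the parsed JSON response."""
    req = build_chat_request(messages, **kwargs)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server running, `chat_completion([{"role": "user", "content": "Hello"}])` should return the same JSON shape as the curl example above.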
Using the OpenAI Python SDK:

```python
from openai import OpenAI

# Point to your local server
client = OpenAI(
    base_url="http://127.0.0.1:11535/v1",
    api_key="not-needed",  # API key not required for local server
)

response = client.chat.completions.create(
    model="apple-on-device",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ],
    temperature=0.7,
    stream=True,  # Enable streaming
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
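Under the hood, streaming responses arrive as OpenAI-style server-sent events: `data: {...}` lines terminated by `data: [DONE]`. A minimal parser sketch, assuming the server follows that convention (`iter_stream_content` is a hypothetical helper, not part of this project):

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":
            return  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```

The SDK's `for chunk in response` loop above does this parsing for you; a raw HTTP client would apply something like this to the response body line by line.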
You can use the included test script to verify the server is working correctly and see example usage patterns:

```bash
python3 test_server.py
```
Make sure the server is running before executing the test script. The script provides comprehensive examples of how to interact with the API using both direct HTTP requests and the OpenAI Python SDK.
This server implements the OpenAI Chat Completions API with the following supported parameters:

- `model` - Model identifier (use `"apple-on-device"`)
- `messages` - Array of conversation messages
- `temperature` - Sampling temperature (0.0 to 2.0)
- `max_tokens` - Maximum number of tokens in the response
- `stream` - Enable streaming responses

🤖 This project was mainly “vibe coded” using Cursor + Claude Sonnet 4 & ChatGPT o3.
This project is licensed under the MIT License - see the LICENSE file for details.