API Reference
Welcome to the TigerCity.ai API Reference. Below is an overview of all available endpoints.
How API Endpoints Work
API endpoints are URLs that your application can call to interact with TigerCity.ai services. Each endpoint performs a specific action, such as creating a model, getting a response, or managing your account. You make HTTP requests (GET, POST, DELETE, etc.) to these endpoints with your API token for authentication.
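For example, a minimal authenticated call from Python might look like the sketch below. The base URL, endpoint path, and bearer-token header are illustrative assumptions, not values taken from this reference; substitute the documented URL of the endpoint you are calling.

```python
import os
import requests  # third-party HTTP client (pip install requests)

# Placeholder base URL and path for illustration only; replace them with the
# documented URL of the endpoint you want to call.
BASE_URL = "https://api.tigercity.ai"
ENDPOINT = "/your/endpoint/path"

token = os.environ["TIGER_API_KEY"]  # API token from your environment

response = requests.get(
    BASE_URL + ENDPOINT,
    headers={"Authorization": f"Bearer {token}"},  # assumed bearer-token header
    timeout=30,
)
response.raise_for_status()  # raise an exception on an error status code
print(response.json())
```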
Obtaining an API Token
To use the TigerCity.ai API, you need an API token for authentication. Here's how to get one:
- Go to the Playground section
- Navigate to ApiKeys
- Click the (+) button to create a new API key
- Give it a name; a dialog will appear showing the new key
- Copy the key and set it in your environment
Important: The key is shown only once, when it is generated. Save it immediately; it cannot be retrieved later, and if you lose it you will need to create a new one.
Example:
export TIGER_API_KEY=<your_api_key_here>
Setting the API Key Permanently
Linux / macOS
Add the export command to your shell configuration file:
- For bash: add to ~/.bashrc
- For zsh: add to ~/.zshrc
Open the file in a text editor and add:
export TIGER_API_KEY=<your_api_key_here>
Then reload your shell configuration: source ~/.bashrc (or source ~/.zshrc for zsh)
Windows
For Command Prompt:
- Open System Properties → Advanced → Environment Variables
- Under "User variables", click "New"
- Variable name: TIGER_API_KEY
- Variable value: your API key
- Click OK and restart your terminal
For PowerShell:
Run:
[System.Environment]::SetEnvironmentVariable('TIGER_API_KEY', 'your_api_key_here', 'User')
- Restart PowerShell for the changes to take effect
Streaming
Many endpoints support streaming responses for real-time feedback. Learn more about how streaming works and how to implement it in your application: Streaming Guide
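As a rough sketch of what consuming a streamed response can look like, the example below reads the body line by line as it arrives. The endpoint path, request payload, and line-delimited chunk format are assumptions for illustration; the Streaming Guide is the authoritative reference.

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Hypothetical streaming call: the path, payload, and line-delimited chunk
# format are assumptions, not the documented protocol.
with requests.post(
    "https://api.tigercity.ai/chat",
    headers={"Authorization": f"Bearer {token}"},
    json={"messages": [{"role": "user", "content": "Hello"}], "stream": True},
    stream=True,  # ask requests to yield the body incrementally instead of buffering it
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:  # skip keep-alive blank lines
            print(line)
```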
Available Endpoints
Send a structured list of input messages with text, and the model will generate the next message in the conversation.
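A non-streaming request might look like the following sketch; the path, the model field, and the message structure are assumptions based on the description above, not the documented schema.

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Assumed path, model field, and message structure for illustration.
payload = {
    "model": "Llama-3.1-8b",  # one of the base models listed below
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an API endpoint is in one sentence."},
    ],
}
resp = requests.post(
    "https://api.tigercity.ai/chat",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # the model's next message in the conversation
```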
List all available models, both the base models (Llama-3.1-8b and DeepSeek-R1-8b) and all the private models created by the current user.
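A sketch of how this might be called, assuming a GET request to a /models path (the path is an assumption, not taken from this reference):

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Assumed path for the list-models endpoint.
resp = requests.get(
    "https://api.tigercity.ai/models",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # base models plus any private models you have created
```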
Create Model allows you to create a copy of an existing model (called the parent model). Once created, your new model is independent and can be customized through training. You can experiment with different training parameters, datasets, and fine-tuning techniques without any risk to the original parent model.
Note: You can only train models that you have created yourself. The standard base models (like Llama-3.1-8b or DeepSeek-R1-8b) are read-only, so you must first create a copy of these models before you can train them.
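A hypothetical create-model request might pass the parent model and a name for the copy; the path and parameter names below are assumptions, and the model name is a made-up example.

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Assumed path and parameter names; the real schema may differ.
resp = requests.post(
    "https://api.tigercity.ai/models",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "parent": "Llama-3.1-8b",    # read-only base model to copy
        "name": "my-support-model",  # name for the new, trainable copy
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # details of the newly created model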
Delete a model that you have created. This permanently removes the model from your account.
Note: You can only delete models that you have created yourself. The standard base models (like Llama-3.1-8b or DeepSeek-R1-8b) are read-only and cannot be deleted.
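A sketch of a delete call, assuming the model identifier goes in the URL path (both the URL pattern and the model name are illustrative assumptions):

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Assumed URL pattern with the model identifier in the path.
resp = requests.delete(
    "https://api.tigercity.ai/models/my-support-model",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print("Model deleted")
```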
Fine-tune a language model by adjusting its weights to make it more likely to produce the results you want. Training modifies the model's internal parameters based on your provided examples, enabling it to learn new behaviors, preferences, and response patterns.
To train the model, you provide a conversation context (messages) along with multiple possible responses, each with a reward value indicating how desirable that response is. The model learns from these examples by updating its weights to favor responses with higher rewards, gradually improving its ability to generate the types of outputs you prefer.
This process allows you to customize the model's behavior for specific tasks, styles, or preferences without needing to build a model from scratch.
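To make the shape of a training request concrete, here is a sketch of a conversation context with two candidate responses and their rewards. The path and field names (messages, responses, reward) are assumptions derived from the description above, not the documented schema.

```python
import os
import requests

token = os.environ["TIGER_API_KEY"]

# Assumed payload shape: a conversation context plus several candidate
# responses, each scored with a reward indicating how desirable it is.
payload = {
    "model": "my-support-model",  # a model you created; base models are read-only
    "messages": [
        {"role": "user", "content": "My order arrived damaged. What can I do?"},
    ],
    "responses": [
        {"content": "That's not our problem.", "reward": 0.0},
        {"content": "I'm sorry to hear that. I can send a replacement or issue a refund right away.", "reward": 1.0},
    ],
}
resp = requests.post(
    "https://api.tigercity.ai/train",  # assumed path
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```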