Create a Container
Overview
The "Create a Container" API endpoint deploys a new container with the configuration you specify. Use this endpoint to initiate a container deployment with parameters such as model ID, GPU count, and GPU type.
This endpoint starts the container creation process and returns a unique container ID in the response, which can be used for subsequent operations such as status monitoring or deletion.
Example
- Method: POST
- Endpoint: https://api.gpulab.ai/container/deploy
Headers
| Header | Description | Type |
|---|---|---|
| api-key | Your API key for request authorization. | string |
Request Body Attributes
| Parameter | Description | Type |
|---|---|---|
| model_id | ID of the model to deploy. | int |
| gpu_count | Number of GPUs to allocate for the container. | int |
| gpu_type | Type of GPU to deploy (e.g., NVIDIA GeForce RTX 3090 Ti, NVIDIA GeForce GT 710, NVIDIA GeForce RTX 4090, etc.). | str |
| volume_container_identifier | Optional. Identifier for the volume container, if applicable. | Optional[str] |
| environment_variables | Optional. Dictionary of environment variables to pass to the container. | Optional[Dict[str, str]] |
| server_name | Optional. Custom name for the server. If not provided, a default name will be generated. | Optional[str] |
Request
Body (JSON)
{
  "model_id": 2,
  "gpu_count": 2,
  "gpu_type": "NVIDIA GeForce RTX 3090 Ti",
  "volume_container_identifier": "12345",
  "server_name": "737d0da16915489",
  "environment_variables": {
    "DEBUG": "true",
    "API_KEY": "your-api-key",
    "SERVER_URL": "https://example.com"
  }
}
Response (Success)
{
  "status": "success",
  "message": "Container Deploy Process Started in the background",
  "container_id": null
}
Note: The actual container ID will be provided in a subsequent response once the deployment completes successfully.
Response (Error)
{
  "status": "error",
  "message": "Error message details"
}
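The request above can be sketched in Python using only the standard library. This is a minimal, hedged example: the endpoint URL, the `api-key` header, and the body fields come from the documentation above, while the helper name, the timeout value, and the `Content-Type` header are illustrative assumptions rather than documented requirements.

```python
import json
import urllib.request

API_URL = "https://api.gpulab.ai/container/deploy"


def deploy_container(api_key: str, payload: dict) -> dict:
    """POST a container deployment request and return the parsed JSON response.

    Hypothetical helper; error handling beyond HTTP status checks is omitted.
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "api-key": api_key,  # request authorization header from the docs
            "Content-Type": "application/json",  # assumed; not stated in the docs
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Request body mirroring the JSON example above.
payload = {
    "model_id": 2,
    "gpu_count": 2,
    "gpu_type": "NVIDIA GeForce RTX 3090 Ti",
    "volume_container_identifier": "12345",
    "server_name": "737d0da16915489",
    "environment_variables": {
        "DEBUG": "true",
        "API_KEY": "your-api-key",
        "SERVER_URL": "https://example.com",
    },
}

# To send the request (requires a valid API key and network access):
#   result = deploy_container("your-api-key", payload)
#   print(result["status"], result["container_id"])
```

Because the success response returns `"container_id": null` while the deployment runs in the background, a caller should expect to obtain the actual container ID later rather than from this response.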