This guide demonstrates how to combine the GravixLayer Sandbox SDK with the Inference SDK to build a simple code-executing agent.

Prerequisites

  • GravixLayer SDK installed (pip install gravixlayer)
  • API Key configured
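The SDK typically picks up the API key from the environment. A minimal sketch of that setup is below; the variable name `GRAVIXLAYER_API_KEY` is an assumption here, so confirm the exact name against your GravixLayer dashboard or SDK docs:

```python
import os

# Assumed environment variable name -- verify against the GravixLayer docs.
os.environ["GRAVIXLAYER_API_KEY"] = "your-api-key"
```

Alternatively, export the variable in your shell before running the script so the key never appears in source code.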

Example: Python Code Interpreter

This example shows a simple workflow in which an LLM generates Python code to solve a problem and the Sandbox executes it.
# pip install gravixlayer
from gravixlayer import GravixLayer, Sandbox

# Create Gravix client
client = GravixLayer()

system = "You are a helpful assistant that can execute python code. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to Gravix Inference API
response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
llm_content = response.choices[0].message.content

# Clean up code block formatting (strip backticks)
code = llm_content.replace("```python", "").replace("```", "").strip()

print(f"Generated Code:\n{code}\n")

# Execute code in Gravix AgentBox
if code:
    # Create a sandbox
    sandbox = Sandbox.create()
    print(f"Created Sandbox: {sandbox.sandbox_id}")
    
    # Run the code
    execution = sandbox.run_code(code)
    
    # Print results
    if execution.logs['stdout']:
        print("Output:", execution.logs['stdout'])
    if execution.logs['stderr']:
        print("Error:", execution.logs['stderr'])
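The string `replace` calls above break down if the model wraps its answer in prose around the code fence. A more robust extraction step, using only the standard library and independent of the SDK, could look like this (a sketch, not part of the GravixLayer API):

```python
import re

def extract_code(llm_content: str) -> str:
    """Return the body of the first fenced code block, or the raw text."""
    match = re.search(r"```(?:python)?\s*\n(.*?)```", llm_content, re.DOTALL)
    if match:
        return match.group(1).strip()
    # No fenced block found: assume the whole response is code.
    return llm_content.strip()

print(extract_code("```python\nprint('hi')\n```"))  # -> print('hi')
```

Using a non-greedy match (`.*?`) with `re.DOTALL` captures only the first code block, which keeps any trailing explanation the model adds out of the sandbox.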