The GravixLayer SDK provides comprehensive memory capabilities, allowing you to store and search information for AI applications. Think of it as a smart database that remembers user preferences and conversations.
Public Preview: Gravix Layer is currently in public preview. Features are experimental and may change or break as API endpoints and models continue to be updated.
Memory operations allow you to:
- Add Memories: Store user preferences, conversation context, and important facts
- Search Memories: Find relevant information using semantic search capabilities
- List Memories: Retrieve all stored memories for a specific user or context
- Update Memories: Modify existing memory content and metadata
- Delete Memories: Remove outdated or unwanted memory entries (list, update, and delete are sketched after this list)
- Process Conversations: Automatically extract insights from chat interactions using AI
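The examples in this guide demonstrate add and search in detail. As a hedged sketch of the remaining operations, the method names and signatures below (get_all, update, delete, and the memory_id parameter) are assumptions inferred from the operation list above, not confirmed SDK signatures; it also assumes a memory handle initialized as shown in Quick Examples:

# Assumed method names, inferred from the operation list; verify against the SDK reference
all_memories = memory.get_all(user_id="alice")           # List Memories (assumed name)
memory.update(memory_id="mem_123", data="Updated text")  # Update Memories (assumed signature; mem_123 is a placeholder)
memory.delete(memory_id="mem_123")                       # Delete Memories (assumed signature)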
Prerequisites
Before using memory operations, you need to set up your API key:
API Key Required: You must export your GravixLayer API key in your terminal before using memory operations. All memory operations are tied to your API key and account.
Set your API key:
Windows (CMD):
set GRAVIXLAYER_API_KEY=your_api_key_here

Windows (PowerShell):
$env:GRAVIXLAYER_API_KEY="your_api_key_here"

Linux/macOS:
export GRAVIXLAYER_API_KEY=your_api_key_here
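To confirm the key is actually visible to your process before making any calls, a quick standard-library check in Python:

import os

# Fail fast if the key was not exported in the current shell
if not os.environ.get("GRAVIXLAYER_API_KEY"):
    raise RuntimeError("GRAVIXLAYER_API_KEY is not set")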
Core Operations
AI Processing with infer Parameter
Simple Text: Stored exactly as provided
memory.add("I love pizza", user_id="alice") # Stored as: "I love pizza"
Conversations: AI extracts meaningful information from messages when infer=True (default)
messages = [
    {"role": "user", "content": "I'm looking for a restaurant"},
    {"role": "assistant", "content": "What cuisine do you prefer?"},
    {"role": "user", "content": "I love spicy Thai food"}
]

# AI analyzes the conversation and extracts: "User prefers spicy Thai cuisine"
memory.add(messages, user_id="alice")  # infer=True by default
Raw Storage (infer=False), sketched below:
- Stores messages without extracting meaningful information
- No AI analysis, faster processing
- Preserves exact conversation content
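A minimal sketch of raw storage, reusing the messages list above with the documented infer flag:

# Store the conversation verbatim; skips the AI extraction pass
raw_result = memory.add(messages, user_id="alice", infer=False)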
Quick Examples
Python SDK
JavaScript SDK
from gravixlayer import GravixLayer
client = GravixLayer()
# Initialize memory with all required parameters
memory = client.memory(
    embedding_model="baai/bge-large-en-v1.5",
    inference_model="google/gemma-3-12b-it",
    index_name="my_memories",
    cloud_provider="AWS",
    region="us-east-1"
)
# Simple text - stored as-is
result = memory.add("I love pizza", user_id="alice")
print(f"Stored: {result['results'][0]['memory']}")
# Conversation - AI extracts insights
conversation = [
    {"role": "user", "content": "I'm looking for a restaurant"},
    {"role": "user", "content": "I love spicy Thai food"}
]
conv_result = memory.add(conversation, user_id="alice") # infer=True by default
print(f"AI extracted: {conv_result['results'][0]['memory']}")
# Search finds both
results = memory.search("food", user_id="alice")
print(f"Found {len(results['results'])} memories")
Expected Output:
Stored: I love pizza
AI extracted: User prefers spicy Thai cuisine
Found 2 memories
import { GravixLayer, Memory } from 'gravixlayer';
const client = new GravixLayer();
// Initialize memory with all required parameters
const memory = new Memory(
  client,
  'baai/bge-large-en-v1.5',
  'google/gemma-3-12b-it',
  'my_memories',
  'AWS',
  'us-east-1'
);
// Simple text - stored as-is
const result = await memory.add("I love pizza", "alice");
console.log(`Stored: ${result.results[0].memory}`);
// Conversation - AI extracts insights
const conversation = [
  {role: "user", content: "I'm looking for a restaurant"},
  {role: "user", content: "I love spicy Thai food"}
];
const convResult = await memory.add(conversation, "alice"); // infer=true by default
console.log(`AI extracted: ${convResult.results[0].memory}`);
// Search finds both
const results = await memory.search("food", "alice");
console.log(`Found ${results.results.length} memories`);
Expected Output:
Stored: I love pizza
AI extracted: User prefers spicy Thai cuisine
Found 2 memories
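Each response is a plain object whose results array holds the stored entries, so you can iterate over it directly. A small sketch using the Python search call above:

# Print every memory returned by the search
for item in results["results"]:
    print(item["memory"])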
Configuration Parameters
All of the following parameters are required when initializing memory, except delete_protection:
- embedding_model - Converts text to a searchable vector format (required)
- inference_model - AI model that extracts memories from conversations (required)
- index_name - Where memories are stored (like folders) (required)
- cloud_provider - Where data is hosted (required)
- region - Cloud region (required)
- delete_protection - Protects the index from deletion (optional, default: False; sketched below)
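As a sketch, here is the same initialization as in Quick Examples with delete protection enabled; only the delete_protection argument differs:

from gravixlayer import GravixLayer

client = GravixLayer()
memory = client.memory(
    embedding_model="baai/bge-large-en-v1.5",
    inference_model="google/gemma-3-12b-it",
    index_name="my_memories",
    cloud_provider="AWS",
    region="us-east-1",
    delete_protection=True  # Guards the index against accidental deletion
)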
Multilingual Configuration Example
Python SDK
JavaScript SDK
from gravixlayer import GravixLayer
client = GravixLayer()
# Initialize with multilingual support
memory = client.memory(
    embedding_model="microsoft/multilingual-e5-large",  # Supports 100+ languages
    inference_model="google/gemma-3-12b-it",
    index_name="user_preferences",
    cloud_provider="AWS",
    region="us-east-1",
    delete_protection=False  # Optional, defaults to False
)
# Add multilingual memories
memory.add("El usuario prefiere pizza", user_id="alice")
memory.add("L'utilisateur aime le café", user_id="alice")
memory.add("用户喜欢寿司", user_id="alice")
# Search works across languages
results = memory.search("food preferences", user_id="alice")
print(f"Found {len(results['results'])} memories")
Expected Output:

import { GravixLayer, Memory } from 'gravixlayer';
const client = new GravixLayer();
// Initialize with multilingual support
const memory = new Memory(
  client,
  'microsoft/multilingual-e5-large',  // Supports 100+ languages
  'google/gemma-3-12b-it',
  'user_preferences',
  'AWS',
  'us-east-1',
  false  // deleteProtection - optional, defaults to false
);
// Add multilingual memories
await memory.add("El usuario prefiere pizza", "alice");
await memory.add("L'utilisateur aime le café", "alice");
await memory.add("用户喜欢寿司", "alice");
// Search works across languages
const results = await memory.search("food preferences", "alice");
console.log(`Found ${results.results.length} memories`);
Expected Output:
Available Embedding Models
| Model | Languages |
|---|---|
| baai/bge-large-en-v1.5 | English |
| microsoft/multilingual-e5-large | 100+ |
| nomic-ai/nomic-embed-text-v1.5 | English |
Available Cloud Providers
| Provider | Regions | Best For |
|---|---|---|
| AWS | us-east-1, … | |
| GCP | us-east1, … | |
| Azure | eastus, … | |
| Gravix | eu-west-1, … | |
Async Support
GravixLayer supports async/await for both Python and JavaScript applications:
Basic Async Example
Python SDK
JavaScript SDK
import asyncio
from gravixlayer import AsyncGravixLayer

async def main():
    client = AsyncGravixLayer()

    # Initialize memory with all required parameters
    memory = await client.memory(
        embedding_model="baai/bge-large-en-v1.5",
        inference_model="google/gemma-3-12b-it",
        index_name="my_memories",
        cloud_provider="AWS",
        region="us-east-1"
    )

    # Add memory
    result = await memory.add("I love pizza", user_id="alice")
    print(f"Added: {result['results'][0]['memory']}")

    # Search memories
    results = await memory.search("food", user_id="alice")
    print(f"Found: {results['results'][0]['memory']}")

# Run the async function
if __name__ == "__main__":
    asyncio.run(main())
Expected Output:
Added: I love pizza
Found: I love pizza
import { GravixLayer, Memory } from 'gravixlayer';

async function main() {
  const client = new GravixLayer();

  // Initialize memory with all required parameters
  const memory = new Memory(
    client,
    'baai/bge-large-en-v1.5',
    'google/gemma-3-12b-it',
    'my_memories',
    'AWS',
    'us-east-1'
  );

  // Add memory
  const result = await memory.add("I love pizza", "alice");
  console.log(`Added: ${result.results[0].memory}`);

  // Search memories
  const results = await memory.search("food", "alice");
  console.log(`Found: ${results.results[0].memory}`);
}

main().catch(console.error);
Expected Output:
Added: I love pizza
Found: I love pizza
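Because add and search are awaitable, independent writes can run concurrently. A minimal sketch assuming the memory handle from the example above (the stored strings are placeholders):

import asyncio

async def add_many(memory):
    # Fan out several independent add calls and await them together
    await asyncio.gather(
        memory.add("I love pizza", user_id="alice"),
        memory.add("I prefer window seats", user_id="alice"),
        memory.add("I'm allergic to peanuts", user_id="alice"),
    )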
Async with Multilingual Support
Python SDK
JavaScript SDK
import asyncio
from gravixlayer import AsyncGravixLayer

async def async_multilingual_example():
    client = AsyncGravixLayer()

    # Initialize with multilingual configuration
    memory = await client.memory(
        embedding_model="microsoft/multilingual-e5-large",
        inference_model="google/gemma-3-12b-it",
        index_name="user_preferences",
        cloud_provider="AWS",
        region="us-east-1"
    )

    # Add multilingual memories
    await memory.add("El usuario prefiere pizza", user_id="alice")
    results = await memory.search("comida", user_id="alice")
    print(f"Found {len(results['results'])} memories")

if __name__ == "__main__":
    asyncio.run(async_multilingual_example())
Expected Output:

import { GravixLayer, Memory } from 'gravixlayer';
async function multilingualExample() {
  const client = new GravixLayer();

  // Initialize with multilingual configuration
  const memory = new Memory(
    client,
    'microsoft/multilingual-e5-large',
    'google/gemma-3-12b-it',
    'user_preferences',
    'AWS',
    'us-east-1'
  );

  // Add multilingual memories
  await memory.add("El usuario prefiere pizza", "alice");
  const results = await memory.search("comida", "alice");
  console.log(`Found ${results.results.length} memories`);
}

multilingualExample().catch(console.error);
Expected Output:
Async Conversation Processing
Python SDK
JavaScript SDK
import asyncio
from gravixlayer import AsyncGravixLayer

async def process_conversation():
    client = AsyncGravixLayer()

    # Initialize memory
    memory = await client.memory(
        embedding_model="baai/bge-large-en-v1.5",
        inference_model="google/gemma-3-12b-it",
        index_name="conversations",
        cloud_provider="AWS",
        region="us-east-1"
    )

    conversation = [
        {"role": "user", "content": "I love sci-fi movies"},
        {"role": "assistant", "content": "Great! I'll remember that."}
    ]

    # AI extracts memories from conversation
    result = await memory.add(conversation, user_id="alice", infer=True)
    print(f"Extracted {len(result['results'])} memories")

if __name__ == "__main__":
    asyncio.run(process_conversation())
Expected Output:

import { GravixLayer, Memory } from 'gravixlayer';
async function processConversation() {
  const client = new GravixLayer();

  // Initialize memory
  const memory = new Memory(
    client,
    'baai/bge-large-en-v1.5',
    'google/gemma-3-12b-it',
    'conversations',
    'AWS',
    'us-east-1'
  );

  const conversation = [
    {role: "user", content: "I love sci-fi movies"},
    {role: "assistant", content: "Great! I'll remember that."}
  ];

  // AI extracts memories from conversation
  const result = await memory.add(conversation, "alice", { infer: true });
  console.log(`Extracted ${result.results.length} memories`);
}

processConversation().catch(console.error);
Expected Output: