Basic Usage
Python SDK:
from gravixlayer import GravixLayer
client = GravixLayer()
memory = client.memory
# Add simple text
result = memory.add("I love pizza", user_id="alice")
print(f"Added memory: {result['results'][0]['memory']}")
print(f"Memory ID: {result['results'][0]['id']}")
# Add with metadata
result = memory.add("User prefers dark mode", user_id="alice", metadata={"type": "preference"})
print(f"Added preference: {result['results'][0]['memory']}")
print(f"Metadata: {result['results'][0]['metadata']}")
# Get all memories to verify
all_memories = memory.get_all(user_id="alice")
print(f"\nTotal memories for alice: {len(all_memories['results'])}")
for i, mem in enumerate(all_memories['results'], 1):
    print(f"{i}. {mem['memory']}")
    if mem.get('metadata'):
        print(f"   Metadata: {mem['metadata']}")
Output:
Added memory: I love pizza
Memory ID: b355d0d2-3eaa-4bc6-a61b-48ee615279bf
Added preference: User prefers dark mode
Metadata: {'type': 'preference'}

Total memories for alice: 2
1. I love pizza
2. User prefers dark mode
   Metadata: {'type': 'preference'}
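Once memories are stored, you can retrieve them by meaning rather than exact wording with memory.search (the same call used in the conversation example below). A minimal sketch; the query string here is illustrative:

from gravixlayer import GravixLayer

client = GravixLayer()
memory = client.memory

# Semantic search matches by meaning, not keywords, so a query
# like "favorite foods" can surface "I love pizza"
search_results = memory.search("favorite foods", user_id="alice")
for hit in search_results['results']:
    print(f"- {hit['memory']}")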
Add Conversations
Store entire conversations and let AI extract key memories:

Python SDK:
from gravixlayer import GravixLayer
client = GravixLayer()
memory = client.memory
# Store a conversation with AI inference
conversation = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
result = memory.add(conversation, user_id="alice", infer=True, metadata={"type": "conversation"})
print(f"AI extracted {len(result['results'])} memories from conversation:")
for i, extracted_memory in enumerate(result['results'], 1):
    print(f"{i}. {extracted_memory['memory']}")
    print(f"   ID: {extracted_memory['id']}")
    if extracted_memory.get('metadata'):
        print(f"   Metadata: {extracted_memory['metadata']}")
# Verify by searching for movie preferences
search_results = memory.search("movie preferences", user_id="alice")
print(f"\nFound {len(search_results['results'])} movie-related memories:")
# Use a distinct name so the add() result above is not shadowed
for hit in search_results['results']:
    print(f"- {hit['memory']}")
Output:
AI extracted 2 memories from conversation:
1. User prefers sci-fi movies
   ID: c455d0d2-3eaa-4bc6-a61b-48ee615279bf
   Metadata: {'type': 'conversation'}
2. User dislikes thriller movies
   ID: d755d0d2-3eaa-4bc6-a61b-48ee615279bf
   Metadata: {'type': 'conversation'}

Found 2 movie-related memories:
- User prefers sci-fi movies
- User dislikes thriller movies
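The example above passes infer=True, so the model distills the conversation into discrete facts. If you need to keep the original messages instead, the same add call takes the infer flag; the sketch below assumes infer=False stores the content verbatim, which you should verify against the API reference:

# Assumption: infer=False skips AI extraction and stores the
# conversation content as-is instead of distilled facts
raw = memory.add(conversation, user_id="alice", infer=False)
print(f"Stored {len(raw['results'])} entries without inference")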
Custom Configuration (Optional)
The memory system uses smart defaults, but you can customize it for specific needs.

Configuration Parameters

What each setting does:
- embedding_model: how text gets converted to searchable vectors
- inference_model: the AI model that extracts memories from conversations
- index_name: where memories are stored (like folders)
- cloud_provider: where your data is hosted
- region: the specific region your data is hosted in
Python SDK:
from gravixlayer import GravixLayer
client = GravixLayer()
memory = client.memory
# Configure for multilingual app with organized storage
memory.switch_configuration(
    embedding_model="microsoft/multilingual-e5-large",  # Supports 100+ languages
    inference_model="qwen/qwen-2.5-vl-7b-instruct",     # Better context understanding
    index_name="user_preferences",                      # Organized storage
    cloud_provider="GCP",                               # Google Cloud hosting
    region="us-east1"                                   # Specific region
)
# Now works with any language
result1 = memory.add("El usuario prefiere pizza", user_id="alice")
result2 = memory.add("L'utilisateur aime le café", user_id="alice")
result3 = memory.add("用户喜欢寿司", user_id="alice")
print("Added multilingual memories:")
print(f"Spanish: {result1['results'][0]['memory']}")
print(f"French: {result2['results'][0]['memory']}")
print(f"Chinese: {result3['results'][0]['memory']}")
# Check current configuration
config = memory.get_current_configuration()
print(f"\nCurrent configuration:")
print(f"Embedding model: {config['embedding_model']}")
print(f"Inference model: {config['inference_model']}")
print(f"Index name: {config['index_name']}")
print(f"Cloud provider: {config['cloud_provider']}")
print(f"Region: {config['region']}")
# Search works across all languages
search_results = memory.search("food preferences", user_id="alice")
print(f"\nFound {len(search_results['results'])} food-related memories:")
for hit in search_results['results']:
    print(f"- {hit['memory']}")
Output:
🔄 Switched embedding model to: microsoft/multilingual-e5-large
🔄 Switched inference model to: qwen/qwen-2.5-vl-7b-instruct
🔄 Switched to database: user_preferences
🔄 Switched cloud provider to: GCP
🔄 Switched region to: us-east1
✅ Configuration updated successfully
Added multilingual memories:
Spanish: El usuario prefiere pizza
French: L'utilisateur aime le café
Chinese: 用户喜欢寿司

Current configuration:
Embedding model: microsoft/multilingual-e5-large
Inference model: qwen/qwen-2.5-vl-7b-instruct
Index name: user_preferences
Cloud provider: GCP
Region: us-east1

Found 3 food-related memories:
- El usuario prefiere pizza
- L'utilisateur aime le café
- 用户喜欢寿司
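Because switch_configuration takes plain keyword arguments, it is easy to drive from your deployment settings. A minimal sketch; the environment names, index names, and region below are hypothetical stand-ins for your own values:

import os

from gravixlayer import GravixLayer

# Hypothetical per-environment settings; substitute your own values
CONFIGS = {
    "dev": {"index_name": "dev_memories", "cloud_provider": "GCP", "region": "us-east1"},
    "prod": {"index_name": "prod_memories", "cloud_provider": "GCP", "region": "us-east1"},
}

client = GravixLayer()
memory = client.memory

# Pick the configuration for the current deployment (defaults to dev)
env = os.environ.get("APP_ENV", "dev")
memory.switch_configuration(**CONFIGS[env])
print(memory.get_current_configuration())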

