The context field in an Ollama API response (specifically from the /api/generate endpoint) is an array of numbers that encodes the conversation history as tokens. This array is how you maintain conversational memory across requests: when you send a new prompt, you include the context from the previous response so the model can "remember" the ongoing conversation. (The /api/chat endpoint instead takes the history as a messages array.) The context array is not meant for direct human interpretation, and it is not a vector embedding, so it should not be stored in a vector database like Qdrant.
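For example, a follow-up /api/generate request carries the previous response's context array in its body. A minimal sketch of building that body (the model name and the context values here are placeholders, not real output):

```python
import json

def build_followup_request(model, prompt, previous_context):
    """Build the JSON body for a follow-up /api/generate call.

    Passing the `context` array from the prior response lets the model
    continue the conversation.
    """
    body = {"model": model, "prompt": prompt, "stream": False}
    if previous_context:
        body["context"] = previous_context
    return body

# First turn: POST to /api/generate with no context; the response includes
# a `context` array. Second turn: feed that array back.
first_context = [1, 2, 3]  # stand-in for the array a real response returns
payload = build_followup_request("llama3", "And what about tomorrow?", first_context)
print(json.dumps(payload))
```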
Loading Ollama Embeddings to Qdrant Vector Collection
To load information from Ollama into Qdrant, use the embeddings generated by Ollama, not the context array.
Generate Embeddings with Ollama.
You can use the /api/embeddings endpoint in Ollama to generate vector embeddings for your text data. This endpoint takes a model and a prompt (the text you want to embed) and returns an embedding array, which is a list of floating-point numbers representing the vector.
{
  "model": "your_embedding_model",
  "prompt": "This is the text I want to embed."
}
The response will contain the embedding array:
{
  "embedding": [0.123, 0.456, ..., 0.789]
}
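A minimal helper for calling this endpoint, using only the standard library. The model name nomic-embed-text is just an example (any embedding model you have pulled in Ollama works), and the URL assumes Ollama's default local port:

```python
import json
import urllib.request

OLLAMA_EMBEDDINGS_URL = "http://localhost:11434/api/embeddings"  # default local port

def embedding_request(text, model="nomic-embed-text"):
    # Request body for /api/embeddings; the model name is only an example
    return {"model": model, "prompt": text}

def ollama_embed(text, model="nomic-embed-text"):
    # POST the request and return the `embedding` list of floats
    data = json.dumps(embedding_request(text, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_EMBEDDINGS_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]
```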
Create a Qdrant Collection.
Before adding data, create a collection in Qdrant with the appropriate vector size and distance metric (e.g., Cosine, Dot, Euclidean). The size should match the dimensionality of the embeddings generated by your Ollama model.
Python
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

client.recreate_collection(
    collection_name="my_ollama_embeddings",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),  # Adjust size based on your model
)
Insert Embeddings into Qdrant.
Once you have the embeddings from Ollama, you can insert them into your Qdrant collection as points. Each point will have a unique ID, the generated vector, and optionally, a payload dictionary for storing associated metadata (like the original text).
Python
# Assuming 'embedding_vector' is the array from Ollama and 'original_text' is the source text
client.upsert(
    collection_name="my_ollama_embeddings",
    points=[
        models.PointStruct(
            id=1,  # Unique ID for the point
            vector=embedding_vector,
            payload={"text": original_text}
        )
    ]
)
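The embed and insert steps can be wired together. A minimal sketch that batches texts into point dicts in the shape Qdrant expects (id, vector, payload); the helper name is illustrative, and `embed_fn` stands in for a real call to Ollama's /api/embeddings:

```python
def texts_to_points(texts, embed_fn):
    """Turn texts into Qdrant-style point dicts using an embedding function.

    `embed_fn` maps a string to a list of floats, e.g. a call to Ollama's
    /api/embeddings endpoint.
    """
    return [
        {"id": i, "vector": embed_fn(text), "payload": {"text": text}}
        for i, text in enumerate(texts, start=1)
    ]

# Example with a fake embedder; a real one would call Ollama
fake_embed = lambda t: [float(len(t)), 0.0]
points = texts_to_points(["hello", "world!"], fake_embed)
```

Each dict can be passed to models.PointStruct(**point) before the upsert call, or sent as-is to Qdrant's REST points API.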
