1 change: 1 addition & 0 deletions docs.json
@@ -407,6 +407,7 @@
"guides/embedders/openai",
"guides/langchain",
"guides/embedders/huggingface",
"guides/embedders/bedrock",
"guides/embedders/cloudflare",
"guides/embedders/cohere",
"guides/embedders/mistral",
166 changes: 166 additions & 0 deletions guides/embedders/bedrock.mdx
@@ -0,0 +1,166 @@
---
title: Semantic Search with AWS Bedrock Embedding Models
description: This guide will walk you through the process of setting up Meilisearch with AWS Bedrock embedding models to enable semantic search capabilities.
---

## Introduction

This guide walks you through setting up Meilisearch with AWS Bedrock embedding models to enable semantic search. By combining Meilisearch's AI-powered search features with the high-quality embedding models from Amazon and third-party providers available on Bedrock, you can enhance your search experience and retrieve more relevant results.

## Requirements

To follow this guide, you'll need:

- A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.11 or a self-hosted Meilisearch instance
- An AWS account with access to Amazon Bedrock and a Bedrock API key for embedding generation. You can sign up for an AWS account at [AWS](https://aws.amazon.com/)
- Access to the embedding models you plan to use on AWS Bedrock in your AWS account
- No additional backend: Meilisearch calls the Bedrock API directly through its REST embedder

## Setting up Meilisearch

To set up an embedder in Meilisearch, you need to configure it in your settings. You can refer to the [Meilisearch documentation](/reference/api/settings) for more details on updating the embedder settings.

AWS Bedrock provides access to several high-quality embedding models:

- `amazon.titan-embed-text-v1`: 1,536 dimensions (Amazon Titan Text Embeddings G1)
- `amazon.titan-embed-text-v2:0`: 1,024 dimensions (Amazon Titan Text Embeddings V2)
- `amazon.nova-2-multimodal-embeddings-v1:0`: 256/384/1024/3072 dimensions (Amazon Nova 2 Multimodal Embeddings - supports text, images, video, and audio)
- `cohere.embed-english-v3`: 1,024 dimensions (Cohere English embeddings)
- `cohere.embed-multilingual-v3`: 1,024 dimensions (Cohere Multilingual embeddings)

### Getting a Bedrock API key

Before configuring the embedder, you'll need to obtain a Bedrock API key:

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/)
2. Navigate to the [Amazon Bedrock console](https://console.aws.amazon.com/bedrock/)
3. In the left navigation, choose **API keys**
4. Choose **Generate API key**
5. Set an expiration period (recommended: 30 days for testing)
6. Copy the generated API key

**Important**: Make sure to generate your API key in the same AWS region where you plan to use the Bedrock embedding models, as API keys are region-specific.



Here's an example of embedder settings for AWS Bedrock embedding models using Amazon Titan:

```json
{
  "bedrock-titan": {
    "source": "rest",
    "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/amazon.titan-embed-text-v2:0/invoke",
    "apiKey": "<Your Bedrock API Key>",
    "dimensions": 1024,
    "documentTemplate": "<Custom template (Optional, but recommended)>",
    "request": {
      "inputText": "{{text}}"
    },
    "response": {
      "embedding": "{{embedding}}"
    }
  }
}
```

In this configuration:

- `source`: Specifies the source of the embedder, which is set to "rest" for using Bedrock's REST API.
- `url`: The Bedrock Runtime API endpoint for the specific model and region.
- `apiKey`: Replace `<Your Bedrock API Key>` with your actual Bedrock API key.
- `dimensions`: Specifies the dimensions of the embeddings. Set to 1024 for Titan V2 and Cohere models, or 1536 for Titan V1.
- `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
- `request`: The request format expected by the Bedrock model.
- `response`: The response format returned by the Bedrock model.
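To apply these settings, send them to the embedder settings endpoint of your index. Below is a minimal sketch using Python's `requests` library; the instance URL, Meilisearch API key, and the `movies` index name are placeholders for this example:

```python
import requests

# Placeholders for this example: replace with your own values
MEILISEARCH_URL = "http://localhost:7700"
MEILISEARCH_API_KEY = "<Your Meilisearch API Key>"
INDEX_UID = "movies"

embedders = {
    "bedrock-titan": {
        "source": "rest",
        "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/amazon.titan-embed-text-v2:0/invoke",
        "apiKey": "<Your Bedrock API Key>",
        "dimensions": 1024,
        "request": {"inputText": "{{text}}"},
        "response": {"embedding": "{{embedding}}"},
    }
}

# PATCH /indexes/{index_uid}/settings/embedders returns an asynchronous task
response = requests.patch(
    f"{MEILISEARCH_URL}/indexes/{INDEX_UID}/settings/embedders",
    headers={"Authorization": f"Bearer {MEILISEARCH_API_KEY}"},
    json=embedders,
)
print(response.json())  # includes the taskUid you can use to track the update
```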

For different Bedrock embedding models, you'll need to adjust the URL and request/response formats:

**Cohere models** use a different format:
```json
{
  "cohere-english": {
    "source": "rest",
    "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/cohere.embed-english-v3/invoke",
    "apiKey": "<Your Bedrock API Key>",
    "dimensions": 1024,
    "request": {
      "texts": ["{{text}}"],
      "input_type": "search_document"
    },
    "response": {
      "embeddings": ["{{embedding}}"]
    }
  }
}
```

**Amazon Nova 2 Multimodal Embeddings** uses a different request format:
```json
{
  "nova-multimodal": {
    "source": "rest",
    "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/amazon.nova-2-multimodal-embeddings-v1:0/invoke",
    "apiKey": "<Your Bedrock API Key>",
    "dimensions": 1024,
    "request": {
      "schemaVersion": "nova-multimodal-embed-v1",
      "taskType": "SINGLE_EMBEDDING",
      "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": 1024,
        "text": {
          "truncationMode": "END",
          "value": "{{text}}"
        }
      }
    },
    "response": {
      "embeddings": [{"embedding": "{{embedding}}"}]
    }
  }
}
```

Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
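For example, adding documents to the index queues an indexing task during which embeddings are generated. Here is a small sketch, reusing the placeholder values from the earlier example (the `movies` index and its fields are illustrative):

```python
import requests

MEILISEARCH_URL = "http://localhost:7700"           # placeholder
MEILISEARCH_API_KEY = "<Your Meilisearch API Key>"  # placeholder
INDEX_UID = "movies"                                # placeholder index and fields

documents = [
    {"id": 1, "title": "Shazam!", "overview": "A teenage boy turns into an adult superhero."},
    {"id": 2, "title": "Inside Out", "overview": "Emotions guide a young girl through a move to a new city."},
]

# Adding documents queues an asynchronous indexing task;
# embeddings are generated during indexing using the configured embedder
response = requests.post(
    f"{MEILISEARCH_URL}/indexes/{INDEX_UID}/documents",
    headers={"Authorization": f"Bearer {MEILISEARCH_API_KEY}"},
    json=documents,
)
print(response.json())
```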



It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/tasks).
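For instance, a quick way to list recent tasks and spot failures (such as embedder or authentication errors) through the tasks API, again with placeholder values:

```python
import requests

MEILISEARCH_URL = "http://localhost:7700"           # placeholder
MEILISEARCH_API_KEY = "<Your Meilisearch API Key>"  # placeholder

# GET /tasks lists recent tasks; inspect their status and error details
response = requests.get(
    f"{MEILISEARCH_URL}/tasks",
    headers={"Authorization": f"Bearer {MEILISEARCH_API_KEY}"},
    params={"limit": 20},
)
for task in response.json()["results"]:
    print(task["uid"], task["type"], task["status"], task.get("error"))
```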

## Testing semantic search

With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured Bedrock embedding model and then use it to find the most semantically similar documents in the vector store.

To perform a semantic search, make a regular search request and include the `hybrid` parameter:

```json
{
  "q": "<Query made by the user>",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "bedrock-titan"
  }
}
```

In this request:

- `q`: Represents the user's search query.
- `hybrid`: Specifies the configuration for the hybrid search.
- `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
- `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "bedrock-titan".

You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
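As an illustration, here is a sketch of such a search request using Python's `requests`; the query string, index name, and connection details are placeholders:

```python
import requests

MEILISEARCH_URL = "http://localhost:7700"           # placeholder
MEILISEARCH_API_KEY = "<Your Meilisearch API Key>"  # placeholder
INDEX_UID = "movies"                                # placeholder

# POST /indexes/{index_uid}/search with the hybrid parameter
response = requests.post(
    f"{MEILISEARCH_URL}/indexes/{INDEX_UID}/search",
    headers={"Authorization": f"Bearer {MEILISEARCH_API_KEY}"},
    json={
        "q": "superhero movie for kids",
        "hybrid": {"semanticRatio": 1, "embedder": "bedrock-titan"},
    },
)
for hit in response.json()["hits"]:
    print(hit["title"])
```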

## Important considerations

**Setup order**: Configure embedders before indexing documents. Embeddings are only generated when documents are indexed with embedders already configured. If you indexed documents before configuring embedders, you must re-index them.

**Regional endpoints**: Bedrock is available in multiple AWS regions. Make sure to use the correct endpoint URL for your region (e.g., `us-east-1`, `us-west-2`, `eu-west-1`). Your API key must be generated in the same region as the endpoint you're using.
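For reference, Bedrock runtime invoke endpoints follow a common pattern. A small sketch of how the URL used in the embedder configuration is assembled (the region and model ID below are examples):

```python
# Bedrock runtime InvokeModel endpoint pattern; region and model ID are examples
region = "us-west-2"
model_id = "amazon.titan-embed-text-v2:0"
url = f"https://bedrock-runtime.{region}.amazonaws.com/model/{model_id}/invoke"
print(url)
```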

**Model availability**: Embedding models are generally available on Bedrock without requiring special access requests.

## Conclusion

By following this guide, you should now have Meilisearch set up with AWS Bedrock embedding models, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.

To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings).