Overview
Retrieval-Augmented Generation (RAG) workflows empower AI systems to provide highly accurate and contextual responses by retrieving relevant data before generating an answer. In this guide, we explore the setup and integration of PostgreSQL as a Vector Database (VectorDB) with Amazon Bedrock Knowledge Base to enable scalable and efficient RAG workflows.
Introduction
PostgreSQL as VectorDB
Amazon Aurora PostgreSQL, with its scalability and rich feature set, serves as an excellent VectorDB. The pgvector extension adds vector storage, indexing, and similarity search, and integrates seamlessly with Amazon Bedrock Knowledge Bases.
Use Case: This integration enhances foundational models’ capabilities, enabling them to generate more accurate and context-rich responses by retrieving relevant data stored in PostgreSQL.
Prerequisites
- Aurora PostgreSQL Versions: Ensure you use PostgreSQL version 12.16 or higher.
- pgvector Extension: Version 0.5.0+ is required for vector searches.
- AWS Secrets Manager: Store the database credentials for the integration user securely in AWS Secrets Manager.
- Amazon Bedrock Access: Enable Amazon Bedrock to connect with the Knowledge Base.
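As a sketch of the Secrets Manager step, the snippet below stores the database credentials in a JSON payload; the secret name, host, and field values are placeholders for illustration, and `build_db_secret`/`store_db_secret` are helper names invented here, not part of any SDK.

```python
import json


def build_db_secret(username, password, host, port, dbname):
    # Credential payload as a JSON string; Bedrock reads at least the
    # username and password fields from the secret.
    return json.dumps({
        "username": username,
        "password": password,
        "host": host,
        "port": port,
        "dbname": dbname,
    })


def store_db_secret(secret_name, payload):
    import boto3  # deferred so the payload helper above has no AWS dependency
    client = boto3.client("secretsmanager")
    response = client.create_secret(Name=secret_name, SecretString=payload)
    return response["ARN"]
```

Note the returned ARN: it is the Secrets Manager ARN the Knowledge Base setup asks for later.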
PostgreSQL Setup
- Create Amazon Aurora PostgreSQL Cluster
- Use the AWS Management Console to create a PostgreSQL cluster.
- Enable the Amazon RDS Data API and make a note of the DB Cluster ARN.
- Install and Verify pgvector
Run the following SQL commands to install and verify the pgvector extension:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
SELECT extversion FROM pg_extension WHERE extname = 'vector';
```
- Configure Schema and Roles
Set up a dedicated schema and assign roles:
```sql
CREATE SCHEMA bedrock_integration;
CREATE ROLE bedrock_user WITH PASSWORD 'your_password' LOGIN;
GRANT ALL ON SCHEMA bedrock_integration TO bedrock_user;
```
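Because the RDS Data API was enabled on the cluster, SQL like the above can also be run without a direct database connection. A minimal sketch, assuming your cluster ARN and secret ARN are at hand (`build_statement` and `run_sql` are illustrative helper names):

```python
def build_statement(resource_arn, secret_arn, database, sql):
    # kwargs for the rds-data ExecuteStatement call
    return {
        "resourceArn": resource_arn,
        "secretArn": secret_arn,
        "database": database,
        "sql": sql,
    }


def run_sql(resource_arn, secret_arn, database, sql):
    import boto3  # deferred so build_statement stays dependency-free
    client = boto3.client("rds-data")
    return client.execute_statement(
        **build_statement(resource_arn, secret_arn, database, sql)
    )
```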
Vector Table Setup
- Table Definition: Create a table to store vector embeddings, metadata, and text data:
```sql
CREATE TABLE bedrock_integration.bedrock_kb (
    id uuid PRIMARY KEY,
    embedding vector(1024),
    chunks text,
    metadata json
);
```
- Embedding Dimension: Ensure the dimension matches the model, e.g., 1024 for Amazon Titan v2.
- Metadata: Store additional contextual information in JSON format.
- Vector Search Index: Optimize vector search using the HNSW index:
```sql
CREATE INDEX ON bedrock_integration.bedrock_kb
USING hnsw (embedding vector_cosine_ops);
```
- pgvector 0.6.0+ additionally supports parallel index builds, and you can raise ef_construction (default 64) to trade a longer build for better recall:
```sql
CREATE INDEX ON bedrock_integration.bedrock_kb
USING hnsw (embedding vector_cosine_ops)
WITH (ef_construction = 256);
```
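If you ever populate the table outside a Bedrock ingestion job, you need embeddings of the same width as the vector column. A sketch using the Amazon Titan Text Embeddings v2 model, where the dimensions field matches vector(1024); `titan_v2_request` and `embed` are illustrative helper names:

```python
import json


def titan_v2_request(text, dimensions=1024):
    # Request body for amazon.titan-embed-text-v2:0; dimensions should
    # match the width of the vector column (1024 here).
    return json.dumps({"inputText": text, "dimensions": dimensions})


def embed(text):
    import boto3  # deferred so the request helper stays offline-testable
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=titan_v2_request(text),
    )
    return json.loads(response["body"].read())["embedding"]
```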
Data and Metadata Preparation for Amazon Bedrock
Data Ingestion Steps
- Chunk your text data into smaller files and associate each chunk with metadata.
- Example Metadata:
```json
{
  "metadataAttributes": {
    "Name": "Sample Recipe",
    "TotalTimeInMinutes": "25",
    "CholesterolContent": "0",
    "SugarContent": "5"
  }
}
```
Upload Data to Amazon S3
Use Python and Boto3 to upload your data:
```python
import os

import boto3

s3_client = boto3.client('s3')


def upload_directory(path, bucket_name):
    for root, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(root, file)
            # Preserve the directory structure in the S3 key instead of
            # flattening every file to its bare name, which would cause
            # collisions between identically named files.
            key = os.path.relpath(full_path, path)
            s3_client.upload_file(full_path, bucket_name, key)
```
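When a Knowledge Base ingests from S3, per-file metadata travels in a sidecar object named `<source-file>.metadata.json` placed alongside the source file. A small helper pair sketched under that convention (`metadata_key_for` and `write_metadata` are names invented here):

```python
import json


def metadata_key_for(source_key):
    # Bedrock pairs a source object with "<name>.metadata.json"
    return source_key + ".metadata.json"


def write_metadata(path, attributes):
    # Wrap the attributes in the metadataAttributes envelope shown above
    with open(metadata_key_for(path), "w") as f:
        json.dump({"metadataAttributes": attributes}, f)
```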
Amazon Bedrock Integration
Knowledge Base Setup
Configure Amazon Bedrock to use the PostgreSQL vector table by providing the following:
- Aurora DB Cluster ARN
- Secrets Manager ARN
- Database and Table Names
- Index Field Mapping
Field Mapping Details
- Vector Field Name: The column for storing embeddings.
- Text Field Name: The column for storing raw text chunks.
- Metadata Field Name: The column for storing metadata.
- Primary Key: Specify the primary key column.
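If you create the Knowledge Base programmatically rather than in the console, the same mapping appears in the storage configuration passed to the bedrock-agent client's create_knowledge_base call. A sketch of that structure with the names used in this guide (`rds_storage_configuration` is an illustrative helper, and the database name is assumed to be postgres):

```python
def rds_storage_configuration(cluster_arn, secret_arn):
    return {
        "type": "RDS",
        "rdsConfiguration": {
            "resourceArn": cluster_arn,
            "credentialsSecretArn": secret_arn,
            "databaseName": "postgres",
            "tableName": "bedrock_integration.bedrock_kb",
            "fieldMapping": {
                "primaryKeyField": "id",
                "vectorField": "embedding",
                "textField": "chunks",
                "metadataField": "metadata",
            },
        },
    }
```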
Retrieval-Augmented Generation (RAG)
Metadata Filtering
Improve retrieval accuracy by applying metadata constraints:
```python
import boto3

bedrock_agent_client = boto3.client('bedrock-agent-runtime')


def retrieve(query, kb_id, number_of_results=5):
    return bedrock_agent_client.retrieve(
        retrievalQuery={'text': query},
        knowledgeBaseId=kb_id,
        retrievalConfiguration={
            'vectorSearchConfiguration': {
                'numberOfResults': number_of_results,
                'filter': {
                    'andAll': [
                        {'lessThan': {'key': 'CholesterolContent', 'value': 10}},
                        {'lessThan': {'key': 'TotalTimeInMinutes', 'value': 30}}
                    ]
                }
            }
        }
    )
```
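The Retrieve response carries a retrievalResults list, each entry holding the matched text and a relevance score. A small, offline-testable helper for flattening it (`summarize_results` is illustrative, not part of the SDK):

```python
def summarize_results(response):
    # Flatten a Retrieve response into (score, text) pairs
    return [
        (result.get("score"), result["content"]["text"])
        for result in response.get("retrievalResults", [])
    ]
```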
Retrieve and Generate Responses
Combine retrieval with generation for context-rich answers:
```python
prompt = """
Human: You have great knowledge about food, so provide answers to questions by using facts.
If you don't know the answer, just say that you don't know; don't try to make up an answer.

Assistant:"""


def retrieve_and_generate(query, kb_id, model_id, number_of_results=10):
    return bedrock_agent_client.retrieve_and_generate(
        input={'text': query},
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': kb_id,
                'modelArn': model_id,
                'generationConfiguration': {
                    'promptTemplate': {
                        'textPromptTemplate': f"{prompt} $search_results$"
                    }
                },
                'retrievalConfiguration': {
                    'vectorSearchConfiguration': {
                        'numberOfResults': number_of_results,
                        'filter': {
                            'andAll': [
                                {'lessThan': {'key': 'CholesterolContent', 'value': 10}},
                                {'lessThan': {'key': 'TotalTimeInMinutes', 'value': 30}}
                            ]
                        }
                    }
                }
            }
        }
    )
```
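The RetrieveAndGenerate response places the generated answer under output.text, with the supporting chunks listed as citations. A helper for unpacking it (`answer_and_citations` is an illustrative name, not part of the SDK):

```python
def answer_and_citations(response):
    # Pull the generated text plus the retrieved references backing it
    answer = response["output"]["text"]
    references = [
        ref
        for citation in response.get("citations", [])
        for ref in citation.get("retrievedReferences", [])
    ]
    return answer, references
```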
Benefits of Metadata Filtering
- Accuracy: Ensures retrieved results meet specific constraints.
- Efficiency: Reduces token costs by focusing on relevant data.
- Applications: Useful for chatbots, search engines, and recommendation systems.
Conclusion
This setup is ideal for use cases like AI-driven chatbots, personalized recommendations, and advanced search engines.
Drop a query if you have any questions regarding PostgreSQL and we will get back to you quickly.
About CloudThat
CloudThat is an award-winning company and the first in India to offer cloud training and consulting services worldwide. As a Microsoft Solutions Partner, AWS Advanced Tier Training Partner, and Google Cloud Platform Partner, CloudThat has empowered over 850,000 professionals through 600+ cloud certifications, winning global recognition for its training excellence, including 20 MCT Trainers in Microsoft’s Global Top 100 and an impressive 12 awards in the last 8 years. CloudThat specializes in Cloud Migration, Data Platforms, DevOps, IoT, and cutting-edge technologies like Gen AI & AI/ML. It has delivered over 500 consulting projects for 250+ organizations in 30+ countries as it continues to empower professionals and enterprises to thrive in the digital-first world.
FAQs
1. Why choose Aurora PostgreSQL with pgvector over other VectorDBs?
ANS: – Aurora PostgreSQL combines familiarity with SQL with vector embedding capabilities via pgvector. It’s cost-effective, highly available, and integrates seamlessly with Amazon Bedrock.
2. How does metadata filtering improve RAG workflows?
ANS: – By applying constraints (e.g., TotalTimeInMinutes < 30), metadata filtering ensures retrieved results are contextually relevant, optimizing foundational model performance and reducing irrelevant token usage.

WRITTEN BY Shantanu Singh
Shantanu Singh is a Research Associate at CloudThat with expertise in Data Analytics and Generative AI applications. Driven by a passion for technology, he has chosen data science as his career path and is committed to continuous learning. Shantanu enjoys exploring emerging technologies to enhance both his technical knowledge and interpersonal skills. His dedication to work, eagerness to embrace new advancements, and love for innovation make him a valuable asset to any team.