Overview
AWS re:Invent, an annual global conference by Amazon Web Services, provides a central gathering point for the cloud computing community. The event includes keynotes, sessions, workshops, and networking opportunities, serving as a platform for showcasing new AWS services and updates.
Day 2 of AWS re:Invent 2023 was marked by excitement, featuring insightful sessions and notable releases. As we move to Day 3, there is heightened anticipation for the unveiling of numerous new features and services. Let us dive into the events and innovations of Day 3 at AWS re:Invent.
Introduction
Dr. Swami Sivasubramanian, Vice President of Databases, Analytics, and Machine Learning at AWS, takes center stage in a dynamic session that unveils the advancements in databases, analytics, Generative AI, and machine learning to boost builder productivity. With a keen focus on real-world applications, customer speakers share compelling instances of harnessing data and generative AI to elevate business operations and craft innovative customer experiences.
Dr. Swami underscores the evolving interplay between humans, data, and AI, highlighting generative AI as a transformative force that enhances productivity and fuels creativity. The session goes deep into the strategic use of enterprise data and human intelligence, illustrating how these elements can be synergized to create distinctive generative AI applications, ultimately accelerating productivity across various organizational domains.
New Releases!!!
- Amazon Bedrock Anthropic – With Claude 2.1 now available in Amazon Bedrock, you can easily develop enterprise-ready Generative Artificial Intelligence (AI) applications using Anthropic’s more honest and reliable AI systems. You can access the Claude 2.1 model from Anthropic within the Amazon Bedrock console; a minimal API invocation sketch, which also applies to the other Bedrock text models in this list, follows the list below.
- Meta Llama 2 on Amazon Bedrock – Amazon Bedrock is now the first public cloud service to provide a fully managed API for Meta’s Llama 2 Chat 13B large language model (LLM). This milestone enables organizations of all sizes to seamlessly access Llama 2 Chat models on Amazon Bedrock without the hassle of managing the underlying infrastructure, marking a significant leap in accessibility.
- Amazon Titan Multimodal Embeddings – Amazon Titan Multimodal Embeddings enhances the creation of precise and contextually relevant multimodal search and recommendation experiences. This capability allows users to input text, an image, or a combination of both, and the model transforms them into embeddings that capture semantic meaning and relationships within the data, supporting inputs of up to 128 tokens (see the embeddings sketch after this list).
- Amazon Titan Text Lite – Titan Text Lite has a maximum context length of 4,096 tokens and is a price-performant version ideal for English-language tasks. The model is highly customizable and can be fine-tuned for tasks such as article summarization and copywriting.
- Amazon Titan Text Express – Titan Text Express has a maximum context length of 8,192 tokens and is ideal for a wide range of tasks, such as open-ended text generation and conversational chat. It is also supported within Retrieval Augmented Generation (RAG) workflows.
- Amazon Titan Image Generator – Amazon Titan Image Generator accelerates the efficient creation and refinement of images through English natural language prompts. This model benefits companies in advertising, e-commerce, media, and entertainment, enabling the cost-effective generation of studio-quality, realistic images at scale. Its capability to comprehend complex prompts involving multiple objects ensures the production of relevant images. Trained on diverse, high-quality data, the model delivers accurate outputs, including realistic images with inclusive attributes and minimal distortions (a text-to-image call sketch follows this list).
- Amazon SageMaker HyperPod – Amazon SageMaker HyperPod accelerates foundation model (FM) training by offering dedicated infrastructure for efficient distributed training at scale. With this tool, you can train FMs for extended periods, as Amazon SageMaker continuously monitors cluster health. It ensures automated node and job resiliency by replacing faulty nodes and resuming training from checkpoints (a cluster provisioning sketch follows this list).
- Vector engine for OpenSearch Serverless – Vector Engine for Amazon OpenSearch Serverless simplifies the creation of modern machine learning (ML) augmented search experiences and generative artificial intelligence (generative AI) applications. This capability offers a straightforward, scalable, and high-performing similarity search without the need to manage the underlying vector database infrastructure (a collection creation sketch follows this list).
- Vector Search Amazon DocumentDB – Vector search for Amazon DocumentDB (with MongoDB compatibility) is a new built-in capability that lets you store, index, and search millions of vectors with millisecond response times within your document database.
- Amazon Neptune Analytics – Amazon Neptune Analytics is a swift analytics database engine designed for data scientists and application developers to accelerate the analysis of substantial graph data. This tool enables rapid dataset loading from Amazon Neptune or Amazon S3 data lakes, facilitates near real-time analysis tasks, and allows graphs to be removed efficiently when no longer needed.
- Amazon OpenSearch Service Zero-ETL integration with Amazon S3 – Amazon OpenSearch Service zero-ETL integration with Amazon S3 is a new way to query operational logs in Amazon S3 and S3-based data lakes without switching between services. You can now analyze infrequently queried data in cloud object stores and simultaneously use the operational analytics and visualization capabilities of OpenSearch Service.
- AWS Clean Rooms ML – AWS Clean Rooms ML (preview) is a new capability of AWS Clean Rooms that helps you and your partners apply machine learning (ML) models on your collective data without copying or sharing raw data. With this new capability, you can generate predictive insights using ML models while continuing to protect your sensitive data.
- Amazon Q generative SQL in Amazon Redshift – Amazon Q generative SQL in Amazon Redshift Query Editor generates SQL recommendations from natural language prompts. This helps you be more productive in extracting insights from your data.
- Model Evaluation on Amazon Bedrock – Amazon Bedrock provides the flexibility of automatic and human evaluations. For predefined metrics like accuracy and robustness, automatic evaluation is available. For subjective or custom metrics such as friendliness and alignment to brand voice, easily set up human evaluation workflows. These evaluation tools are crucial for generative artificial intelligence (AI) applications at all development stages.
- PartyRock – PartyRock is an engaging and user-friendly generative AI app-building playground from AWS. You can create diverse apps to experiment with generative AI with just a few steps. Build an app to generate jokes, curate personalized playlists, recommend recipes from pantry ingredients, optimize party budgets, or even craft an AI storyteller for your fantasy role-playing campaign.
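To make the Bedrock model announcements above concrete, here is a minimal invocation sketch using the boto3 `bedrock-runtime` client. The region, model ID (`anthropic.claude-v2:1` for Claude 2.1), prompt, and inference parameters are assumptions for illustration; your account must have access to the model enabled in the Bedrock console.

```python
import json
import boto3

# Runtime client for invoking foundation models hosted on Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "\n\nHuman: Summarize the key announcements from re:Invent Day 3.\n\nAssistant:"

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",   # Claude 2.1 (assumed model ID)
    body=json.dumps({
        "prompt": prompt,              # Claude's text-completion request schema
        "max_tokens_to_sample": 300,
        "temperature": 0.5,
    }),
)

result = json.loads(response["body"].read())
print(result["completion"])

# The other newly announced models use the same invoke_model call but different
# request/response schemas, e.g. "meta.llama2-13b-chat-v1" (Llama 2 Chat 13B)
# or "amazon.titan-text-express-v1" / "amazon.titan-text-lite-v1" (Titan Text).
```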
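For Titan Multimodal Embeddings, a sketch along the following lines can turn a text query, an image, or both into a single vector. The model ID, request fields, and file name are assumptions; the resulting vector would typically be indexed in a vector store for search or recommendations.

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Encode a local product image; the file name is a placeholder.
with open("product.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = bedrock.invoke_model(
    modelId="amazon.titan-embed-image-v1",   # Titan Multimodal Embeddings (assumed ID)
    body=json.dumps({
        "inputText": "red leather handbag",  # text and image can be sent together or alone
        "inputImage": image_b64,
    }),
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # vector to index for multimodal search and recommendations
```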
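Similarly, a text-to-image call to Amazon Titan Image Generator might look like the sketch below. The model ID, request schema, prompt, and output dimensions are assumptions for illustration.

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-image-generator-v1",   # assumed model ID
    body=json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": "a studio photo of a blue ceramic mug on a wooden table"},
        "imageGenerationConfig": {"numberOfImages": 1, "height": 1024, "width": 1024},
    }),
)

images = json.loads(response["body"].read())["images"]  # base64-encoded images
with open("mug.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```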
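For Amazon SageMaker HyperPod, cluster provisioning is driven by an API call along these lines. This is only a sketch under assumptions: the cluster name, IAM role ARN, instance type and count, and the S3 path to lifecycle scripts are all placeholders, and networking and storage configuration are omitted.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Provision a small HyperPod cluster with one worker instance group.
sm.create_cluster(
    ClusterName="fm-training-cluster",
    InstanceGroups=[
        {
            "InstanceGroupName": "worker-group",
            "InstanceType": "ml.g5.12xlarge",
            "InstanceCount": 2,
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodExecutionRole",
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://my-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
        }
    ],
)
# SageMaker monitors node health and can replace faulty nodes; long-running
# training jobs resume from checkpoints you persist (e.g., to Amazon S3 or FSx).
```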
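Finally, the vector engine for OpenSearch Serverless is exposed by creating a collection of type VECTORSEARCH. The sketch below assumes the required encryption, network, and data access policies already exist for the collection name, which is a placeholder.

```python
import boto3

# Control-plane client for Amazon OpenSearch Serverless.
aoss = boto3.client("opensearchserverless", region_name="us-east-1")

collection = aoss.create_collection(
    name="product-embeddings",   # placeholder collection name
    type="VECTORSEARCH",         # provisions the vector engine
    description="Similarity search over product embeddings",
)
print(collection["createCollectionDetail"]["status"])
```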
Customer Speakers
Nhung Ho, Vice President of AI, Intuit – Nhung leads AI teams for QuickBooks, TurboTax, and Customer Success at Intuit, overseeing applied science units developing AI-driven products for small businesses and consumers. With a Ph.D. in astrophysics from Yale, she has played a pivotal role in transforming AI from a niche field to a central element of Intuit’s strategy, focusing on automation, natural language systems, and enhancing customer experiences, notably in call center demand forecasting and accounting automation.
Aravind Srinivas, Co-Founder and CEO, Perplexity – Aravind Srinivas, a tech-savvy entrepreneur and visionary leader, holds a Ph.D. in computer science from Berkeley, specializing in artificial intelligence and machine learning. His relentless pursuit of innovation has significantly impacted these fields. As a thought leader beyond his role at Perplexity, Aravind frequently shares his insights at conferences and industry events, earning recognition in leading publications.
Rob Francis, SVP & Chief Technology Officer, Booking.com – As SVP and CTO, Rob Francis spearheads Booking.com’s global technology strategy, ensuring seamless end-to-end travel experiences. Leveraging his expertise from high-performing organizations in e-commerce and consumer technology, Rob focuses on legacy modernization, engineering, software development, and tech infrastructure evolution to advance Booking.com’s vision of empowering travelers to explore the world.
Conclusion
In this dynamic Keynote Session led by Dr. Swami Sivasubramanian, attendees were immersed in the forefront of technological innovation. Dr. Swami unveiled the latest breakthroughs in databases, analytics, generative AI, and machine learning, showcasing their potential to elevate builder productivity. The event provided a unique platform for customer speakers to spotlight real-world success stories, demonstrating the strategic fusion of data and generative AI for operational enhancement and innovative customer experiences. Dr. Swami’s emphasis on the evolving interplay between humans, data, and AI underscored generative AI as a transformative force, boosting productivity and enhancing creativity. The session’s deep dive into leveraging enterprise data and human intelligence showcased the potential for building distinctive generative AI applications, promising accelerated productivity across diverse organizational domains.
Stay tuned for more updates on the CloudThat page.
About CloudThat
CloudThat is a leading provider of Cloud Training and Consulting services with a global presence in India, the USA, Asia, Europe, and Africa. Specializing in AWS, Microsoft Azure, GCP, VMware, Databricks, and more, the company serves mid-market and enterprise clients, offering comprehensive expertise in Cloud Migration, Data Platforms, DevOps, IoT, AI/ML, and more.
CloudThat is recognized as a top-tier partner with AWS and Microsoft, including the prestigious ‘Think Big’ partner award from AWS and the Microsoft Superstars FY 2023 award in Asia & India. Having trained 650k+ professionals in 500+ cloud certifications and completed 300+ consulting projects globally, CloudThat is an official AWS Advanced Consulting Partner, AWS Training Partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, Amazon QuickSight Service Delivery Partner, Amazon EKS Service Delivery Partner, Microsoft Gold Partner, AWS Microsoft Workload Partners, Amazon EC2 Service Delivery Partner, and many more.
To get started, go through CloudThat’s offerings on our Consultancy page and Managed Services Package.
WRITTEN BY Anusha R
Anusha R is a Research Associate at CloudThat. She is interested in learning advanced technologies and gaining insights into new and upcoming cloud services, and she is continuously seeking to expand her expertise in the field. Anusha is passionate about writing tech blogs, leveraging her knowledge to share valuable insights with the community. In her free time, she enjoys learning new languages to broaden her skill set and finds relaxation in exploring music and new genres.