
Simplify Text using AWS Mphasis DeepInsights Text Summarizer – Part 2



In Part 1, we saw what the Mphasis DeepInsights Text Summarizer is and its applications in real-world scenarios. Now we will implement its algorithm with the steps below:

To run the Text Summarizer algorithm, we need access to the following AWS resources:

  • Access to AWS SageMaker and the model package.
  • An S3 bucket to specify input/output.
  • A role for AWS SageMaker to access input/output from S3.

Implementation of Text Summarizer Algorithm

Usage Information

Usage Methodology for the algorithm:

  • Input should have a ‘.txt’ extension with ‘utf-8’ encoding.
  • Note: the model will fail if the ‘.txt’ file is not ‘utf-8’ encoded.
  • To ensure that the input data is UTF-8 encoded, use ‘Save As’ with the encoding set to ‘UTF-8’.
  • The input can have a maximum of 512 words (the SageMaker limit).
  • The input should contain a minimum of 3 sentences (a model restriction).
  • Supported content types: text/plain.
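These constraints can be checked locally before anything is uploaded. A minimal sketch (the file name and the crude sentence splitter are illustrative, not part of the product):

```python
def validate_input(path):
    """Check the summarizer's input constraints on a local '.txt' file."""
    if not path.endswith(".txt"):
        raise ValueError("input must have a '.txt' extension")
    with open(path, "rb") as f:
        raw = f.read()
    try:
        text = raw.decode("utf-8")  # the model requires UTF-8 encoding
    except UnicodeDecodeError:
        raise ValueError("input file is not UTF-8 encoded")
    if len(text.split()) > 512:
        raise ValueError("input exceeds the 512-word SageMaker limit")
    # a crude sentence count; the model needs at least 3 sentences
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    if len(sentences) < 3:
        raise ValueError("input must contain at least 3 sentences")
    return True
```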


Invoking Endpoint


Set up the Environment

  • Update the boto3 client and the AWS SDK.
  • After initializing the API in AWS SageMaker, the cells that follow set up the client to invoke the launched API.
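Updating the SDKs amounts to a pip upgrade of the AWS SDK for Python and the SageMaker Python SDK:

```shell
# Upgrade the AWS SDK for Python (boto3) and the SageMaker Python SDK
python3 -m pip install --upgrade boto3 botocore sagemaker
```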

Private Beta Setup

The private beta is limited to the us-east-2 region, so the client we are setting up is hard-coded to the us-east-2 endpoint.

Sample input data
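For example, a small UTF-8 ‘.txt’ file that satisfies the constraints above (the text itself is illustrative):

```python
# A small UTF-8 '.txt' file that meets the model's constraints:
# at least 3 sentences and no more than 512 words.
sample_text = (
    "Cloud computing lets organizations rent compute instead of buying it. "
    "Text summarization condenses long documents into their key points. "
    "Together they make large-scale document processing practical."
)

with open("sample-input.txt", "w", encoding="utf-8") as f:
    f.write(sample_text)
```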




Create the session

The session remembers our connection parameters to SageMaker. We will use it to perform all of our SageMaker operations.

Create Model

Now we use the model package to create a deployable model.

Input File

Now we pull a sample input file to test the model.
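A sketch of staging the input file in S3; the bucket and key names are illustrative, and the upload itself requires AWS credentials:

```python
# Illustrative bucket/key; replace with your own S3 bucket.
bucket = "my-summarizer-bucket"
key = "summarizer/input/sample-input.txt"
input_s3_uri = f"s3://{bucket}/{key}"

def upload_input(local_path="sample-input.txt"):
    """Upload the UTF-8 input file to S3 (needs AWS credentials at call time)."""
    import boto3  # deferred import: only required when actually uploading
    boto3.client("s3", region_name="us-east-2").upload_file(local_path, bucket, key)
```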

Batch Transform Job

Now let’s use the model we created to run a batch inference job and verify that it works.
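A sketch of the batch transform step, assuming `model` is the ModelPackage created earlier; the instance type and output path are illustrative:

```python
def run_batch_transform(model, input_s3_uri, bucket):
    """Run a batch inference job against the packaged model.

    `model` is the sagemaker ModelPackage created earlier; the
    instance type is illustrative -- pick one your subscription supports.
    """
    transformer = model.transformer(
        instance_count=1,
        instance_type="ml.m5.large",
        output_path=f"s3://{bucket}/summarizer/output/",
    )
    # The summarizer accepts only plain text input.
    transformer.transform(data=input_s3_uri, content_type="text/plain")
    transformer.wait()  # block until the job finishes
    return transformer.output_path
```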

Output from Batch Transform

Note: the boto3 package must be installed on the local system.



Output file loaded from the bucket
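Batch transform names each output object after the input object with “.out” appended, so loading the result is a single S3 read. A sketch (bucket and key layout are illustrative):

```python
def output_key_for(input_key, output_prefix="summarizer/output/"):
    # Batch transform names each output object after the input
    # object, with ".out" appended.
    return output_prefix + input_key.rsplit("/", 1)[-1] + ".out"

def load_summary(bucket, input_key):
    """Fetch the batch-transform output from S3 (needs AWS credentials)."""
    import boto3  # deferred import: only required when actually fetching
    obj = boto3.client("s3", region_name="us-east-2").get_object(
        Bucket=bucket, Key=output_key_for(input_key)
    )
    return obj["Body"].read().decode("utf-8")
```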



Invoking through Endpoint

This is another way of deploying the model, which provides results as real-time inference. Here’s a sample endpoint for reference.
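A sketch of real-time invocation, assuming the model has already been deployed to an endpoint (the endpoint name is whatever you chose at deploy time, e.g. via `model.deploy(initial_instance_count=1, instance_type="ml.m5.large", endpoint_name=...)`):

```python
def summarize_realtime(endpoint_name, text):
    """Invoke a deployed SageMaker endpoint for real-time inference."""
    import boto3  # deferred import: needs AWS credentials at call time
    runtime = boto3.client("sagemaker-runtime", region_name="us-east-2")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/plain",       # the only supported content type
        Body=text.encode("utf-8"),      # input must be UTF-8 text
    )
    return response["Body"].read().decode("utf-8")
```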




Thus, we have seen how to extract useful information from long text using the Mphasis DeepInsights Text Summarizer on AWS. The intention is to produce a coherent and fluent summary containing only the main points of the document. Furthermore, applying text summarization reduces reading time, accelerates the process of searching for information, and increases the amount of information that can fit in a given space. The basic idea is to count the frequency of the words occurring in the text, assume that the most frequently occurring words (above an occurrence threshold) are important, and summarize the text based on that.


About CloudThat

CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training Partner and a Microsoft Gold Partner, helping people develop knowledge of the cloud and helping businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.

Drop a query if you have any questions regarding Amazon Text Summarizer and I will get back to you quickly.

To get started, go through our Consultancy page and Managed Services Package, CloudThat’s offerings.


1. Which algorithm is used in text summarization?

ANS: – Text summarization using the frequency method. In this method, we find the frequency of all the words in our text data and store the words and their frequencies in a dictionary. We then tokenize the text into sentences. The sentences that contain more high-frequency words are kept in the final summary.
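The frequency method described above can be sketched in a few lines of plain Python (the sentence splitter and scoring are a simple illustration, not the product’s implementation):

```python
import re
from collections import Counter

def frequency_summarize(text, num_sentences=2):
    """Extractive summary: keep the sentences richest in frequent words."""
    # Tokenize the text into sentences (split after ., !, or ?).
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Build the word -> frequency dictionary.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # A sentence scores higher when it contains more high-frequency words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Keep the top-scoring sentences, preserving their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return " ".join(s for s in sentences if s in ranked)
```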


2. How is automatic summarization of text helpful?

ANS: – Automatic text summarization is an exciting research area with several applications in industry. By condensing large amounts of information into a short form, summarization can aid many downstream applications, such as creating news digests, report generation, news summarization, and headline generation. Summarization is the task of compressing text into a shorter version, reducing the size of the source text while preserving important elements of the information and the meaning of the content. Since manual text summarization is time-consuming and often tedious, automated approaches are gaining popularity, providing a strong impetus for academic research. Text summarization has important uses in various NLP-related tasks, such as text classification, question answering, legal text summarization, news summarization, and headline production. Furthermore, these systems can integrate summary creation as an intermediate step, which helps reduce document length. In the era of big data, the amount of text data from various sources is exploding. This text is an invaluable source of information that must be effectively summarized to be useful. The increase in document availability calls for extensive research in the field of NLP on automatic text summarization. Automatic text summarization means creating concise and fluent summaries without human intervention while preserving the meaning of the original text document. This is very challenging because, as humans, when we summarize a text, we usually read it in its entirety to deepen our understanding and then write a summary of its key points. Since computers lack human knowledge and language skills, automatic text summarization is difficult and non-trivial. Various machine learning-based models have been proposed for this task. Most of these methods model the problem as classification, deciding whether a sentence should be included in the summary. Other methods use topic information, Latent Semantic Analysis (LSA), sequence-to-sequence models, reinforcement learning, and adversarial procedures.

WRITTEN BY Neetika Gupta

Neetika Gupta works as a Senior Research Associate at CloudThat and has experience deploying multiple data science projects on multiple cloud frameworks. She has deployed end-to-end AI applications for business requirements on cloud frameworks like AWS, Azure, and GCP, and has deployed scalable applications using CI/CD pipelines.


