Datamatics Blog on technologies and innovative solutions

Create a highly accurate business summary using Deep Learning algorithms

Written by Dr. Subhrajyoti Mandal | Nov 13, 2019 3:30:00 PM

Estimated reading time: 3 mins

In complex business functions such as organizational audits, having quick data summaries at your disposal for analysis is a relief. This is especially true when huge heaps of free-text data loom large and must be analyzed before informed action can be taken. Simply adding manpower to expedite the task is unfeasible, as it dramatically increases running business costs. Text Summarization, powered by Deep Learning (Deep Neural Networks, or DNNs) and Natural Language Processing (NLP) and commonly referred to as Deep Learning Text Summarization, is the answer to this business challenge. As subsets of Artificial Intelligence (AI) / Machine Learning (ML), DNNs and NLP help machines read, summarize, and analyze even unstructured data for better decision making.

What is Text Summarization?

Text Summarization is the process of automatically generating a natural-language summary from a lengthy input document while capturing its important points. The summarized text is appended as metadata to the digitized document. Capturing detailed metadata is a very important aspect of Enterprise Data Management: it enables easy and fast search and retrieval of the document and its related information at a later date. Essentially, Deep Learning Text Summarization is a technology framework built to capture and organize a variety of unstructured data from different sources.

The goal of Deep Learning Text Summarization is to retain maximum information with optimum shortening of the text, without altering its meaning. Most summarization systems use ‘extractive approaches’. The ‘abstractive approach’, however, is an emerging alternative that produces a more realistic, human-like summary.

AI algorithms enabled a Tier 1 US Bank in auto-analysis and auto-classification of a high volume of documents to make them searchable and retrievable.
Watch now >>

Different types of approaches – Extractive Summarization & Abstractive Summarization

In Extractive Summarization, systems form summaries by copying parts of the source text, selected through some measure of weightage and importance given to words, and then combining those sentences or sentence fragments into a summary. Here, the importance of a sentence is based on linguistic and statistical features.
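The extractive idea can be sketched in a few lines. This is a minimal illustration, not a production system: raw word frequency stands in for the richer linguistic and statistical features real extractive summarizers use, and the stopword list is a placeholder.

```python
import re
from collections import Counter

# Placeholder stopword list; real systems use much larger linguistic resources.
STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "it"}

def extractive_summary(text, num_sentences=1):
    """Score each sentence by the frequency of its non-stopword words,
    then return the top-scoring sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Re-emit selected sentences in document order, copied verbatim.
    return " ".join(s for s in sentences if s in top)

doc = ("Deep learning models summarize text. "
       "Summarization helps audits. "
       "Deep learning text summarization reads text and writes a short summary.")
print(extractive_summary(doc))
```

Note that the output is always a verbatim copy of source sentences; no new wording is ever produced, which is exactly the limitation the abstractive approach addresses.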

In contrast, Abstractive Summarization attempts to produce a ‘bottom-up summary’: systems generate new phrases, possibly rephrasing or using words that were not in the original text. Naturally, abstractive approaches are harder. To create a good abstractive summary, the model must first truly understand the document and then express that understanding in condensed form, using new words and phrases.

Superiority of Abstractive over Extractive

Extractive Text Summarization is a very old technique, and it is not true summarization. When humans summarize a piece of text, they read it entirely to develop an understanding of the content and then write a summary highlighting the main points. Extractive Summarization, however, relies on word weightages alone.

Abstractive Summarization is a process in which the algorithm builds its own sentences, just as a human would. Deep Learning sequence models have shown more promising results in Abstractive Text Summarization than extractive techniques.

Download whitepaper on "Deep Learning" to understand its use in text summarization of lengthy documents and business audits


How does DNN work?

Traditional rule-based AI is not able to generate new sentences. Recent advancements in DNNs have changed the scenario. DNNs use a ‘sequence-to-sequence’ model to predict new sentences. Long Short-Term Memory (LSTM) is one such DNN architecture used in Abstractive Summarization.

An LSTM network is a recurrent neural network that has LSTM cell blocks in place of standard neural network layers. It feeds back the output of a network layer at ‘time t’ to the input of the same layer at ‘time t + 1’. Recurrent neural networks are “unrolled” programmatically during training and prediction: at each time step a new word is supplied along with the output of the previous cell, so each new word or sequence value is progressively concatenated to the previous output.

After processing, the framework can predict new words, sentences, or phrases. In this manner, the LSTM concept can be used to build an Abstractive Text Summarization framework.

The framework uses an encoder-decoder architecture for summarization, in which the encoder and decoder sub-models are trained jointly. The encoder reads the source document and encodes it into an internal representation. The decoder is a language model responsible for generating each word of the output summary from that encoded representation.
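Structurally, the encoder-decoder split looks like the sketch below. This is purely illustrative: the vocabulary, the random "embeddings", the mean-pooling encoder, and the toy decoder update rule are all stand-ins for parameters that a real system would learn jointly during training (typically with LSTM or similar recurrent layers on both sides).

```python
import random

random.seed(0)
VOCAB = ["<eos>", "audit", "summary", "report", "risk"]
DIM = 4

# Toy embedding table standing in for jointly trained parameters.
emb = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}

def encode(source_tokens):
    """Encoder: compress the source document into one internal vector
    (here a mean of word embeddings; real encoders use recurrent states)."""
    vecs = [emb[t] for t in source_tokens if t in emb]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def decode(context, max_len=5):
    """Decoder as a language model: at each step emit the word whose
    embedding best matches the current state, then feed it back."""
    state, out = context, []
    for _ in range(max_len):
        word = max(VOCAB, key=lambda w: sum(a * b for a, b in zip(state, emb[w])))
        if word == "<eos>":
            break
        out.append(word)
        # Fold the emitted word back into the state (toy update rule).
        state = [(s + e) / 2 for s, e in zip(state, emb[word])]
    return out

print(decode(encode(["audit", "report", "risk", "summary"])))
```

With untrained random embeddings the output words are arbitrary; the point is the flow: source tokens → encoder vector → word-by-word decoder loop with its own output fed back at each step.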

Sanjeet Banerji, EVP & Head - AI & Cognitive Sciences elucidates how AI enables the Digital Transformation journey in the Banking sector.
Watch now >>

In summary

The Text Summarization framework, built on advanced NLP and DNNs, is a very powerful Deep Learning tool. It can generate new, logically summarized sentences or groups of words from a lengthy document, and it applies to many business use cases: streamlining Enterprise Data Management, document summarization, email summarization, customer review summarization, and more. So rest assured: with a framework powered by NLP and DNNs, your next audit assignment will be a breeze.