The SUMMA Platform is a tool for aggregating and analysing news items of various kinds (text, audio, video). The Platform consists of multiple natural language processing (NLP) modules, including Automatic Speech Recognition (ASR), Machine Translation (MT), Named Entity Linking (NEL), a Knowledge Data Base (KDB), event clustering, topic detection, sentiment detection and story-line summarisation. Each module is developed independently by a dedicated team. The goal of the Baseline Architecture is to provide maximum independence, so that each team is free to choose whatever technologies are most appropriate, on the condition that each module honours the API contract.
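The shared API contract can be illustrated with a minimal sketch. The field names and message shape below are illustrative assumptions, not the actual SUMMA schema: the idea is simply that each module consumes a news-item document, adds its own annotation layer, and passes everything else through untouched, so modules stay independent of each other's internals.

```python
import json

def transcribe(audio_url: str) -> str:
    # Placeholder for the module's internal technology choice: any ASR
    # engine can be used, as long as the message contract is honoured.
    return f"<transcript of {audio_url}>"

def asr_worker(message: str) -> str:
    """Hypothetical ASR worker honouring a shared message contract:
    parse a news-item JSON document, add this module's annotation
    layer, and leave all other fields unchanged."""
    item = json.loads(message)
    # The module writes only its own key; untouched fields pass through,
    # so downstream modules never depend on this implementation.
    item["transcript"] = transcribe(item["audio_url"])
    return json.dumps(item)

enriched = asr_worker('{"id": "item-1", "audio_url": "http://example/clip.wav"}')
```

The pass-through design is what allows teams to swap technologies freely: any replacement module that reads and writes the same fields is a drop-in substitute.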
It also provides the User eXperience (UX) interface with the relevant visualisations for news data enriched by the above-mentioned NLP modules.
An example of the SUMMA integrated platform UI is presented below, showing Deutsche Welle content processed through the platform. It shows highlights, video playback, the transcript in the original language and in English (obtained through automated transcription and translation), and an automatically generated summary. It also lists topical keywords and highlights names in the text.
The platform offers a fully automated monitoring system, ingesting content via API. After ingestion, it automatically transcribes all audio from video, turning speech into text, and translates all text (original articles as well as transcribed speech) into English. From this it builds a cross-lingual overview of the content: clustering related items into stories, summarising both stories and individual items, and adding topical keywords, named entities and sentiment analysis. The platform offers entity as well as full-text search and different visualisations, including a list view, tile view and heat-map view.
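The ingestion-to-overview flow described above can be sketched as a linear pipeline. All function names and fields here are illustrative stubs under assumed names, not the platform's actual API; in SUMMA the stages run as independent services rather than one function.

```python
def process_item(item: dict) -> dict:
    """Hypothetical end-to-end flow for one ingested news item.
    Each step mirrors a stage described above; every value below is
    a stub standing in for a real NLP module's output."""
    if item.get("media_url"):                      # audio/video items only
        item["transcript"] = "<ASR output>"        # speech-to-text
    source_text = item.get("transcript") or item["text"]
    item["english_text"] = source_text             # MT into English (stub)
    item["keywords"] = ["politics"]                # topic detection (stub)
    item["entities"] = ["Deutsche Welle"]          # entity tagging (stub)
    item["sentiment"] = "neutral"                  # sentiment analysis (stub)
    item["summary"] = item["english_text"][:80]    # summarisation (stub)
    return item

result = process_item({"id": "a1", "text": "Example article body."})
```

Clustering into stories would then group items by similarity of their `english_text`, which is why translation into a single pivot language precedes the cross-lingual steps.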
The platform supports the three major use cases foreseen, i.e. external monitoring, internal monitoring and data journalism; each use case is described below.
We have built a common monitoring platform that is robust and flexible, catering for different applications, target groups and supporting modules. The platform is Docker-based, making it possible to add or replace components smoothly; this flexibility has been a central design objective. The prototype currently processes nine languages (English, German, Spanish, Portuguese, Arabic, Russian, Farsi, Latvian and Ukrainian).
A common system and UI were built to support the different use cases, ensuring maximum flexibility, shared resources and consistency. Dashboards were developed on top of that integrated platform, allowing for diversification and customisation in terms of visualisation and preferences. This resulted in several smaller prototypes, built by the user partners, that supplement the integrated system.
The SUMMA Platform architecture has three core goals:
- Integrate NLP tools into the common pipeline for both batch and stream processing modes
- Provide UX interfaces based on user requirements and use cases
- Ensure BigData scalability (ability to process 200–400 live streams)
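The scalability goal above comes down to many identical workers consuming from a shared stream: throughput grows by adding workers, not by changing the modules. A minimal sketch with Python's standard library follows; the worker count and the `.upper()` stand-in for an NLP stage are arbitrary assumptions for illustration.

```python
import queue
import threading

def run_workers(items, num_workers=4):
    """Process a stream of items with a pool of identical workers --
    the pattern that lets throughput scale with worker count."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = work.get()
            if item is None:          # poison pill: shut this worker down
                break
            processed = item.upper()  # stand-in for an NLP processing stage
            with lock:
                results.append(processed)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for item in items:
        work.put(item)
    for _ in threads:                 # one poison pill per worker
        work.put(None)
    for t in threads:
        t.join()
    return results

processed = run_workers([f"stream-{i}" for i in range(8)])
```

In a deployed system the in-process queue would be replaced by a message broker shared across Docker containers, so that worker replicas can be scaled independently of the rest of the pipeline.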
The SUMMA process flow is captured in the diagram below, indicating the content flow from user input to user output, with SUMMA processing modules in between.
SUMMA encompasses 10 technology components:
- ASR: Speech recognition
- Meta: Metadata extraction from broadcast media
- MT: Machine translation
- CT: Streaming implementation of storyline clustering and topic detection
- ETL: Entity Tagging & Linking
- KB: Knowledge Base Construction
- FC: Forecasting and Fact Checking
- SP: Story-level semantic parsing
- SH: Story highlight generation/summarisation