Cartago®Live Data Manager

Digital data management
Standardised. Structured. Automated.

Data inputs, processes and outputs

The Cartago®Live Data Manager is the lifeblood of successful communication with your customers. The service fulfils all the requirements associated with data processing, including data collection, validation, conversion, filtering and reporting. The data can be used for further processing, printing or archiving. As processes become more and more automated, having these outputs in structured and machine-readable data formats has become increasingly important.

Automated and standardised data management

Automated customer communication presents considerable challenges when it comes to the secure and efficient handling of data. The Cartago®Live Data Manager consolidates the separate solutions that have historically been distributed across organisations into a single software architecture within which the data is synchronised. This standardised data management system enables the data manager to support the transmission of structured data via electronic data interchange (EDI), significantly raising efficiency and unlocking further savings potential. Although Cartago offers a standardised solution, it still allows for a certain degree of personalisation, ensuring that your precise needs are met quickly and efficiently.

Data collection (data warehouse)

Data collection is the process of connecting your organisation's existing systems to the Cartago solution. The Cartago®Live Data Manager can integrate data from a wide variety of sources in many different ways. Depending on the requirements, the data can be transmitted directly from the existing system (push) or requested at a specific time (pull). If necessary, the data can also be reloaded during data processing. Cartago offers preconfigured data collection interfaces for a variety of CRM, ERP and other sector-specific systems, including ones used in insurance and personnel management. Following minor project-specific adjustments, these interfaces can be up and running within a short time frame.
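The push and pull modes described above can be sketched as follows. This is a minimal illustration only; the `Collector` class and its methods are hypothetical and not part of the actual Cartago API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Collector:
    """Hypothetical sketch: collects records pushed by a source system
    or pulled from it on demand."""
    buffer: list = field(default_factory=list)

    def receive_push(self, records: list) -> None:
        # Push mode: the source system transmits data directly.
        self.buffer.extend(records)

    def pull(self, fetch: Callable[[], list]) -> None:
        # Pull mode: data is requested from the source at a specific time.
        self.buffer.extend(fetch())

collector = Collector()
collector.receive_push([{"id": 1, "name": "Alice"}])   # e.g. pushed from a CRM
collector.pull(lambda: [{"id": 2, "name": "Bob"}])     # e.g. scheduled pull from an ERP
print(len(collector.buffer))  # 2
```

In practice the `fetch` callable would wrap a preconfigured interface to the source system rather than an in-memory stub.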

Data validation (analysis/quality assurance)

All data inputs are checked with respect to their syntax and content, and a record of the validation process is kept. Depending on the settings, subsequent workflows are either paused or the affected records are highlighted to be reloaded later.

Data processing

The next step is to process the data. This is done in a single-level or, where appropriate, multi-level process, either individually or as data packets. The data is ready for further processing as soon as it has been converted into the designated structure. Data can also be filtered, transformed, cleansed or consolidated.
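A multi-level process of this kind can be sketched as a filter, transform and consolidate pipeline. The record structure below is an assumption chosen for demonstration only.

```python
# Invented example records; the structure is illustrative.
records = [
    {"customer": "A", "amount": "10.00", "status": "open"},
    {"customer": "A", "amount": "5.50",  "status": "open"},
    {"customer": "B", "amount": "7.25",  "status": "cancelled"},
]

# Level 1 - filter: drop cancelled records.
active = [r for r in records if r["status"] != "cancelled"]

# Level 2 - transform: convert amounts into the designated numeric structure.
for r in active:
    r["amount"] = float(r["amount"])

# Level 3 - consolidate: aggregate totals per customer.
totals: dict[str, float] = {}
for r in active:
    totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]

print(totals)  # {'A': 15.5}
```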

Optional: Data visualisation (reports & reporting)

After data processing, or even data output, you can choose to have the data displayed visually. Visual elements are great for representing information such as utilisation profiles or frequencies, and they also provide a convenient way of keeping track of your data. The module can, for instance, generate a load profile for the hardware you are using in the cloud (licensing): online retailers gain an accessible way to see when most of their orders are placed, helping them to know when they need to scale up their capacities to cope with increased traffic. Having the right technologies at your disposal at exactly the right time improves efficiency and achieves cost savings. After all, you only want to pay for the services you are actually using.
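The load-profile idea can be sketched by aggregating order timestamps per hour, so that peak times, and hence scaling needs, become visible. The timestamps are invented for the example.

```python
from collections import Counter
from datetime import datetime

# Invented order timestamps for illustration.
orders = [
    "2024-05-01T09:15:00", "2024-05-01T09:40:00",
    "2024-05-01T18:05:00", "2024-05-01T18:30:00", "2024-05-01T18:55:00",
]

# Count orders per hour of day to form a simple load profile.
profile = Counter(datetime.fromisoformat(ts).hour for ts in orders)
peak_hour, peak_count = profile.most_common(1)[0]
print(peak_hour, peak_count)  # 18 3
```

A visualisation layer would then render this profile as a chart; the aggregation itself is the part that informs capacity decisions.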

Data outputs

The outputs from the system are the inputs for document generation. An automated process issues the data in a system-independent output format. Invoicing data, for example, is issued in the ZUGFeRD XML data format, the current standard for electronic invoices in German-speaking countries.
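To give a flavour of structured invoice output, the sketch below generates a simplified XML fragment. Real ZUGFeRD invoices follow the UN/CEFACT Cross Industry Invoice schema and are embedded in a PDF/A-3 file; the element names here are deliberately simplified and illustrative.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative invoice structure; NOT the real ZUGFeRD/CII schema.
invoice = ET.Element("Invoice")
ET.SubElement(invoice, "ID").text = "2024-0815"
ET.SubElement(invoice, "IssueDate").text = "2024-05-01"
total = ET.SubElement(invoice, "GrandTotal", currency="EUR")
total.text = "119.00"

xml_bytes = ET.tostring(invoice, encoding="utf-8")
print(xml_bytes.decode())
```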

Data provision (repository)

The repository is the data warehouse within which the data is physically held. This central data repository may hold anything from a company's General Terms and Conditions of Business through product images and other imagery in different resolutions to videos and PDFs. Authorised persons or third-party systems can access the repository and utilise the data at any time. Cartago's repository offers a data storage system based on a service-oriented architecture (SOA) with a web-based interface (thin client).

Technical information

Cartago services

The data repository is part of our basic service. We also offer our customers a range of services and processes that can be tailored to their precise needs.

User interface

As a process-driven system, our data manager does not have a standard user interface.

Data output

The majority of the data is used for the purpose of document generation. The output formats are therefore typically PDF, PCL, ZPL and HTML. As processes become more and more automated, having these outputs in the form of structured and machine-readable data has become increasingly important. For example, electronic invoices are issued with embedded data in the ZUGFeRD XML data format, the current standard for electronic invoices in German-speaking countries.

Data collection

Data can be collected via a service-oriented approach (REST or SOAP), via hot folders, automatically in synchronous or asynchronous mode, or individually via customised workflows.
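The hot-folder approach can be sketched as a watched directory whose files are picked up and ingested on each polling pass. The directory layout, file format and function name below are assumptions for illustration.

```python
import json
import tempfile
from pathlib import Path

def process_hot_folder(folder: Path) -> list:
    """Hypothetical sketch: ingest every JSON file dropped into the folder,
    then remove each file after successful ingestion."""
    records = []
    for path in sorted(folder.glob("*.json")):
        records.append(json.loads(path.read_text()))
        path.unlink()  # remove after successful ingestion
    return records

# Demonstration with a temporary directory standing in for the hot folder.
with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)
    (folder / "order1.json").write_text('{"id": 1}')
    (folder / "order2.json").write_text('{"id": 2}')
    print(len(process_hot_folder(folder)))  # 2 (both files ingested)
    print(len(process_hot_folder(folder)))  # 0 (folder is now empty)
```

A production variant would typically poll on a schedule or react to file-system events rather than being called manually.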