In most cases, imported data relates to a certain time scope (e.g. October stock prices or the 2008 economic growth rate). That time scope is tracked so you always have a proper overview of what is there and what is missing. Incoming data is analyzed with sophisticated statistical methods to derive its structure and data types, minimizing the effort of manual adjustments.
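To make the idea of deriving structure from incoming data concrete, here is a minimal, purely illustrative sketch (not REPODS internals): a column's type is guessed from sample values by trying parsers from most to least specific.

```python
# Illustrative type inference: try to parse every sample value as the
# most specific type first, then fall back step by step to plain text.
# The function name and type labels are hypothetical, not REPODS API.
from datetime import datetime

def infer_type(values):
    """Guess a column type from string samples: integer, float, date, or text."""
    def all_parse(parser):
        try:
            for v in values:
                parser(v)
            return True
        except ValueError:
            return False

    if all_parse(int):
        return "integer"
    if all_parse(float):
        return "float"
    if all_parse(lambda v: datetime.strptime(v, "%Y-%m-%d")):
        return "date"
    return "text"
```

A real importer would additionally sample only a subset of rows and handle nulls and locale-specific formats, but the fallback chain is the core idea.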
If you have worked with data, you know that loading and transforming it is one of the most time-consuming jobs when mining for information. A file is delivered too late, corrections and new data arrive in the same file, or data content overlaps. Our mELT process ensures that imports of this kind are always correctly merged with the existing stock. This is possible because in REPODS the target table has knowledge of the objects it contains. When a customer file is loaded twice, the mELT process detects which customers are already present in the target table and makes no change the second time. If the second file contains a few corrections, only those customers are updated and put under proper version control by mELT.
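The behavior described above is essentially an idempotent merge. The following sketch shows the principle with illustrative names (it is not the actual mELT implementation): unchanged rows from a reloaded file are skipped, and changed rows get an incremented version counter.

```python
# Idempotent merge sketch: target maps a key to its current versioned row.
# Reloading the same file is a no-op; corrections bump the version.
def merge(target, incoming, key="customer_id"):
    """Merge incoming rows into target; returns (inserted, updated, skipped)."""
    inserted = updated = skipped = 0
    for row in incoming:
        current = target.get(row[key])
        if current is None:
            target[row[key]] = {"version": 1, "data": row}
            inserted += 1
        elif current["data"] == row:
            skipped += 1  # identical reload: leave the row untouched
        else:
            target[row[key]] = {"version": current["version"] + 1, "data": row}
            updated += 1
    return inserted, updated, skipped
```

Loading a file twice thus reports every row as skipped on the second pass, while a correction file touches only the affected customers.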
Relations between tables are an essential construct for describing information in larger contexts. The truly valuable insights are most often generated by combining many interconnected subjects into one analysis. Well-designed relations between your table entities are a key ingredient for later being able to drill and slice through the data. There is no need to adhere to a particular modelling scheme such as Star Schema or Snowflake. Our reporting engine can drill through an arbitrary ER model.
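Drilling through an arbitrary ER model amounts to treating the relations as a graph and searching for a join path between tables. A hedged sketch of that idea (table names and structure are invented for illustration):

```python
# Given relations as an adjacency map, find the shortest chain of joins
# between two tables with breadth-first search. No star/snowflake layout
# is assumed; any connected ER graph works.
from collections import deque

def join_path(relations, start, goal):
    """Return the shortest table-to-table path through relations, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in relations.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical model: orders link customers and products; customers link regions.
relations = {
    "orders": ["customers", "products"],
    "customers": ["orders", "regions"],
    "products": ["orders"],
    "regions": ["customers"],
}
```

Given such a path, a reporting engine can emit the corresponding chain of joins automatically, which is why no fixed modelling scheme is required.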
To create a report, you navigate the Data Model along the defined relations and pick attributes for your report along the way. A suitable chart is derived from the report result to give you a first visual impression. Specific, fine-tuned visualizations can be created in our infographics environment.
With a workbook-style editor you can develop custom queries on your data model using the full power of the PostgreSQL 10 SELECT statement. Inspect query results in place and document your work with Markdown alongside the code.
The Control Panel gives you a live view of current and past processes that have modified data in your Data Pod. You can set up a Flow Manager that automatically resolves dependencies between processes and loads data packages fully automatically, from import all the way into a report. Here you can also control our custom versioning mechanism: you can keep data states fully live-recoverable while avoiding version overkill.
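Resolving dependencies between processes, as a flow manager does, can be modeled as a topological sort: each process runs only after everything it depends on has finished. A small illustrative sketch (the pipeline names are hypothetical):

```python
# Dependency resolution sketch using the standard library (Python 3.9+):
# map each process to the set of processes it depends on, then compute
# an execution order in which all prerequisites come first.
from graphlib import TopologicalSorter

dependencies = {
    "import_raw": set(),
    "merge_customers": {"import_raw"},
    "build_report": {"merge_customers"},
}

order = list(TopologicalSorter(dependencies).static_order())
```

For this chain the only valid order is import, then merge, then report; a real flow manager would additionally run independent branches in parallel.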