Overview
To integrate your Data with Boxalino, you need to follow 4 steps:
1. Export JSONL files
Export your data in JSONL for each selected Data Type of the Data Structure.
JSONL = Newline delimited JSON ( https://en.wikipedia.org/wiki/JSON_streaming )
Make sure to name each file starting with the Data Type (e.g.: "doc_product"), followed by the mode ('F': full, 'D': delta) and ending with the Datetime (YYYYMMDDHHMMSS) of when the export started. It is important to use the start time: if a newer delta export starts later but finishes earlier, the files still need to be replayed in the order of their Datetime.
So as a result, file names should look like: "doc_product_F_YYYYMMDDHHMMSS.json"
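The naming convention and the JSONL format can be sketched in a few lines of Python. This is a minimal illustration, not Boxalino tooling; the function names and the sample record are hypothetical, and only the file-name pattern and the one-object-per-line format come from the steps above.

```python
import json
from datetime import datetime

def export_filename(data_type: str, mode: str, started_at: datetime) -> str:
    """Build a name like doc_product_F_YYYYMMDDHHMMSS.json.

    mode is 'F' (full) or 'D' (delta); the timestamp is the moment the
    export STARTED, so overlapping deltas replay in the right order.
    """
    assert mode in ("F", "D")
    return f"{data_type}_{mode}_{started_at.strftime('%Y%m%d%H%M%S')}.json"

def write_jsonl(path: str, records) -> None:
    """Write records as newline-delimited JSON: one object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

For example, a full doc_product export started on 2024-01-31 08:30:00 would be written to `doc_product_F_20240131083000.json`.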
2. Load files to Google Storage
Load your files into a Google Cloud Storage bucket you need to set up in your GCP environment.
More information about how to set up your GCP environment and how to load files into Google Cloud Storage can be found here: Integrate your Data in Boxalino BigQuery Data Science Eco-System
You can set an automated expiration on the files so they don't accumulate over time.
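The upload step can be sketched with the google-cloud-storage client library. This is an assumption-laden sketch, not Boxalino tooling: the function names are hypothetical, the bucket is one you created in your own GCP project, and running the upload requires the google-cloud-storage package plus valid credentials.

```python
import os

def gcs_uri(bucket_name: str, filename: str) -> str:
    """Destination URI of an export file inside the bucket."""
    return f"gs://{bucket_name}/{filename}"

def upload_export(local_path: str, bucket_name: str) -> str:
    """Upload one JSONL export file to a Google Cloud Storage bucket.

    Assumes the google-cloud-storage package is installed and
    application default credentials are configured.
    """
    from google.cloud import storage  # lazy import: only needed for the actual upload

    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob_name = os.path.basename(local_path)
    bucket.blob(blob_name).upload_from_filename(local_path)
    return gcs_uri(bucket_name, blob_name)
```

The automated expiration mentioned above is a property of the bucket, not of the code: it can be configured as an object lifecycle rule in the Cloud Console or with `gsutil lifecycle set`.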
3. Load files to BigQuery tables
Create new tables with the Field Schema provided at the end of each Data Structure (e.g.: here for the doc_product table), one table per file with exactly the same name as the file, in a BigQuery DataSet you need to set up in your GCP environment.
More information about how to set up your GCP environment and how to load files from Google Cloud Storage to BigQuery can be found here: Integrate your Data in Boxalino BigQuery Data Science Eco-System
Make sure to use the full provided Field Schema and not auto-detect from your JSON; otherwise only part of the table structure will be created, and Boxalino processing will fail because it expects the full table schema to be available.
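The load step can be sketched with the google-cloud-bigquery client library. Again a hedged illustration: the function names are hypothetical, `schema_fields` stands in for the Field Schema published with each Data Structure, and running the load requires the google-cloud-bigquery package plus valid credentials. The two points taken from the text are that the table is named after the file and that auto-detect is explicitly disabled.

```python
def table_name_for(filename: str) -> str:
    """BigQuery table name: the file name without its .json extension."""
    return filename[:-5] if filename.endswith(".json") else filename

def load_export(dataset_id: str, filename: str, gcs_uri: str, schema_fields) -> None:
    """Load one JSONL file from Google Cloud Storage into BigQuery.

    Assumes google-cloud-bigquery is installed; schema_fields is a list
    of (name, type) pairs copied from the provided Field Schema.
    """
    from google.cloud import bigquery  # lazy import: only needed for the actual load

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        schema=[bigquery.SchemaField(name, field_type) for name, field_type in schema_fields],
        autodetect=False,  # always use the full provided schema, never auto-detect
    )
    table_id = f"{dataset_id}.{table_name_for(filename)}"
    client.load_table_from_uri(gcs_uri, table_id, job_config=job_config).result()
```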
4. Trigger Boxalino Process
There are different ways to trigger the Boxalino process; please contact us to get the recommendation appropriate for your set-up.