Load Request

At this point, the following steps should already have been completed by the Data Integration (DI) team:

  1. Create the JSONL files https://boxalino.atlassian.net/wiki/spaces/BPKB/pages/252149803/Data+Integration#1.-Export-JSONL-files

The load request is made for every document JSONL (following the required Data Structure) needed by your Data Integration process, once the files have been generated.

For a product data sync, make sure the following tables have been prepared and are provided: doc_product, doc_attributes, doc_attribute_values and doc_languages

Requirements

The following requirements must be met if you want to save the JSONL (#1) and load the JSONL (#2) in your custom project.

If you are happy to use the Boxalino service for content storage & processing, you can ignore the requirements list.

The load request will:

  1. save your JSONL (for every data type required by the data sync type) in a Google Cloud Storage bucket from the Boxalino project

    1. for Instant Updates: the files have a retention policy of 1 day

    2. for Delta Updates: the files have a retention policy of 7 days

    3. for Full Updates: the files have a retention policy of 30 days

  2. load every JSONL into a BigQuery table in the Boxalino project

Boxalino will provide access to the data Storage Buckets and BigQuery datasets that store your data.

BigQuery Datasets

Using different datasets for different data integration processes (delta, full, instant) allows you to set a default table expiration. By doing so, you follow the BigQuery storage optimization best practices described in the documentation.

1. Naming

The BigQuery dataset in which your account documents are loaded should be named after the data index and the process it is meant to synchronize to:

  1. if the request is for the dev data index - <client>_dev_<mode>

  2. if the request is for the production data index - <client>_<mode>

where:

  • <client> is the account name provided by Boxalino.

  • <mode> is the process: F (for full), D (for delta) or I (for instant)

For example, for an account boxalino_client, the following datasets must exist (depending on your integration use cases):

  • for dev index: boxalino_client_dev_I, boxalino_client_dev_F, boxalino_client_dev_D

  • for production data index: boxalino_client_I, boxalino_client_F, boxalino_client_D

The above datasets must exist in your project.
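
For illustration only, here is a minimal sketch of the naming convention above (the buildDatasetName helper is hypothetical and not part of the data-integration-doc-php library):

```php
<?php
// Hypothetical helper illustrating the dataset naming convention above.
// $client is the account name provided by Boxalino, $mode is F / D / I,
// $dev marks a request for the dev data index.
function buildDatasetName(string $client, string $mode, bool $dev = false): string
{
    return $dev
        ? sprintf('%s_dev_%s', $client, $mode)   // ex: boxalino_client_dev_F
        : sprintf('%s_%s', $client, $mode);      // ex: boxalino_client_F
}

echo buildDatasetName('boxalino_client', 'F', true); // boxalino_client_dev_F
```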

2. Location

Upon the creation of your dataset, please use the Data Location: EU

3. Permissions

In order for the Data Integration Service Account (DISA) 55483703770-compute@developer.gserviceaccount.com to have read & write access to your private dataset, please grant it the following permissions:

  1. BigQuery Data Viewer

  2. BigQuery Metadata Viewer

  3. BigQuery Data Editor / Owner

  4. BigQuery Job User

or, alternatively, BigQuery Admin on the created datasets.

Storage Bucket

1. Naming

Your custom project must contain the storage buckets in which step #1 can be performed (loading your generated JSONL documents).

Follow the Google documentation on how to create storage buckets.

The storage bucket names must be unique within the scope of the integration. Please use the following naming formula:

  1. <your-custom-project>_<your-boxalino-account>_dev

  2. <your-custom-project>_<your-boxalino-account>

2. Location

For content storage, please use either Multi-region (eu) or Region (europe-west1).

3. Permissions

In order for the Data Integration Service Account (DISA) 55483703770-compute@developer.gserviceaccount.com to have load & read access to your private Google Cloud Storage buckets, please grant it the following permissions:

  1. Storage Object Creator

  2. Storage Object Admin

The above permissions can be replaced with the Storage Admin role.

Request Definition

Endpoint

  • full data sync: https://boxalino-di-full-krceabfwya-ew.a.run.app

  • delta data sync: https://boxalino-di-delta-krceabfwya-ew.a.run.app

  • instant-update data sync: https://boxalino-di-instant-update-krceabfwya-ew.a.run.app

  • stage / testing: https://boxalino-di-stage-krceabfwya-ew.a.run.app

Action

/load

Method

POST

Body

the document JSONL

Headers

  • Authorization: Basic base64<<DATASYNC API key : DATASYNC API Secret>>

  • Content-Type: application/json

  • project: (optional) the project name where the documents are to be stored

  • dataset: (optional) the dataset in which the doc_X tables must be stored; if not provided, the service will check the <index>_<mode> dataset in the Boxalino project, to which you will have access

  • bucket: (optional) the storage bucket where the doc_X will be loaded; if not provided, the service will use the Boxalino project

  • doc: the document type (as described in Data Structure)

  • client: your Boxalino account name

  • dev: only add it if the dev data index must be updated

  • mode: D for delta, I for instant update, F for full; technical: the constants from the GcpClientInterface https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/GcpRequestInterface.php#L18

  • tm: time, in the format YmdHis; requirement: the same tm value must be used from the beginning of the DI process until the end, for all files; technical: used to identify the version of the documents and create the content

  • ts: timestamp, millisecond-based UNIX timestamp; technical: calculated from the time the data has been exported to BQ; the update requests will be applied in version ts order; https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/Flow/DiRequestTrait.php#L140

  • type: the integration type (product, order, etc.); https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/GcpRequestInterface.php#L22

  • chunk: (optional) for loading content by chunks; see https://boxalino.atlassian.net/wiki/spaces/BPKB/pages/415432770/Load+Request#Load-By-Chunks

A LOAD REQUEST code-sample is available in the data-integration-doc-php library: https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/Flow/LoadTrait.php
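
As a complement to the library sample above, here is a minimal raw-HTTP sketch of a load request in PHP (cURL). It assumes a full data sync of the doc_language document; the endpoint and header names follow the table above, while the credentials, file name and commented-out header values are placeholders to adapt to your own setup:

```php
<?php
// Minimal sketch of a POST /load request (full data sync, doc_language document).
// Credentials and file paths are placeholders; header names follow the table above.
$apiKey    = 'YOUR_DATASYNC_API_KEY';
$apiSecret = 'YOUR_DATASYNC_API_SECRET';
$tm        = date('YmdHis');                           // same tm for the whole DI process
$ts        = (string) (int) (microtime(true) * 1000);  // millisecond-based UNIX timestamp
$jsonl     = file_get_contents('doc_language.jsonl');  // the document JSONL (request body)

$ch = curl_init('https://boxalino-di-full-krceabfwya-ew.a.run.app/load');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $jsonl,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Basic ' . base64_encode($apiKey . ':' . $apiSecret),
        'Content-Type: application/json',
        'client: boxalino_client',       // your Boxalino account name
        'type: product',                 // integration type
        'doc: doc_language',             // document type
        'mode: F',                       // F (full), D (delta), I (instant)
        'tm: ' . $tm,
        'ts: ' . $ts,
        // 'dev: 1',                     // only if the dev data index must be updated
        // 'dataset: ...',               // optional: your own BigQuery dataset
        // 'bucket: ...',                // optional: your own GCS bucket
    ],
]);

$response = curl_exec($ch);
curl_close($ch);
echo $response;
```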

Optionally, if you prefer to use an advanced REST client, you can configure it as follows:

Advanced REST Client SYNC request structure

Integration Strategy

  • doc_attribute, doc_attribute_value, doc_language can be synchronized with a call to the load endpoint

  • doc_product / doc_order / doc_user must be loaded in batches / via public GCS link

    • GCP has a size limit of 256MB for POST requests

    • we recommend avoiding it by receiving a public GCS load URL (steps below)

Load By Chunks

In order to upload the content in chunks, the following requests are required (a minimal sketch follows the steps below):

  1. make an HTTP POST call to the /load/chunk endpoint.

    1. The HTTP headers must include a new property - chunk.

      1. the chunk value (numeric or textual) can be the order of the batch, the page, or the SEEK strategy used for content segmentation, etc.

    2. This endpoint returns a public URL, which is used to load the content: https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/Flow/GcsLoadUrlTrait.php

      1. the received link is generated by GCS and is unique for each loaded segment.

  2. with the response from step #1, load the document content (PUT)

    1. https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/Flow/LoadByChunkTrait.php#L24

  3. repeat step #1 + step #2 (in a loop) until all of your product/order/customer content has been uploaded

    1. the chunk value is updated (as part of the iteration)

    2. the same tm value must be used

  4. make an HTTP POST call to the /load/bq endpoint

    1. It will inform BigQuery to load the stored GCS content into your dataset

    2. https://github.com/boxalino/data-integration-doc-php/blob/3.0.0/src/Service/Flow/LoadBqTrait.php
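
Below is a minimal sketch of the four steps above in PHP (cURL). It assumes a full data sync of doc_product, that the /load/chunk response body is the public GCS URL itself, and that pre-generated JSONL batch files stand in for your own content segmentation; the credentials, file names and loadBatch() helper are hypothetical and should be adapted to your integration:

```php
<?php
// Minimal sketch of the Load-By-Chunks flow (full data sync, doc_product).
// Credentials, loadBatch() and the assumption that /load/chunk returns the raw
// public GCS URL are placeholders/assumptions to adapt to your own setup.

function request(string $url, string $method, array $headers, string $body = ''): string
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => $method,
        CURLOPT_POSTFIELDS     => $body,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => $headers,
    ]);
    $response = (string) curl_exec($ch);
    curl_close($ch);
    return $response;
}

// Hypothetical content segmentation: one pre-generated JSONL file per chunk.
function loadBatch(int $chunk): string
{
    $file = sprintf('doc_product_%d.jsonl', $chunk);
    return is_file($file) ? file_get_contents($file) : '';
}

$endpoint = 'https://boxalino-di-full-krceabfwya-ew.a.run.app';
$tm       = date('YmdHis');                            // same tm value for all chunks
$headers  = [
    'Authorization: Basic ' . base64_encode('API_KEY:API_SECRET'),
    'Content-Type: application/json',
    'client: boxalino_client',
    'type: product',
    'doc: doc_product',
    'mode: F',
    'tm: ' . $tm,
    'ts: ' . (string) (int) (microtime(true) * 1000),
];

$chunk = 1;
while (($jsonl = loadBatch($chunk)) !== '') {
    // step 1: request a public GCS load URL for this chunk
    $gcsUrl = request($endpoint . '/load/chunk', 'POST', array_merge($headers, ['chunk: ' . $chunk]));

    // step 2: PUT the document content to the received GCS URL
    request($gcsUrl, 'PUT', ['Content-Type: application/json'], $jsonl);

    // step 3: repeat with an updated chunk value and the same tm
    $chunk++;
}

// step 4: inform BigQuery to load the stored GCS content into your dataset
request($endpoint . '/load/bq', 'POST', $headers);
```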

