Pagination with multiple entities


Is there a way to prepare the API for pagination or lazy-load purposes? I am thinking about filtering the DS, but I can’t find a suitable filter similar to the LIMIT, OFFSET, or FETCH clauses in SQL. How would you tackle this task?

There are two parameters built into each entity: _pageNumber and _pageSize

The only restriction on using them is that the data structure must not have multiple root entities.

How to use it: (pagination paragraph)
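A quick sketch of how these built-in paging parameters might be passed to a data endpoint. Only the _pageNumber and _pageSize parameter names come from this thread; the endpoint path and base URL below are made-up examples, not the product’s actual routes:

```typescript
// Build a paginated request URL for a data API endpoint.
// The endpoint path is hypothetical; only _pageNumber and _pageSize
// are the real built-in paging parameters discussed above.
function pagedUrl(base: string, pageNumber: number, pageSize: number): string {
  const params = new URLSearchParams({
    _pageNumber: String(pageNumber),
    _pageSize: String(pageSize),
  });
  return `${base}?${params.toString()}`;
}

// Example: request page 3 with 50 rows per page.
const url = pagedUrl("/api/private/BusinessPartner", 3, 50);
console.log(url); // /api/private/BusinessPartner?_pageNumber=3&_pageSize=50
```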

Thanks for the reply. We found out that the restriction is more severe: the DataStructure has to be a single element without any child entities, e.g. ROOT/Entity/fields and nothing more.

I’ll create a feature request to allow deeper DataStructures.

That is not realistic, and I believe it is also not right. If you need to load more details for a table you paginate, you should use a separate API endpoint for loading the details. In other words – you solve the master-detail pagination on the client side.

The pagination feature’s goal is to load as little data as possible. So it does not (and will not) support loading details together with the master list.

I think it depends on the situation. From my point of view, the developer should decide how much data to fetch in a pagination action. Let’s say I want to make a list of business partners with their roles. One way is to make a lookup field for the role. Another way would be to create a DS as usual and paginate with a deeper DS. If I want to display 50 rows per page, am I supposed to make 50 API calls just to fetch a few fields? The more data I want to include under the master, the more additional work is needed to achieve this.

I don’t know how inefficient it is to join tables in limited SQL queries, though.

But do you want to display all that information at once?

The way SQL is produced and executed is completely incompatible with your request and would require a major rewrite. Plus, you would probably not gain a lot. You can also use array-type parameters so you do not need to iterate over 50 rows – just send 50 ids in one query and you will be fine.
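To illustrate the array-parameter idea: instead of firing one detail request per master row, collect the ids of the current page and send them in a single request. A minimal sketch, assuming a made-up parameter name (Id_in) and payload shape – the real parameter naming depends on your data structure:

```typescript
// Collect the ids of one page of master rows and build a single
// detail request carrying all of them as an array-type parameter.
// "Id_in" and the payload shape are illustrative assumptions.
interface MasterRow { Id: string }

function detailRequestBody(page: MasterRow[]): { parameters: { [k: string]: string[] } } {
  return { parameters: { Id_in: page.map(r => r.Id) } };
}

const page: MasterRow[] = [{ Id: "a" }, { Id: "b" }, { Id: "c" }];
const body = detailRequestBody(page);
// One request carrying all three ids, instead of three separate requests.
```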

Basically, what you want is quite a performance optimization. Is your project that demanding? Then I would suggest considering things like a NoSQL database, where you can store multidimensional data with pagination support. Or storing multiple values inside a single record in the database (e.g. a JSON string). That way you would get what you need – pagination using a single call with multiple details. But it all comes at the price of extensive work on your side. Nevertheless, these would be the right ways to achieve your goal.

It is actually a very simple task. I am making a table of business partners with additional information from child entities – just a few extra fields. But you are right about array parameters; I wasn’t aware that it’s not as easily doable as it seemed. I will use pagination to get the master data and then populate it with details in a second call.
I think it is overkill to implement a new database and synchronize it with Origam as the main database system, or to store the data as described, just to achieve this niche thing. What do you think about getting those few fields into the master entity with lookup fields? That way I will have the data I want, I will be able to paginate it, and I won’t need to change things a lot.

What about using a view with comma-separated values? Like here.

Or the same but by using a custom database function for that matter.

Another option would also be to have an XSLT transformation on your Data Page and use a LookupList function to read the details.

XSLT transformation seems to be optimal for my needs. Thanks for discussion and help Tomas :slight_smile:

Oh, and it reminds me how I did it in Evero – I am using a WorkflowMethod there instead of a filterset to load my full data structure.

A workflow method is a way to load data via a workflow; it can be used anywhere a filterset could be used.

In the first step of the workflow I load the list with a simple data structure returning only ids, controlled by pagination, and then (in the second step) I load another – full – data structure by ids (taken via XPath from the previous step’s result). It’s really simple and elegant (without the need to use a transformation, which is hard to transform into typed JSON).
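The two-step pattern above can be simulated in a few lines. This is an in-memory sketch only – the data, names, and functions are illustrative stand-ins for the two workflow steps (a paginated id-only load, then a full load filtered by those ids):

```typescript
// In-memory sketch of the two-step workflow-method pattern:
// step 1 pages over a lightweight id-only list, step 2 loads the
// full records for exactly those ids. All names are illustrative.
interface FullRow { Id: number; Name: string; Details: string[] }

const all: FullRow[] = Array.from({ length: 10 }, (_, i) => ({
  Id: i + 1,
  Name: `Partner ${i + 1}`,
  Details: [`detail of ${i + 1}`],
}));

// Step 1: the paginated "ids only" data structure.
function loadIdPage(pageNumber: number, pageSize: number): number[] {
  const start = (pageNumber - 1) * pageSize;
  return all.slice(start, start + pageSize).map(r => r.Id);
}

// Step 2: the full data structure, filtered by the ids from step 1
// (the equivalent of extracting the ids via XPath and passing them on).
function loadFullByIds(ids: number[]): FullRow[] {
  const byId = new Map(all.map((r): [number, FullRow] => [r.Id, r]));
  return ids.map(id => byId.get(id)!);
}

const ids = loadIdPage(2, 3);     // page 2 of 3 rows: ids 4, 5, 6
const rows = loadFullByIds(ids);  // full rows, in page order
```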


Yesterday I was asked to solve pagination for another service, and this time I had the following situation:

main entity (ME)

  • detail subentity (DS)
  • subentity to filter (SF)
  • inner joined grouped subentity with aggregated sort column (SS)

The problem was that, while splitting it into list and detail data structures, I realized I had to keep the sort subentity (SS) in both data structures (in the list as well as in the detail, since I need to sort the details as well as the list).

Then Tomáš found the following solution:

  1. Keep DS_Complete
  2. Create DS_List, containing only the Main Entity (ME) with an id column, plus SF (the filter subentity) and SS (the sort subentity)
  3. Create DS_Detail, containing ME with full fields and DS (detail subentity)
  4. In a workflow (called as a ‘workflow method’), first load DS_List and merge the result into DS_Complete
  5. Then load the details with a list-of-ids parameter and merge the result into DS_Complete again. Merging into the already filled DS_Complete (which so far contains only the ids from the previous step) is the trick that preserves the order without a need to replicate SF into DS_Detail. It spares the database and makes the list service work better.

Note: In order to be able to fill the first, id-only result into DS_Complete, I had to set ‘DisableConstraints’ to true on DS_Complete.
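To show why merging into the pre-filled DS_Complete preserves the order: the sorted, paginated id list fixes the row order, and the detail rows (which may come back in any order) are then merged in by id. A small sketch with illustrative names and data:

```typescript
// Sketch of the order-preserving merge: the id list produced by the
// sorted, paginated DS_List defines the row order; detail rows are
// merged in by id afterwards. All names and data are illustrative.
interface Detail { Id: string; Name?: string }

function mergeIntoOrderedList(orderedIds: string[], details: Detail[]): Detail[] {
  const byId = new Map(details.map((d): [string, Detail] => [d.Id, d]));
  // Keep the order established by the pre-filled id list.
  return orderedIds.map(id => ({ Id: id, ...byId.get(id) }));
}

const orderedIds = ["b", "a", "c"];  // sorted by SS in the list step
const details: Detail[] = [          // detail step returns them unsorted
  { Id: "a", Name: "Alpha" },
  { Id: "c", Name: "Gamma" },
  { Id: "b", Name: "Beta" },
];
const merged = mergeIntoOrderedList(orderedIds, details);
// merged keeps the b, a, c order regardless of how the details arrived
```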