useCache returns a wrong destination with strategy alwaysProvider in a multi-tenant setup
Describe the Bug
useCache returns a wrong destination with strategy alwaysProvider in a multi-tenant setup.
Steps to Reproduce
Please check the attached reproduction steps.
how to reproduce this issue.pdf
Expected Behavior
In a multi-tenant setup, the system should return the correct destination for each tenant; otherwise, data is fetched from the wrong destination.
Screenshots
No response
Used Versions
- Node version via `node -v`: v22.16.0
- NPM version via `npm -v`: 10.9.2
- SAP Cloud SDK version: 4.0.1
- For CAP users, CAP version: 9
Code Examples
No response
Log File
No response
Affected Development Phase
Production
Impact
Impaired
Timeline
No response
Additional Context
No response
Hi @beily524 ,
Thanks for reporting this.
For context, the Cloud SDK offers isolation strategies at different levels.
The default tenant-user strategy uses the user_id (from the user JWT) together with the tenant id (zid or app_tid from the subscriber tenant JWT) and the destination name as parts of the cache key.
From the screenshot you shared, it looks like the isolation strategy fell back to tenant (the automatic fallback when there is no user_id or user JWT), because the key is constructed from only the tenant id and the destination name.
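To illustrate, here is a minimal sketch of how such a cache key could be composed under the two isolation strategies. This is not the SDK's actual implementation; the function name and key layout are made up for illustration only.

```typescript
// Hypothetical sketch of cache-key composition; real SDK internals differ.
function buildCacheKey(
  destinationName: string,
  tenantId: string,
  userId?: string
): string {
  // tenant-user isolation: user id + tenant id + destination name;
  // falls back to tenant isolation when no user id / user JWT is available.
  return userId !== undefined
    ? `${userId}:${tenantId}:${destinationName}`
    : `${tenantId}:${destinationName}`;
}

// Two tenants must never share a key:
console.log(buildCacheKey('MY_DEST', 'tenant-consumer-1'));  // tenant-consumer-1:MY_DEST
console.log(buildCacheKey('MY_DEST', 'tenant-consumer-71')); // tenant-consumer-71:MY_DEST
```

The point of either strategy is that the tenant id is always part of the key, so two consumers can never resolve to the same cached entry.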
What I find odd is that you are trying to create many subscriptions within the same subaccount. This is unusual, as normally there can be only one subscription per subaccount; tenant and subaccount have a 1:1 relation. Maybe you want to change the design and create a dedicated subaccount for each tenant subscription?
Have a look at https://help.sap.com/docs/btp/sap-business-technology-platform/providing-multitenant-applications-to-consumers-in-cloud-foundry-environment?locale=en-US
Hello @ZhongpinWang, thank you very much for your fast reply. I also tried the tenant-user isolation strategy, but it still does not work.
We have the same setup as described in your link: the destination is created in the provider tenant, but all consumer tenants are getting the same destination, namely the one created for the first consumer that calls.
Is it possible to book a Teams call so that I can show you the issue and we can find a solution? The customer has complained a lot about this performance issue. My mail id is [email protected]; thank you!
Hi @beily524 ,
I am not sure the setup is the same. In the linked documentation, there can be only one subscription, i.e. one consumer, per subaccount. In your PDF, I see "tenant consumer-71 (belongs to the same subaccount as consumer-1)". This is not trivial.
In the provider account, the Cloud SDK uses the tenant id to determine which tenant (= subaccount = consumer) is calling. Having multiple consumers in one subaccount is problematic; you need one dedicated subaccount for each subscriber (= consumer = tenant).
Hello @ZhongpinWang, sorry for the late reply; I just got back from vacation.
Apologies again, my previous explanation was not accurate. What I meant is: "tenant consumer-71 has the same provider account as consumer-1". Our system setup is as follows:
There is one provider subaccount where the app is deployed, and the destination is also defined here. There are several consumer accounts (each with a dedicated tenant) that have subscribed to the app. In my previous testing, I used consumer-1 and consumer-71 to access the app, and they belong to different subaccounts.
I was told that the cache functionality used to work well, and this issue only occurred after we upgraded the package @sap-cloud-sdk/connectivity from 3.15.0 to 4.0.1. Maybe your team can try to reproduce this.
In any case, this is a huge data-leak risk: a consumer may get another consumer's destination and thus access data that does not belong to them.
Hi @beily524 ,
Thanks for the clarification. One thing I still don't understand: the Cloud SDK uses zid or app_tid from the subscriber JWT both to set the cache key and to read from the cache. Somehow your consumer-71 is sending a request with consumer-1's subscriber JWT. Can you check the subscriber JWT sent from consumer-71 and what its zid or app_tid is? Is it perhaps consumer-1's tenant ID?
I would guess the cache was actually not used in your project with Cloud SDK v3, as it was not enabled by default unless useCache was explicitly set to true. Starting from v4, the cache is enabled by default.
Here is the call flow that decides the cache key:
-> getDestinationCacheKey()
-> getTenantCacheKey()
-> getTenantId()
-> getJwtForTenant()
I would recommend checking the subscriber token sent from consumer-71 and why it carries consumer-1's zid or app_tid.
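As a quick way to inspect those claims, the payload of a JWT can be decoded without verifying the signature. This is only a debugging sketch (in production the SDK/XSUAA verifies the token); the fabricated token below is an assumption for illustration.

```typescript
// Decode a JWT payload (the middle base64url segment) without verification,
// just to inspect claims such as zid or app_tid.
function decodeJwtPayload(jwt: string): Record<string, unknown> {
  const payloadB64 = jwt.split('.')[1];
  return JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'));
}

// Example with a fabricated token whose payload is {"zid":"tenant-consumer-71"}:
const payload = Buffer.from(
  JSON.stringify({ zid: 'tenant-consumer-71' })
).toString('base64url');
const fakeJwt = `header.${payload}.signature`;

console.log(decodeJwtPayload(fakeJwt).zid); // tenant-consumer-71
```

If the decoded zid/app_tid of consumer-71's token really shows consumer-1's tenant id, the problem lies in the token issuance, not in the cache.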
Hello @ZhongpinWang ,
> I would guess cache was actually not used in your project with Cloud SDK v3 as it was not enabled by default unless useCache was explicitly set to true. Starting from v4, cache is enabled by default.
I think this is the reason why it worked well in v3 but not in v4.
Please check the document in the attachment.
tokens information and debug detail.pdf
Kind Regards, Beily Zhao
Hi @beily524 ,
I went through all the points and I think I now understand the issue.
The caching part works correctly in my opinion, and the tenant isolation strategy is also correct. It actually doesn't matter, because you set the selectionStrategy to alwaysProvider, which always uses the provider account to fetch the destination and to build the cache key.
The actual issue comes from the jwt property when you call getDestinationFromDestinationService(): the subscriber JWT is added to the destination before the destination gets cached. This is a bug and we will fix it.
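To illustrate the failure mode described above, here is a minimal sketch, not the SDK's actual code: if tenant-specific data such as the subscriber JWT is attached to the destination object before it is stored in a cache whose key ignores the subscriber (as alwaysProvider does), every later cache hit returns the first subscriber's token. All names and URLs below are invented for the example.

```typescript
interface Destination {
  url: string;
  jwt?: string; // tenant-specific: must NOT end up in the cached object
}

const cache = new Map<string, Destination>();

// Buggy variant: the subscriber JWT is attached *before* caching.
function getDestinationBuggy(name: string, subscriberJwt: string): Destination {
  const key = `provider:${name}`; // alwaysProvider: key ignores the subscriber
  if (!cache.has(key)) {
    const dest: Destination = { url: 'https://provider.example.com' };
    dest.jwt = subscriberJwt; // bug: tenant-specific data cached
    cache.set(key, dest);
  }
  return cache.get(key)!;
}

const d1 = getDestinationBuggy('MY_DEST', 'jwt-consumer-1');
const d2 = getDestinationBuggy('MY_DEST', 'jwt-consumer-71');
console.log(d2.jwt); // jwt-consumer-1 -- consumer-71 receives consumer-1's token
```

The fix direction implied above (an assumption on my part, not a statement about the SDK's patch) is to attach the subscriber JWT only after the cached, tenant-neutral destination has been retrieved.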
Thanks again for your help!
Hello @ZhongpinWang ,
Thank you very much for your continued support. Please let me know once the issue is resolved. Thank you again!
Kind Regards, Beily Zhao
Hi @beily524 ,
We created a backlog item. Unfortunately, we might not be able to work on it in the near future due to other priorities and limited capacity. Thank you for your understanding.
Best regards, Junjie
Hi @jjtang1985 ,
Thank you a lot for this information.
Kind Regards, Beily Zhao