this.cache(...) in preload
At present there's no way to set a cache header for a page. It might be useful to be able to do this:
export default {
  async preload({ params }) {
    this.cache('max-age', 30 * 60 * 1000); // cache for 30 minutes
    await this.fetch(...).then(...);
  }
};
(An interesting question that arises is whether that should change behaviour on the client as well. Should the browser cache the preloaded data for the specified duration and reuse it if the same URL is visited later? This would make some navigations faster but would increase memory usage and introduce a danger of mutation bugs, so perhaps it's better to rely on the service worker.)
Or would it be better to have a more generic this.headers({...}) function?
I'm in favour of having something like this in Sapper. I feel that your suggestion, especially in the more general this.headers({...}) form, is related to #362, specifically to my comment about route-specific middleware for things like caching API responses server-side, which could also be used for setting headers. Another solution to both problems would be to add a middleware function to routes that has access to req and res:
export default {
  async middleware({ req, res }) {
    res.set('Cache-Control', `max-age=${30 * 60}`); // cache for 30 minutes (max-age is in seconds)
    // do other stuff with `req` and `res`
  }
};
The advantage of this approach is that you can do anything that Express or Polka allow you to do, and are not limited to the functions that Sapper exposes on the context object.
The disadvantage is that you forfeit the ability to also do something like caching things client-side, as you propose above with this.cache(...).
I would prefer a server-specific solution like @nsivertsen's myself.
I'd instinctively associate a this.cache() inside preload with the preloaded data only -- meaning no matter what crazy thing I did inside preload, the returned data would be cached locally, where local means either the client or server.
Or would it be better to have a more generic this.headers({...}) function?
This is IMO a better solution, because if I set a cache header on the server, the browser will also cache the page (in addition to any intermediate caching proxies). I will probably also need to set a Vary header. I think Sapper should also provide access to the request object, like Next.js does in getInitialProps. It might look like:
preload({ req, res }) {
  // `req` would only be defined during server-side rendering
  if (req) this.headers(...);
}
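To make that concrete, a server-only call might look something like this (purely a sketch: this.headers is the proposed API, not something Sapper provides today, and the header values are just examples):

export default {
  preload({ req, res }) {
    // only set response headers during server-side rendering
    if (req) {
      this.headers({
        'Cache-Control': `max-age=${30 * 60}`, // 30 minutes (max-age is in seconds)
        'Vary': 'Cookie' // keep authed and anonymous responses from sharing a cache entry
      });
    }
  }
};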
Just ran into this issue. Unfortunately, the caching is breaking my authenticated-only pages. It would be great to have a way to control the caching for a page or, more importantly, to turn it off for specific pages.
After playing with this a bit more, I think this caching breaks apps with authentication entirely.
Any attempt to redirect an unauthenticated user from the server side (in preload) is broken by the 10-minute page cache: if you log in, you still get redirected, and likewise, if you were logged in, you can still see those pages for up to 10 minutes after you log out.
...and any attempt to check auth client-side (in oncreate) means you see the secured content first, before the JavaScript runs. Not only that, but if you try to load some authorised content in your preload, any page will redirect to the Sapper error page first (when the API sends a 401), before your redirect has had time to send the user to login.
This is what I've ascertained so far, anyway; please correct my thinking if I'm wrong.
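For anyone following along, the kind of server-side guard the page cache interferes with looks roughly like this (a minimal sketch; the session-based user check and the login route are assumptions about your app, not part of Sapper itself):

export default {
  preload(page, session) {
    // redirect unauthenticated visitors before the page is rendered
    if (!session.user) {
      return this.redirect(302, 'login');
    }
  }
};

With the ~10-minute page cache, whichever outcome was rendered first (the redirect or the page) keeps being served until the cache expires, which is the behaviour described above.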
Hey, any update on this? As noted by @antony, this is quite a big blocker for any app that requires authentication (or am I just doing it wrong?), since any navigation to a cached HTML page persists the authed or un-authed state until the cache is refreshed. For now I have had to fork and copy @nsivertsen's 'fix': https://github.com/sveltejs/sapper/issues/389#issuecomment-437400332
Could we maybe add a build flag for now that disables HTML caching?
I second this as a feature. In our case, we are constantly releasing new articles, and due to the page cache it takes 10 minutes before new articles show up on the front page when users arrive. My current workaround is to delete the production ReplicaSet in Kubernetes every time a new article is released. That means the site is down for roughly 5 seconds each time, but at least the cache is gone. (PS: I couldn't get the 'fix' referred to above to work properly. That would have been a lot easier.)
@Rich-Harris are you able to shed any further light on this?
Hey, it would be great to have this option. Currently we get content from an API, and when the content changes, the SSR version of the page is outdated until a deploy happens. Is there any way to deal with this besides redeploying?
I'm a bit confused. Why would fetching your data via an API cause the SSR part not to refetch when it is requested? The preload method is called each time the server is hit.
@antony Maybe I'm doing something wrong then. I suspect the fact that I'm not using this.fetch might be the cause: I'm consuming a GraphQL endpoint and have made a custom client for it that uses node-fetch or window.fetch depending on the environment. Could this be the cause? I've ruled out the API cache being outdated (if I navigate to another link and back, I get the fresh version).
What you're doing with fetch seems fine (though I don't know why you need it, since Sapper does that abstraction for you). If you navigate to another link and back, then preload will run on the client and get new data. Given that this is the case, where are you seeing outdated data?
If you're talking about fetching the same SSR page twice and getting the same original preloaded data, then 1) I don't experience this, 2) the default cache in Sapper is only about 5 minutes anyway, so it's unlikely to require a redeploy to refresh, and 3) preload always runs on the client regardless of whether the SSR one runs, which will get you your updated data.
So I guess maybe you have a bigger problem with your hosting set-up, if I'm understanding correctly?
@antony Thanks a lot for your input, you gave me a lot of new things to test out, I'll get back with an update if I manage to rule out those possibilities or find the issue, thanks!
@antony Found the issue: I was using ApolloClient as the GraphQL client, and it has a cache built in whose behaviour gets weird when queries execute on the server; maybe every SSR request was sharing Apollo's in-memory cache. Thank you so much!
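For anyone who hits the same thing: the usual fix is to create a fresh ApolloClient (and therefore a fresh in-memory cache) per server-side render, instead of sharing one module-level instance across requests. A rough sketch, assuming apollo-boost and node-fetch (the endpoint URL and the createClient helper name are made up for illustration):

import ApolloClient from 'apollo-boost';
import nodeFetch from 'node-fetch';

// Build a new client per call so SSR requests don't share Apollo's in-memory cache.
export function createClient() {
  return new ApolloClient({
    uri: 'https://example.com/graphql', // hypothetical endpoint
    fetch: typeof window === 'undefined' ? nodeFetch : window.fetch.bind(window)
  });
}

Calling createClient() inside preload keeps each server-rendered request isolated; in the browser you can still keep a single shared instance if you want Apollo's cache there.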