Handling 3rd party services with serverless hosting
Problem
Our serverless hosting setup does not behave exactly like our dev environments, and we have run into these issues:
- logging system not reporting errors to sentry.io
- The discussion of how to handle our new Discord bot integration here https://github.com/garageScript/c0d3-app/pull/949#discussion_r667349580, which would run into the same issues.
- That discussion also brought up that the email signup issues experienced with the old provider were probably serverless related as well.
After realizing there are a lot of quirks, I think it is best to handle the issue out in the open with a solid design doc on our best practices moving forward.
The root of the problem: once the lambda calls res.send(...) and the response goes out, any unresolved promises (fetches, database writes, etc.) get killed (or really frozen? I still don't understand all the lambda details). If something must be done, we have to await it before sending the response. We also only have 10 seconds before the lambda times out.
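To make that concrete, here is a minimal sketch of a hypothetical Next.js API route (the endpoint URL and payload are made up for illustration, and it assumes a server-side fetch is available, e.g. the Next.js polyfill or node-fetch):

```js
export default async function handler(req, res) {
  // BAD: fire-and-forget. The lambda can freeze right after the response
  // is sent, so this request may never actually complete.
  // fetch('https://example.com/notify', { method: 'POST' })

  // GOOD: await the side effect before responding.
  await fetch('https://example.com/notify', {
    method: 'POST',
    body: JSON.stringify({ event: 'lesson-submitted' })
  })

  res.status(200).json({ success: true })
}
```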
Goal
With c0d3-app being a Next.js app hosted on Vercel, our Express backend gets converted into serverless lambda functions, and I think it will be best if we have a central understanding of how that affects the application. Ideally we can come up with a set of best practices, use the same pattern throughout the codebase, and move this design doc to the wiki when finalized.
Architecture
TODO: Add diagram of basic lambda spin up -> loop(freeze -> thaw) -> kill process
Research
My initial thought was to make sure 3rd party services always return a status 202, like js5/problem 7, but the reality is that this still adds extra round trip time, and we are not in control of all 3rd party APIs.
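For reference, this is roughly what that "ack first" idea looks like on the service side (a hypothetical Express endpoint, not our actual code); it only helps for services we run ourselves on a long-lived server:

```js
const express = require('express')
const app = express()
app.use(express.json())

// Placeholder for whatever processing the service really does
const doSlowWork = async payload => {
  // ...send the email, post to Discord, etc.
}

app.post('/webhook', (req, res) => {
  // Acknowledge immediately so the caller's fetch resolves fast
  res.status(202).json({ accepted: true })
  // Finish the real work after responding; fine here because this
  // service is a long-running server, not a lambda
  doSlowWork(req.body)
})

app.listen(3001)
```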
I just found a blog post that goes into great detail on how lambdas work and suggests something like a res.end(null, null, () => /* action to wait on */) pattern. I'm still not quite sure how that callback works, or whether it will be easy to apply that pattern when our GraphQL server is handling the response.
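For context, Node's signature is response.end([data[, encoding]][, callback]), where the callback runs once the response stream has finished. A rough, untested sketch of how I read that suggestion (the handler and the deferred work below are placeholders, and it assumes a fetch implementation such as node-fetch):

```js
const handler = (req, res) => {
  // Placeholder for whatever follow-up work we need after responding
  const deferredWork = () =>
    fetch('https://example.com/notify', { method: 'POST' }).catch(console.error)

  // res.end([data[, encoding]][, callback]) - the callback fires once the
  // response has finished sending, which is where the blog suggests doing
  // the follow-up work
  res.end(JSON.stringify({ success: true }), 'utf8', () => {
    deferredWork()
  })
}
```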
Let's start the discussion and get on the same page here :)
I'd add this to the issues with serverless. I resorted to reverting my PR and using a local storage check because I couldn't make it work with Sequelize (considering that someone wrote a very lengthy guide about setting up Sequelize for AWS, I wasn't the only one). And while we have migrated away from Sequelize (yay!), the same problem could happen with Prisma: one container uses all the connections, AWS freezes it and thaws another container, but the max connection limit is already exhausted.
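For what it's worth, here is a minimal sketch of the mitigation Prisma's docs suggest for serverless (the connection string value and module layout are assumptions, not our current setup): cap the pool per container and reuse one client across invocations.

```js
// prisma.js - shared client (sketch, not our actual file)
import { PrismaClient } from '@prisma/client'

// Reuse one client per container so a thawed container doesn't open a
// fresh pool on every invocation. Pairing this with a small pool in the
// connection string, e.g.
//   DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=1"
// keeps any single container from holding the database's whole
// connection budget.
export const prisma = globalThis.prisma || new PrismaClient()
globalThis.prisma = prisma
```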
Not really related to this issue, but this blog post goes over some technical details on how AWS Lambda works; saw it on HN today. Might get something useful out of it though.
https://www.bschaatsbergen.com/behind-the-scenes-lambda
HN discussion https://news.ycombinator.com/item?id=27792951
A possible utility function, based on the Sentry fix mentioned in #985. It is some serious "monkey patching", as they call it, but I think it might work as a general utility.
Here is a rough sketch of how it might go down (res is passed in so it can be used anywhere a response object is available):
```js
const willResolve = (res, thePromise) => {
  // Save a reference to the original res.end
  const origEnd = res.end

  // Make a new end function that waits for the promise first
  async function newEnd(...args) {
    // Wait for the promise to settle (swallow rejections so the
    // response still gets sent even if the side effect fails)
    await thePromise.catch(() => {})

    // Then call the real res.end
    return origEnd.call(this, ...args)
  }

  // Swap in the new function so the response waits for the promise when sent
  res.end = newEnd

  // Return the original promise and continue on :)
  return thePromise
}
```
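And a rough sketch of how it could be used from a resolver (the resolver and helper names below are made up for illustration, and it assumes res is available on the GraphQL context):

```js
// Hypothetical helpers, for illustration only
const notifyDiscord = async args => {
  // post to a Discord webhook
}
const saveSubmission = async args => {
  // write to the database
  return { id: 1, ...args }
}

// Hypothetical resolver using willResolve
export const submitExercise = async (_parent, args, { res }) => {
  // Patch res.end so the eventual response waits for the Discord call
  willResolve(res, notifyDiscord(args))

  // Resolver logic continues as normal; when the server finally calls
  // res.end, the patched version awaits notifyDiscord(args) first
  return saveSubmission(args)
}
```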