Update spec.selector in r11s deployment manifests
Description
Updates the k8s Deployment manifests for Routerlicious so that spec.selector.matchLabels matches all of the label values in spec.template.metadata.labels instead of just one of them (see the sketch below).
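For illustration, here's a sketch of what the change amounts to in a manifest; the service name, image, and label values below are placeholders, not the actual r11s values:

```yaml
# Hypothetical Deployment snippet; names and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 2
  selector:
    matchLabels:
      # Before this change, only one of the template labels appeared here
      # (e.g. just `app: example-service`). Now the selector mirrors every
      # label in spec.template.metadata.labels:
      app: example-service
      component: routerlicious
  template:
    metadata:
      labels:
        app: example-service
        component: routerlicious
    spec:
      containers:
        - name: example-service
          image: example/service:latest  # placeholder image
```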
The Kubernetes documentation is a bit confusing on this topic: it says that if several Deployments have overlapping spec.selectors, they can start competing to manage the same pods and cause unexpected issues. However, that seems to apply only to pods created manually (with labels that match the selectors of more than one Deployment/ReplicaSet). Pods created by Deployments (or technically, by the ReplicaSets those Deployments create) also carry metadata.ownerReferences values, which Deployments/ReplicaSets use to know exactly which pods they should manage. For pods that have a value there, it dictates who manages them; for pods without one, the label selectors come into play and Deployments/ReplicaSets can start competing.
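As a concrete (illustrative) example, this is the shape of the ownerReferences entry that a Deployment-created pod carries; the names and UID here are made up:

```yaml
# Metadata of a pod created by a ReplicaSet (which was in turn created by a
# Deployment). The controller identifies pods it owns primarily via this
# ownerReference, not just via label selectors.
apiVersion: v1
kind: Pod
metadata:
  name: example-service-5d9c7b6f4-abcde   # generated name, illustrative
  labels:
    app: example-service
    component: routerlicious
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: example-service-5d9c7b6f4     # the ReplicaSet that owns this pod
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
      controller: true
      blockOwnerDeletion: true
```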
Since we (should) only have pods created by the Deployments themselves, the label selectors (spec.selector.matchLabels) don't really matter right now, but I wanted to update them to match the labels that each Deployment's pods end up with (spec.template.metadata.labels) so we further minimize the risk of weird behavior wherever these charts are deployed.