
Added Cache implementation & Invalidation for InMemory & Redis provider

Open ShreyasJejurkar opened this issue 6 years ago • 30 comments

This PR adds caching behavior to the Clean Architecture template. While building this feature, my main focus was on implementing it without introducing breaking changes to the current implementation and without adding too much complexity to get caching up and running. Right now there are implementations for InMemory and Redis cache, and one can add other implementations as well. The following FAQs should answer most questions.

FAQ

  1. How can I start caching my query results?

    Ans: To do so, you just have to decorate your query request with the CacheQueryResponseAttribute attribute, like below.

      [CacheQueryResponse]
      public class GetCustomerList : IRequest<List<CustomersDto>>
      {
    
      }
    

    By decorating it, your query will start caching its results. No other configuration is required.

  2. How does the cache key for a query get generated? Ans: The cache key is generated from the public properties of the query and their respective values. For example, one cache key might look like GetCustomerDetailById|CustomerId|1 for fetching the details of the customer with Id = 1.
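
     As an illustration only (this is not the PR's actual code, and the helper name is made up), a default key of that shape could be built from a request's public properties via reflection:

     ```csharp
     using System.Collections.Generic;
     using System.Linq;

     // Hypothetical sketch: builds "TypeName|Prop1|Value1|Prop2|Value2|..."
     // from a request's public properties, matching the key format described above.
     public static class CacheKeySketch
     {
         public static string GenerateKey(object request)
         {
             var type = request.GetType();
             var parts = new List<string> { type.Name };
             foreach (var prop in type.GetProperties().OrderBy(p => p.Name))
             {
                 parts.Add(prop.Name);
                 parts.Add(prop.GetValue(request)?.ToString() ?? "null");
             }
             return string.Join("|", parts);
         }
     }
     ```

     Sorting the properties by name keeps the key stable regardless of declaration order.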

  3. My query has 10 public properties and I want to set a custom cache key for it. How can I do that?

    Ans: In the real world, a query can have a number of properties, and no one wants the cache key to get big and start consuming space and memory. Sometimes one may also want to set a custom cache key for a query rather than using the default one based on its properties. To set a custom cache key and other options (such as cache expiration), just use the other overloads of CacheQueryResponseAttribute. The following is an example.

    [CacheQueryResponse("MyCustomCacheKey", 60000)]
    public class GetCustomerOrderDetail : IRequest<CustomerOrderDto>
    {
        //.....
       //.. lot of properties
     }
    
  4. I want to use the Redis cache instead of InMemory. How do I configure it?

    Ans: Right now it is implemented in such a way that you can configure these options in the appsettings.json file of the WebUI project. Based on the settings in that file, the infrastructure layer registers the corresponding cache implementation in the DI container.

    "UseRedisCache":  true,
    "UseInMemoryCache": false,
    
  5. I want to configure my own version of caching and don't want to use Redis or InMemory.

    Ans: One can do that by implementing the ICacheService interface, which contains the most widely used cache methods; implement them as needed. Make sure you put that class inside the Services folder under the Infrastructure project, then configure the service in the DI container in the Infrastructure/DependencyInjection.cs file, like below.

    services.AddMyCustomCache();
    services.AddTransient<ICacheService, MyCustomCacheService>();
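
    For reference, a minimal custom implementation might look like the sketch below. The exact ICacheService method signatures are defined in the PR, so the getter/setter names here are assumptions (only RemoveCacheValue appears verbatim in this thread), and the backing dictionary is just a stand-in for illustration:

    ```csharp
    using System.Collections.Concurrent;

    // Hypothetical in-memory stand-in for a custom cache provider.
    // Method names are illustrative; match them to the actual ICacheService.
    public class MyCustomCacheService // : ICacheService
    {
        private readonly ConcurrentDictionary<string, object> _store = new();

        public T GetCacheValue<T>(string key) =>
            _store.TryGetValue(key, out var value) ? (T)value : default;

        public void SetCacheValue<T>(string key, T value) => _store[key] = value;

        public void RemoveCacheValue(string key) => _store.TryRemove(key, out _);
    }
    ```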
    
  6. How do I invalidate cached data if my command changes it? Ans: Decorate the command with the InvalidateCache attribute, providing the query or list of queries whose cache needs to be invalidated.

    [InvalidateCache(typeof(GetTodosQuery))]
    public class CreateTodoListCommand : IRequest<int>
    {
       public string Title { get; set; }
    } 

I am really looking forward to hearing from the community and from @jasontaylordev as well about the implementation and approach. Feel free to ask any questions.

The following discussion might be outdated!

ShreyasJejurkar avatar Apr 23 '20 11:04 ShreyasJejurkar

Hi @MCCshreyas , I was just thinking about how this would fit into clean architecture when I was directed towards this pull request.

I like how you have control over which queries can be cached, and for how long.

However, have you thought about cache invalidation? That is always the tricky part of caching. For example, if there was an UpdateCustomerOrderDetailCommand, it would somehow need to communicate this to GetCustomerOrderDetailQuery to invalidate the current cache.

danielmackay avatar Apr 24 '20 01:04 danielmackay

However, have you thought about cache invalidation? That is always the tricky part of caching. For example, if there was an UpdateCustomerOrderDetailCommand, it would somehow need to communicate this to GetCustomerOrderDetailQuery to invalidate the current cache.

Yes, of course. When I was implementing this behavior at the very first stage, I was caching every single request coming into the Application layer. Then I got to thinking: why should we cache the response from a command? So I tried to implement the pipeline in such a way that the behavior only applied to queries and not commands, but I found I was introducing a new interface on top of IRequest and all the corresponding plumbing, which added unnecessary complexity to the existing design, so I dropped that approach as well.

So in the end, I think it is better to let the user decide which queries they want to cache, rather than caching every query response. The user knows very well which data may change frequently and which may not, so based on that they can ask the behavior to cache the query result by implementing ICache&lt;TRequest&gt;, and can also modify the corresponding cache properties by implementing the SetCacheOptions method.

Your concern is valid, and I will work on it as well, but right now the implementation works well for data that does not change frequently (the simplest example: dropdown options fetched from a database table), or data for which you know roughly how often it changes, so that you can tweak the cache expiration time accordingly.

ShreyasJejurkar avatar Apr 24 '20 05:04 ShreyasJejurkar

Thanks a lot, @MCCshreyas. I really appreciate your efforts - this is something I've been wanting to add.

I am currently doing some similar work on authorisation, you can review that here https://github.com/jasontaylordev/CleanArchitecture/compare/authorization and https://github.com/jasontaylordev/CleanArchitecture/issues/56#issuecomment-617802759.

I'd like to see if the caching approach can be in line with the authorization approach. Please switch from using an ICache interface to a ResponseCacheAttribute. You can use the ASP.NET Core approach for guidance; https://docs.microsoft.com/en-us/aspnet/core/performance/caching/response?view=aspnetcore-3.1.

Then, for authorization and caching, users familiar with ASP.NET Core will have a head start.

Thanks again. 😊

jasontaylordev avatar Apr 24 '20 05:04 jasontaylordev

@jasontaylordev Thank you very much for the reply.

I understand that you suggest using an attribute rather than an interface implementation; I thought about the same a while ago when I was implementing this behavior. For example, the attribute would look like this: [Cache(CacheKey, ExpirationTime)]

But I came across one user scenario where the user needs to set a custom cache key and has a method that generates that key from the request parameters, like below.

public string GenerateCustomCacheKey(TRequest request)
{
    return $"{request.GetType().Name} | blah blah";
}

Then how should the user take advantage of the cache attribute parameter? We cannot call a method directly in an attribute, like below. (Reference - https://stackoverflow.com/questions/36622223/how-to-call-a-method-in-using-attributes-parameters)

[Cache(GenerateCustomCacheKey(GetCustomerQuery), 60000)]

The above line of code will give a compile-time error.

ShreyasJejurkar avatar Apr 24 '20 05:04 ShreyasJejurkar

@jasontaylordev - I agree with @MCCshreyas and believe that the cache key should be able to be set dynamically in code, unless you can have a 'vary-by-param' attribute like MVC output caching does. However, I think even that would be limiting.

One use case is a GetCustomerDetailsQuery where the cached data would differ by customer ID.

Another example would be a search query, where there could be many parameters that would all need to be combined to make up the cache key.

danielmackay avatar Apr 24 '20 06:04 danielmackay

@danielmackay I would like to hear your thoughts on the cache invalidation implementation!

ShreyasJejurkar avatar Apr 27 '20 17:04 ShreyasJejurkar

I think the best way would be to have a domain or application event that can be fired when an entity has been modified. We can then register handlers for that event that will clear the cache when fired.

For example, if you have a CreateCustomerCommand, this would fire the event, then any Customer related query could handle this event and clear the cache potentially using a custom cache key defined by the command.

As for the implementation, I'm not sure whether you would want to implement a DomainEvents or ApplicationEvents class, or whether MediatR could be used with its pub/sub functionality.
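
To make the MediatR option concrete, here is a rough sketch. The notification and handler names are made up for illustration, the ICacheService/RemoveCacheValue call assumes the interface from this PR, and the hard-coded key follows the default key format described in the FAQ:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical notification published by a command handler after SaveChanges.
public class CustomerChanged : INotification
{
    public int CustomerId { get; set; }
}

// Hypothetical handler that clears the related query's cached entry.
public class CustomerChangedCacheHandler : INotificationHandler<CustomerChanged>
{
    private readonly ICacheService _cacheService;

    public CustomerChangedCacheHandler(ICacheService cacheService) =>
        _cacheService = cacheService;

    public Task Handle(CustomerChanged notification, CancellationToken cancellationToken)
    {
        // Key format assumed from the FAQ: <QueryName>|<Property>|<Value>
        _cacheService.RemoveCacheValue(
            $"GetCustomerDetailById|CustomerId|{notification.CustomerId}");
        return Task.CompletedTask;
    }
}
```

The command and the query handler never reference each other; only the notification type is shared.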

@jasontaylordev - do you have any thoughts?

danielmackay avatar Apr 27 '20 21:04 danielmackay

@danielmackay
So what I understand is that, rather than having a PostProcessor operation for the command, you prefer firing an event (MediatR notification), receiving it at the other end, and clearing the cache for the affected query. It's a good option, but I fail to see the advantage of the event pattern over a PostProcessor operation for the command, because both do the same work. If anything, the post-processor looks like the better place, because that behavior can be used for several other things besides clearing the cache, such as committing changes to the database in the case of transactions.

What I want to highlight is that we are treating the PostProcessor operation as a cleanup phase for the command. That's why I put the cache-clear operation for the query there: I didn't want to include it in the IRequestHandler and make it fat. This way, with the help of the PostProcessor behavior, we can separate the logic of saving data to the database and clearing the query cache into a separate class.

@jasontaylordev Please add your thoughts 😁😁

ShreyasJejurkar avatar Apr 28 '20 06:04 ShreyasJejurkar

@MCCshreyas - apologies, I hadn't seen your implementation before I made my previous comment.

I have gone through your implementation, and I see how it works. The post processor has a potential code smell in that the command has to reference all related queries and be able to instantiate them in order to build up the list of queries to be invalidated.

I still think using domain events (it doesn't have to be MediatR) would be simpler. If the commands fire a domain event and the queries handle that event, we decouple the link between them, and they have no knowledge of each other. The queries know how to create a key for their data, so that logic can stay in one place, and they simply call CacheService.ClearCache(key).

There are a couple of ways this could be done:

  1. Query handlers implement the cache handler directly

  2. Have separate cache handlers for each query, but then the key generation would have to be encapsulated.

Perhaps the first option is simplest?

This is just a suggestion, and what you have done already is still very good.

danielmackay avatar Apr 28 '20 23:04 danielmackay

Hi @danielmackay, thanks for the reply and review. Yeah, I was thinking the same when I was implementing this: there is a code smell in the PostProcessor command handler, since we need to instantiate all the queries in order to clear their cache. I think I should try to implement event-based cache clearing; that would be much cleaner. There are still some problems I know I'm going to face while implementing the events, but I don't want to discuss them until I've tried to figure out how to fix them. I will let you know, @danielmackay, once I've implemented it.

@jasontaylordev In case of any suggestion or direction for implementation, please let me know!

Thanks.

ShreyasJejurkar avatar Apr 29 '20 04:04 ShreyasJejurkar

Hi @danielmackay, right now I am trying to implement the event-based pattern, but I need to discuss a couple of problems. Suppose a customer updates his name, and in UpdateCustomerNameCommandHandler we publish a CustomerNameUpdated event, passing the CustomerId. Then, in CustomerNameUpdatedEventHandler, how do I know which queries are affected by this command execution?

Also, even if we somehow learn that the GetCustomerById query is affected, we can generate the cache key from the parameter (CustomerId), but what happens if the user has configured a custom cache key for it? How would I know? Yes, we can find the custom cache key by calling the SetCacheOptions method; that's what I did in the CacheInvalidatorPostProcessor behaviour, in CacheHelper.GenerateCacheKeyFromRequest. But for that we again need an instance of the query to call that method via reflection. That's why, if you look at my implementation, I store the query type and its object (implementation) in the InvalidateCacheForQueries dictionary, so that in CacheInvalidatorPostProcessor I can use that object to call SetCacheOptions and get the custom cache key if the user has configured one, or generate the default cache key from the parameters, and then clear the cache for that query.

For example - if you update the name using an UpdateCustomerNameById command passing Id = 1, then I will clear the cache only for GetCustomerById where Id = 1, and not any others.

We could use domain events, but then again I would need to pass the type and object as event parameters so that I can evaluate the cache key for that query. Rather than adding a domain event to each and every command handler, I created the PostProcessor behaviour so that I can do most of the work there instead of writing cache-clear logic in each of the events.

I know it's a bit of a code smell that we need to reference each of the queries in the PostProcessor. If someone has a better approach for determining the affected queries based on the command, I am ready to implement it as well.

ShreyasJejurkar avatar Apr 29 '20 06:04 ShreyasJejurkar

@taylorc Thanks for the kind words! 😊 Right now I am working on improving the cache invalidation part; it is implemented and works well, but based on community members' feedback I am looking to improve it further.

Yeah, tests are missing, because I am still not very sure about some parts of the implementation, which might change in the future.

BTW, if you would like to add tests to the project based on the current implementation, that would be great to know, and you are welcome to write them.

@MCCshreyas A pull request on your Fork might be appropriate.

Sure!

ShreyasJejurkar avatar Apr 29 '20 06:04 ShreyasJejurkar

@MCCshreyas - I understand the problems you are having and why domain events might not work.

I think the current solution has some loopholes also. Consider this...

Let's assume that every query has a custom cache key based on all the parameters in the query, and that the following cache keys and their data have been added (all items belong to the same list):

  • "TodoList-1"
  • "TodoItem-1"
  • "TodoItem-2"
  • "TodoItem-3"

The current sample code to register the queries to invalidate looks like:

public Task Process(CreateTodoItemCommand request, int response, CancellationToken cancellationToken)
{
    QueriesList.Add(typeof(ExportTodosQuery), new ExportTodosQuery { ListId = request.ListId });
    QueriesList.Add(typeof(GetTodosQuery), new GetTodosQuery { });

    return Task.CompletedTask;
}

Now if we update TodoItem-2, how can your post-processor create the queries needed to clear at least "TodoList-1" and "TodoItem-2"? This could get really complicated, and I don't see how we can adjust the code above within each command to re-create the query objects needed to clear the relevant cached data. This is not an easy problem to solve.

One thought I had is that we could store a cache "category" along with the cache key. Then when an entity belonging to a specific category was modified (e.g. "todo" category), then we would clear all cache keys that were registered under that category.

This of course, would clear more data than needed, but seems like a simple way to solve the problem.

I think it might even be possible, to get this to work with attributes as Jason first mentioned.

If this makes sense to you, and you agree with the approach, I could raise a PR on your branch? If so, do I need to fork it first?

danielmackay avatar Apr 29 '20 12:04 danielmackay

As developers we know that if the second todo item gets updated, we need to clear the cache for the query that returns the full list (GetTodosQuery), and also for the query that fetched that specific record (e.g. a GetTodoItemById query). So we would write the following PostProcessor handler for it.

public Task Process(UpdateTodoItemCommand request, int response, CancellationToken cancellationToken)
{
    // We can get the Id from the request.
    QueriesList.Add(typeof(GetTodoItemById), new GetTodoItemById { Id = request.Id });

    // As GetTodosQuery doesn't take any parameters, we can instantiate it directly.
    QueriesList.Add(typeof(GetTodosQuery), new GetTodosQuery { });

    return Task.CompletedTask;
}

I know it's not that good an approach, but it does work. And in the end we are giving the user the choice of which queries to cache; we are not caching every single query. What I need to work on is some way to avoid having to specify all the queries here. We should give the user an option to configure cache invalidation, so that when something is updated, we know which query's cache to refresh; we can use a dictionary data structure for it. That's the approach I tried to implement here.

One thought I had is that we could store a cache "category" along with the cache key. Then when an entity belonging to a specific category was modified (e.g. "todo" category), then we would clear all cache keys that were registered under that category.

This approach seems OK to me. But consider a production system: say for dropdown values (which are fetched from the DB) we have a cache category named 'dropdown', and one of the dropdown values gets updated, added, or deleted. For a single update/delete/add operation, will you clear the cache for all dropdown values? I think that would have a huge impact at a bigger scale.

If this makes sense to you, and you agree with the approach, I could raise a PR on your branch? If so, do I need to fork it first?

Yes, we can try your approach as well. You can open a PR against my branch and we can see how it goes, no worries. Or, even better, we can offer the user both options for implementing cache invalidation, and they can decide based on their requirements and the data that needs caching.

ShreyasJejurkar avatar Apr 29 '20 14:04 ShreyasJejurkar

Hi guys,

my two cents: I agree with the statements above that the current implementation of invalidating the cache is not ideal, and there are a few issues with this general approach that make it a bit of a deal-breaker for me in its current form:

  1. It breaks the Open/Closed principle. If I add a new query that gets Todos that should ideally be hitting the cache, I'd have to modify the CreateTodoItemPostProcessor to add another invalidation rule.
  2. I would need to know that the CreateTodoItemPostProcessor is even there and that I would need to modify it. This goes against one of my favourite things about this solution in that it should be easy to do things right, hard to do things wrong. I can easily see a developer missing this requirement down the line when the application moves into support-land.
  3. Caching is traditionally a cross-cutting concern. For Clean Architecture, that to me means that as soon as we introduce logic related to any cross-cutting concern into the Request or Handler, we've broken that rule, e.g. the ICache<T> interface. A possible exception to this could be if we just included second-level caching as part of your EF Core DbContext query instead. https://github.com/VahidN/EFCoreSecondLevelCacheInterceptor#4--to-cache-the-results-of-the-normal-queries-like

So far, I think that Jason's suggestion of using a caching attribute makes the most sense, but even this approach has some large drawbacks, especially when it comes to cache invalidation. We could create something like: [ResponseCache(VaryByProperties = "ListId", Duration = 30)] which would be placed on top of a query like ExportTodosQuery, but this does not allow for indefinite caching though as Duration is a required field. If we wanted to cache indefinitely, we'd also need to come up with a cache invalidation strategy, which would no doubt be pretty difficult.

GFoley83 avatar May 06 '20 11:05 GFoley83

@GFoley83 Thanks for the reply. Yeah, I know the current cache invalidation is not that good, and one can easily forget to add entries to the PostProcessor, as you said; I agree with that. Do you have an approach for invalidating the cache? If so, I will try to implement it and see how it goes!

So far, I think that Jason's suggestion of using a caching attribute makes the most sense, but even this approach has some large drawbacks, especially when it comes to cache invalidation. We could create something like: [ResponseCache(VaryByProperties = "ListId", Duration = 30)] which would be placed on top of a query like ExportTodosQuery

If you look at the earlier comments on this topic, I also wanted to use an attribute, but there are some user scenarios where attributes basically won't work. Please see this comment.

ShreyasJejurkar avatar May 09 '20 09:05 ShreyasJejurkar

Hi @MCCshreyas

I've added an example here: https://github.com/GFoley83/CleanArchitecture/commit/c95f17e6782affebf62ba6c1f7edbcfb19683468

TL;DR My commit includes the ability to:

  • Cache requests using a CacheAttribute. See ExportTodosQuery.
  • Invalidate a cached request by using a CacheInvalidateAttribute on any command that should invalidate a cached query. See almost any TodoItem or TodoList command e.g. UpdateTodoItemCommand.
  • A second, preferred method for invalidating a cache using MediatR notifications by sending notifications when TodoItems or TodoLists are created, updated, deleted etc.

There's quite a bit in that commit so I'll just touch on the important bits.

Note: if you do checkout and run my commit, both cache invalidation methods are enabled. So when the post process handler runs for the CacheInvalidateAttribute, the cached item will have already been removed by the notification handler.

CacheAttribute

CacheAttribute is placed on a request and at a minimum needs a comma separated list of property names that exist on the request, which will be used when generating the cache key:

[Cache("ListId")]
public class ExportTodosQuery : IRequest<ExportTodosVm>
{
    public int ListId { get; set; }
}

This will create a cache key of ExportTodosQuery|ListId|<int value>. By default, the request type name is used as the cache key prefix, but you can also set your own. A cache duration can also be set. I've purposefully kept the property names mandatory, as I think creating a cache item shouldn't be too easy: the cache key is the most important part.
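
The attribute class itself isn't shown in this comment, but based on the usage above it presumably looks something like the following sketch; the CacheKeyPrefix and DurationInSeconds property names are guesses (only PropertiesUsedForCacheKey appears in the attributes shown in this thread):

```csharp
using System;

// Hypothetical shape of the CacheAttribute described above: a comma-separated
// list of property names used for the key, plus optional prefix and duration.
[AttributeUsage(AttributeTargets.Class)]
public class CacheAttribute : Attribute
{
    public string PropertiesUsedForCacheKey { get; }
    public string CacheKeyPrefix { get; set; }   // defaults to the request type name
    public int DurationInSeconds { get; set; }

    public CacheAttribute(string propertiesUsedForCacheKey) =>
        PropertiesUsedForCacheKey = propertiesUsedForCacheKey;
}
```

A pipeline behavior would then read this attribute from the request type via reflection and resolve the listed properties to build the key.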

CacheInvalidateAttribute

The main problem with invalidating cached requests via attribute is that there is no easy way to link a query to the N number of commands that should invalidate its cache. Take ExportTodosQuery, which I used in my commit as an end-to-end caching example. If you want to invalidate a cached item for ExportTodosQuery then you need to be able to link that cached item to the following commands:

  • DeleteTodoListCommand
  • UpdateTodoListCommand
  • CreateTodoItemCommand
  • DeleteTodoItemCommand
  • UpdateTodoItemCommand
  • UpdateTodoItemDetailCommand

The problem here is that some of these commands have no shared properties or logic with ExportTodosQuery. Take for example DeleteTodoItemCommand, which in the current version of CA only needs the Id of the todo item that you want to delete. Even though it could easily be linked to a cached ExportTodosQuery, if there's no ListId then there's no way to invalidate the cache. In my example I've added the ListId to the command request so that I have something to link it to ExportTodosQuery. To be fair, you could make a case that it should be there anyway.

[CacheInvalidate(CachedQuery = typeof(ExportTodosQuery), PropertiesUsedForCacheKey = "ListId")]
public class DeleteTodoItemCommand : IRequest
{
    public int Id { get; set; }
    public int ListId { get; set; }
}

For UpdateTodoListCommand, ListId is just Id so I also added the ability to map property names e.g.

[CacheInvalidate(CachedQuery = typeof(ExportTodosQuery), PropertiesUsedForCacheKey = "ListId", MatchToProperties = "Id")]
public class UpdateTodoListCommand : IRequest
{
    public int Id { get; set; }
    public string Title { get; set; }
}

The goal here is that once we've identified which commands we should place an attribute on to invalidate our cached query, we want to have enough data in the command to reliably recreate the cache key.

Cache invalidation by notification

This is my preferred way of invalidating the cache. I've added enums to model the event type and notification models to broadcast the event data. For example, in the DeleteTodoItemCommand:

. . . . 
await _context.SaveChangesAsync(cancellationToken);
await _mediator.Publish(new TodoNotification()
{
    EventType = TodoEvent.Deleted,
    TodoItem = entity
}, cancellationToken);
. . . .

Then I have two notification handlers set up specifically to invalidate the ExportTodosQuery when a TodoItem is created, updated or deleted, and the same again for a TodoList. The benefit of this approach is that I can add as many handlers as I like for a notification. If I add a new command in the future that should also invalidate a cached query, I don't need to update the existing notification handlers; I can just create a new one for that event. e.g.

public Task Handle(TodoListNotification notification, CancellationToken cancellationToken)
{
    // There's nothing to invalidate for newly created TodoLists
    if (notification.EventType == TodoListEvent.Created)
    {
        return Task.CompletedTask;
    }

    var cacheKey = CacheHelper.GenerateKey(typeof(ExportTodosQuery),
        CacheConstants.ExportTodosQueryPropertyCacheKey,
        notification.TodoList.Id.ToString());

    _cacheService.RemoveCacheValue(cacheKey);

    return Task.CompletedTask;
}

There are pros and cons to both approaches. Having to add a CacheInvalidateAttribute onto every command that should invalidate an associated cached object is cumbersome. You could easily miss a command or forget to add the attribute if you added a new command that technically should have it.

GFoley83 avatar May 10 '20 09:05 GFoley83

Hi @GFoley83 , Thanks for the reply. Really appreciated.

The problem here is that some of these commands have no shared properties or logic with ExportTodosQuery. Take for example DeleteTodoItemCommand, which in the current version of CA only needs the Id of the todo item that you want to delete. Even though it could easily be linked to a cached ExportTodosQuery, if there's no ListId then there's no way to invalidate the cache. In my example I've added the ListId to the command request so that I have something to link it to ExportTodosQuery. To be fair, you could make a case that it should be there anyway.

This is funny, though: I was in the same situation, where ListId was missing from some commands, so I added it to get things working.

Yeah, I went through the file diff of that commit to understand how it works. But there are still a couple of questions I need to ask.

Suppose a company wants to set a cache key with the following format <company_name>|<module_name>|<application_name>|<request_name>|Parameters, and they have a specific algorithm or method that generates the cache key from the request parameters, like below.

public static string GenerateCacheKey(TRequest request)
{
   // logic
  return cacheKey;
}

Then how would they call this method in the attribute? The following code will throw a compile-time error, and we also cannot pass an argument there: [Cache(GenerateCacheKey(??))]

I also notice that reflection is used intensively to get references to the actual properties based on the CacheAttribute string parameter, and likewise in CacheInvalidationAttribute. Also, if a parameter is nullable, how is the cache key handled?

There is also the performance cost of all that reflection in the behaviors: it will impact response times, since the reflection code in the behavior executes for every single request.

Also, changing the parameter list requires changing the cache key attribute, which might be a little error-prone: one might remove a parameter from the query but miss removing it from the Cache attribute, and then we need to handle a runtime NullReferenceException or some ReflectionException in the behaviors; I don't know how it looks to have exceptions thrown directly from behaviors. That's why, in my approach, if the user does not specify a cache key, I use the properties automatically to generate one, and if a key is specified, the behavior uses it instead.

There is also one more issue, which existed in my implementation as well; it's not exactly a problem, but it might look like a code smell: we need to reference every single query type on the command via multiple CacheInvalidator attributes, and you again have to use reflection in the behaviors to get references to those queries. To overcome this same issue I used a SharedContext to hold the list of queries and their corresponding objects in a dictionary, so that I can avoid reflection in the behaviors. You can read about SharedContext here.

These are just my initial thoughts on the implementation from looking at the diff; I will check out the commit and dive deeper into it.

In the meantime, I'm looking forward to the thoughts of other members.

ShreyasJejurkar avatar May 10 '20 10:05 ShreyasJejurkar

Hi @MCCshreyas, I think we're on drastically different pages when it comes to the requirements for caching. I'm looking for a solution that doesn't let cross-cutting concerns into the application layer, doesn't break any design principles (e.g. SOLID), and, as Jason asked for, caches via attributes.

A company wants to set a cache key with the following format <company_name>|<module_name>|<application_name>|<request_name>|Parameters and they have specific algorithm or method that generate the cache key based on request parameter like below

This doesn't sound like a great requirement to me, but in any case, if you do actually need this, then you wouldn't use attributes for caching. The point of attributes (which I think is why Jason was looking for an attribute-based caching solution) is that they're meant to be lightweight and not complex. They don't support methods, by design.

And also about performance downgrade because of lots of reflection into the behaviors, as it will be having a greater impact on response time as the behavior reflection code is getting executed for every single request.

Your solution uses as much reflection as this one does (e.g. CacheHelper and CreateTodoItemPostProcessor), so this seems contradictory to me. Regarding the performance impact: potentially, but in my opinion it would most likely be negligible. If you use attributes, you have to use reflection. The performance trade-off (if any) is acceptable for a simpler solution, in my opinion.

And also, if the parameter is nullable, how is it going to handle the cache key? And changing the parameter list also requires changing the cache key attribute

I actually took that feature out. Currently it's not nullable by design, but it easily could be if I wanted to generate the cache key from every property automatically, which, for when you want to invalidate the cache, is a bad idea.

I'm not looking to get into a whose-solution-is-better debate. I'm actually not a fan of using attributes at all, period! Jason was looking for a solution for request caching via attributes, and you were looking for a way to invalidate the cache that doesn't involve breaking the open/closed principle or adding logic that should ideally be a cross-cutting concern. I provided two different solutions for that part. My commit is a first stab at these approaches and, if nothing else, highlights the challenges around using attributes for caching and some of the design thinking required to implement it.

Also, I think you missed my other solution using notifications for invalidating the cache, which is my preferred solution and is much cleaner than using a cache invalidation attribute. See "Cache invalidation by notification" above.
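For anyone following along, notification-based invalidation could look roughly like this (a sketch with illustrative names, not the actual code from my commit): the command handler publishes a MediatR notification after a successful write, and a handler removes the affected cache entries.

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Caching.Memory;

// Published by the command handler once the write has succeeded.
public class TodoItemCreated : INotification
{
    public int TodoItemId { get; set; }
}

// Invalidates the cached query results that the write makes stale.
public class InvalidateTodoItemsCache : INotificationHandler<TodoItemCreated>
{
    private readonly IMemoryCache _cache;

    public InvalidateTodoItemsCache(IMemoryCache cache) => _cache = cache;

    public Task Handle(TodoItemCreated notification, CancellationToken cancellationToken)
    {
        _cache.Remove("GetTodoItemsList"); // key convention is assumed here
        return Task.CompletedTask;
    }
}
```

A nice property of this approach is that multiple handlers can subscribe to the same notification, so each feature invalidates only the entries it owns.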

GFoley83 avatar May 10 '20 20:05 GFoley83

I'm not looking to get into a whose-solution-is-better debate. I'm actually not a fan of using attributes at all, period! Jason was looking for a solution for request caching via attributes, and you were looking for a way to invalidate the cache that doesn't involve breaking the open/closed principle or adding logic that should ideally be a cross-cutting concern. I provided two different solutions for that part.

Me neither. I just highlighted what I feel; maybe I'm wrong as well. And to be clear, we're discussing the problem, not debating: my main goal is to have a solution that is simpler, easy to use, and also efficient.

Also, I think you missed my other solution using notifications for invalidating the cache, which is my preferred solution and is much cleaner than using a cache invalidation attribute. See "Cache invalidation by notification" above

Yeah, I really liked this approach of using notifications for invalidating the cache, and we can also have multiple notification handlers for the same notification, which is quite good. (@danielmackay this is close to the domain events you were referring to, right?)

I'm looking for a solution that doesn't allow cross-cutting concerns into the application layer, break any design principles (e.g. SOLID), and as Jason was looking for it, caching via attributes.

I'm not really getting it: if adding an interface (with nothing mandatory to implement) to TRequest breaks the rule about introducing cross-cutting concerns into the application layer, why doesn't using an attribute break it too? I think an interface or an attribute is just a way to implement the feature. I don't know which is good or bad; both have their pros and cons, but what I feel is that an interface gives more flexibility to customize the behavior, in our case the cache key.

OK, let it be; if an attribute is what is wanted, then it's better to have it.

I will close this PR if needed.

ShreyasJejurkar avatar May 11 '20 05:05 ShreyasJejurkar

I can see the merits of both implementations of the cache. @GFoley83 is right in saying Jason wanted an attribute approach.

In respect to the generation of the cache key, you could change the static CacheHelper to an interface such as CacheKeyProvider and have the template provide a DefaultCacheKeyProvider implementation, allowing users to provide their own provider (pardon the pun).
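A rough sketch of what that could look like (ICacheKeyProvider and DefaultCacheKeyProvider are suggested names, not code from the PR); the default implementation follows the RequestName|Property|Value convention described in the PR description.

```csharp
using System.Linq;

// Pluggable key generation: the template registers the default,
// and users can swap in their own implementation.
public interface ICacheKeyProvider
{
    string GetCacheKey<TRequest>(TRequest request);
}

public class DefaultCacheKeyProvider : ICacheKeyProvider
{
    // Default convention, e.g. "GetCustomerDetailById|CustomerId|1".
    public string GetCacheKey<TRequest>(TRequest request)
    {
        var parts = typeof(TRequest).GetProperties()
            .Select(p => $"{p.Name}|{p.GetValue(request)}");
        return string.Join("|", new[] { typeof(TRequest).Name }.Concat(parts));
    }
}
```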

Agree with both of you that the event-driven approach is nice.

Could we use attributes for caching the object, and then event-based cache invalidation?

In short, my team has had this discussion over the past few weeks with respect to a feature to be implemented in a few sprints' time. The team decided to use a cached service implementation where needed over a MediatR pipeline. For example, in the context of the template, we use an ITodoItemsCachedService that has the IApplicationDbContext and IDistributedCache or IMemoryCache injected into it. Our original plan was closer to @MCCshreyas's, but we ended up with cached services being easier to review in pull requests because the intention was clearer for us.
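For context, a read-through cached service along those lines could be as simple as the following sketch (ITodoItemsCachedService and the 30-second TTL are hypothetical; it assumes the template's IApplicationDbContext exposes a TodoItems set):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical read-through cached service: try the cache first,
// fall back to the EF context on a miss.
public class TodoItemsCachedService : ITodoItemsCachedService
{
    private readonly IApplicationDbContext _context;
    private readonly IMemoryCache _cache;

    public TodoItemsCachedService(IApplicationDbContext context, IMemoryCache cache)
    {
        _context = context;
        _cache = cache;
    }

    public Task<List<TodoItem>> GetAllAsync()
        => _cache.GetOrCreateAsync("TodoItems|All", entry =>
        {
            // A short TTL bounds staleness while still absorbing request bursts.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(30);
            return _context.TodoItems.ToListAsync();
        });
}
```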

taylorc avatar May 11 '20 13:05 taylorc

The team decided to use a Cached Service Implementation where needed over a MediatR pipeline. For example in the context of the template, we use an ITodoItemsCachedService that has the IApplicationDbContext and IDistributedCache or IMemoryCache injected into it

Let me guess: did you use Scrutor for decorating services, so that calls go to the CachedService first and then to the original service to get the data?

ShreyasJejurkar avatar May 11 '20 13:05 ShreyasJejurkar

No, we didn't use Scrutor; I'd never heard of it until now. In the query we plan to inject either the cached service interface or the EF context interface. The reason, for our team, is clear intention in the query. When it came down to it, our use cases were reference data and volatile and/or expensive data. For reference data it makes sense for us to use the cached service. For volatile and/or expensive data it makes sense to cache at the gateway API, appliance, or website endpoint for short periods of time, such as 1-30 seconds. A two-second cache saves about 2000 DB calls in one of our test cases without affecting end users too much with stale data. We may move to an event-driven approach, but we don't see the need as yet.

I guess the moral is that for more expansive use cases teams will solve it themselves, but for simple cases attribute caching might just be enough.


taylorc avatar May 11 '20 13:05 taylorc

@taylorc yeah, OK. There is a way to do the same using Scrutor. Have a look at the following video: https://youtu.be/m77s48tTdfU
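For reference, the Scrutor decoration approach boils down to something like this (the service and decorator names here are illustrative, assuming a cached wrapper that delegates to the inner service on a miss):

```csharp
// Register the real service, then decorate it so callers hit the
// caching wrapper first; Scrutor rewires the registration so that
// CachedTodoItemsService receives the original service via its constructor.
services.AddScoped<ITodoItemsService, TodoItemsService>();
services.Decorate<ITodoItemsService, CachedTodoItemsService>(); // Scrutor extension
```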

ShreyasJejurkar avatar May 11 '20 13:05 ShreyasJejurkar

@taylorc

In respect to the generation of the cache key, you could change the static CacheHelper to an interface such as CacheKeyProvider and have the template provide a DefaultCacheKeyProvider implementation, allowing users to provide their own provider (pardon the pun).

Agreed re: turning CacheHelper into an interface; I just refactored @MCCshreyas's code to suit my needs in order to quickly pull something together. The hardest part is actually not coming up with the cache key; it's knowing how to recreate/retrieve the cache key when you're trying to invalidate it. There needs to be a well-defined set of rules or conventions.

Could we use attributes for caching the object then the event based cache invalidation?

Yep. I should have made this clearer above: this would be the preferred approach among the solutions we have so far.

For example in the context of the template, we use an ITodoItemsCachedService that has the IApplicationDbContext and IDistributedCache or IMemoryCahce injected into it. Our original plan was closer to @MCCshreyas but we ended up with cached services being easier to review in Pull Requests because the intention for us was clearer.

I absolutely take your point about intention; it's one of the side effects of having logic such as caching implemented as a cross-cutting concern. It may well be that we end up making caching a first-class citizen of the application layer and rely on injecting services as and where needed. My personal preference, for the moment at least, is to iterate on a few different solutions to see if we can keep it as a cross-cutting concern without overcomplicating or over-engineering things.

GFoley83 avatar May 12 '20 01:05 GFoley83

@MCCshreyas I'm not following all the discussion, but couldn't the cache policy be made like:

AddInvalidationPolicyFor<Query, Command>(x => x.Id);

And the cache key be set like:

AddCacheKeyFor<Query>(x => x.Name + x.Id.ToString());

It seems like it would fit a simple-usage approach perfectly.

EDIT: I've tried some code to validate the idea and it came out a little like this:

    public static class CacheHelper
    {
        private static readonly Dictionary<Type, Delegate> transformations = new Dictionary<Type, Delegate>();

        public static void RegisterKeyTransformationMethodFor<TQuery>(Func<TQuery, string> keyGeneratorMethod)
        {
            transformations.Add(typeof(TQuery), keyGeneratorMethod);
        }

        public static string GetCacheKeyFor<TQuery>(TQuery element)
        {
            if (transformations.TryGetValue(typeof(TQuery), out Delegate method))
            {
                return (string)method.DynamicInvoke(element);
            }

            throw new Exception("Did you forget to register your query with the helper?");
        }
    }

This makes it very simple to register key generators with functions like

CacheHelper.RegisterKeyTransformationMethodFor<MyQuery>(x => "prefix" + x.Id + "other thing");

And retrieve a key with

CacheHelper.GetCacheKeyFor<MyQuery>(myQueryElement);

I think this makes for a really readable and customizable approach, without putting so much code in attributes

pcarvsilva avatar May 13 '20 20:05 pcarvsilva

Works nicely, but it would be great to be able to turn the cache off entirely, i.e. with both "UseInMemoryCache" and "UseRedisCache" set to false. I did not manage to get it working, even when skipping the registration of the caching "IPipelineBehavior" and the "IRequestPostProcessor":

public static IServiceCollection AddApplication(this IServiceCollection services, IConfiguration config)
{
    services.AddAutoMapper(Assembly.GetExecutingAssembly());
    services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly());
    services.AddMediatR(Assembly.GetExecutingAssembly());
    services.AddTransient(typeof(IPipelineBehavior<,>), typeof(AuthorisationBehavior<,>));
    services.AddTransient(typeof(IPipelineBehavior<,>), typeof(PerformanceBehaviour<,>));
    services.AddTransient(typeof(IPipelineBehavior<,>), typeof(ValidationBehaviour<,>));
    services.AddTransient(typeof(IPipelineBehavior<,>), typeof(UnhandledExceptionBehaviour<,>));

    if (config.GetValue<bool>("UseInMemoryCache") || config.GetValue<bool>("UseRedisCache"))
    {
        services.AddTransient(typeof(IPipelineBehavior<,>), typeof(CachingBehaviour<,>));
        services.AddTransient(typeof(IRequestPostProcessor<,>), typeof(CacheInvalidatorPostProcessor<,>));
    }

    services.AddScoped<InvalidateCacheForQueries>();

    return services;
}

Any ideas?

alcidesfmelof avatar Nov 01 '20 18:11 alcidesfmelof

@alcidesfmelof Ok, got it. I will get this implemented. Thanks!

ShreyasJejurkar avatar Nov 02 '20 10:11 ShreyasJejurkar

What's the state of this PR?

maikelvanhaaren avatar Mar 14 '21 21:03 maikelvanhaaren

@LeonardSpencer It's there for reference!

ShreyasJejurkar avatar Mar 15 '21 10:03 ShreyasJejurkar