
Enhance the cache ability

Open SimFG opened this issue 2 years ago • 6 comments

Issue: #113. Local test code:

import time

import guidance

# configure the cache backend and the default language model used to execute guidance programs
guidance.llms.caches.cache_creator = guidance.llms.caches.DiskCache.default_cache_creator
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# define a guidance program that adapts a proverb
program = guidance("""Tweak this proverb to apply to model instructions instead.

{{proverb}}
- {{book}} {{chapter}}:{{verse}}

UPDATED
Where there is no guidance{{gen 'rewrite' stop="\\n-"}}
- GPT {{gen 'chapter'}}:{{gen 'verse'}}""")

# execute the program on a specific proverb
start_time = time.time()
executed_program = program(
    proverb="Where there is no guidance, a people falls,\nbut in an abundance of counselors there is safety.",
    book="Proverbs",
    chapter=11,
    verse=14
)
print(executed_program)
print("consume time: ", time.time() - start_time)

# run the same program a second time; this should hit the cache and return much faster
start_time = time.time()
executed_program = program(
    proverb="Where there is no guidance, a people falls,\nbut in an abundance of counselors there is safety.",
    book="Proverbs",
    chapter=11,
    verse=14
)
print(executed_program)
print("consume time: ", time.time() - start_time)
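The timing comparison above relies on the second run hitting the cache instead of the OpenAI API. The idea can be illustrated with a minimal, self-contained sketch (the names `InMemoryLLMCache`, `make_key`, and `slow_llm_call` are hypothetical stand-ins, not guidance's actual implementation, which uses a disk-backed cache):

```python
import hashlib
import time


def make_key(prompt: str) -> str:
    """Derive a stable cache key from the prompt text."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


class InMemoryLLMCache:
    """Toy prompt-keyed cache; a disk-backed version would persist this dict."""

    def __init__(self):
        self._store = {}

    def get_or_compute(self, prompt, compute):
        key = make_key(prompt)
        if key not in self._store:
            self._store[key] = compute(prompt)  # miss: pay the full cost once
        return self._store[key]                 # hit: return instantly


def slow_llm_call(prompt):
    time.sleep(0.2)  # stand-in for a network round trip to the model
    return prompt.upper()


cache = InMemoryLLMCache()

start = time.time()
first = cache.get_or_compute("hello", slow_llm_call)
first_elapsed = time.time() - start

start = time.time()
second = cache.get_or_compute("hello", slow_llm_call)
second_elapsed = time.time() - start

print(first == second, second_elapsed < first_elapsed)
```

As in the test above, the second call returns the same result far faster because the expensive call is skipped entirely.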

Here is a screenshot of the second run's result (screenshot omitted).

SimFG avatar May 25 '23 14:05 SimFG

@microsoft-github-policy-service agree

SimFG avatar May 25 '23 15:05 SimFG

There are many changes in this PR; please help review it. Thanks, @slundberg!

SimFG avatar May 25 '23 15:05 SimFG

Thanks! Will review today.

slundberg avatar May 25 '23 15:05 slundberg

Looks good overall. Two questions/comments:

  1. Could you explain how people can change the cache backend depending on their preferences? In other words how do they set the cache? I had imagined assigning a cache object to .cache of either the class or the object, but it looks like you have a different setup that I would like to understand the motivation behind.
  2. We may want to move it out of guidance.llms but I am not sure about that yet

Thanks again!

slundberg avatar May 25 '23 18:05 slundberg

@slundberg Hi, thanks for your patient review.

  1. I struggled for a while with how to initialize the cache. Initially, I considered passing the cache as a parameter when initializing the "llm" object. However, if the cache lives at the object level, two identical "OpenAI llm" objects would each carry their own cache, which defeats the purpose of caching. Therefore, I decided to keep the cache at the class level and initialize it via a static assignment, like:
guidance.llms.caches.cache_creator = guidance.llms.caches.DiskCache.default_cache_creator
  2. Since this cache is designed specifically for handling "llm" requests, it is meaningless outside of the "llms" context. Thus, I placed the caches module in the "llms" directory.
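The class-level design described in point 1 can be sketched roughly as follows (all names here, `DiskCacheSketch`, `LLM`, `cache_creator`, are illustrative assumptions, not guidance's real classes; the point is only that the cache and the creator hook live at class/module level, so every instance of an LLM class shares one cache):

```python
class DiskCacheSketch:
    """Stand-in for a disk-backed cache; here just a dict keyed by prompt."""

    def __init__(self, name):
        self.name = name
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value


# Module-level hook: reassign this to swap the cache backend for all LLMs.
cache_creator = lambda llm_name: DiskCacheSketch(llm_name)


class LLM:
    _cache = None  # class-level, shared by all instances of a given subclass

    @classmethod
    def cache(cls):
        if cls._cache is None:
            cls._cache = cache_creator(cls.__name__)  # created lazily, once
        return cls._cache


class OpenAI(LLM):
    pass


a, b = OpenAI(), OpenAI()
a.cache()["prompt-1"] = "cached completion"
print(b.cache()["prompt-1"])  # both objects read the same class-level cache
```

With this shape, creating two identical `OpenAI` objects cannot produce two redundant caches, and changing `cache_creator` before first use switches the backend globally.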

SimFG avatar May 26 '23 00:05 SimFG

@slundberg I have resolved the conflict; please take a look.

SimFG avatar May 27 '23 00:05 SimFG

@slundberg Hi, do you have any more suggestions? Perhaps you can merge the PR now; the current version should not have any serious logic errors. 🤝

SimFG avatar May 30 '23 13:05 SimFG

Sorry, I was out for a day and am catching up. I will get back to this as soon as I can (tomorrow, I think), and help merge any conflicts I created with the last pushes.

slundberg avatar Jun 01 '23 04:06 slundberg

@slundberg I have handled the conflict; thanks for your reply 😆

SimFG avatar Jun 01 '23 04:06 SimFG