
[Feature] ISS-60: Implement Self Extend

Open jonpsy opened this issue 1 year ago • 18 comments

#60

  • [x] Single Query case (MHA and GQA)
  • [x] Batch query case (MHA and GQA)
  • [ ] Eval strategy defined
  • [ ] Test suite written
  • [x] Main dev done

jonpsy avatar Oct 18 '24 02:10 jonpsy

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

google-cla[bot] avatar Oct 18 '24 02:10 google-cla[bot]

@jan-wassenberg, I'd need your CR here; it's in the alpha stage right now. Let's go back and forth on this. Thanks!

jonpsy avatar Oct 18 '24 02:10 jonpsy

Ooh nice :) Please note that we can only take pull requests on the dev branch. That code has just changed to replace template arguments with a runtime argument. Would you mind updating/rebasing your code to that?

jan-wassenberg avatar Oct 18 '24 17:10 jan-wassenberg

My bad, let me do the needful! Thanks for the pointer though.

jonpsy avatar Oct 19 '24 06:10 jonpsy

Haha, I took so long to understand how the main branch worked, and now I have to re-do it on this new base branch.

jonpsy avatar Oct 19 '24 06:10 jonpsy

Note to self: I was able to compile the gemma dev branch by commenting out the tls.stats.Notify(stats); line in compress-inl.h, using clang for arm64-apple-darwin23.4.0.

I had to do this because the compiler strictly rejects passing non-trivial arguments to a variadic function. I might have been able to disable that diagnostic with -Wno-non-pod-varargs, but it didn't work for me.

^^ This issue should be resolved now
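For reference, a minimal reproduction of the diagnostic (this Stats type is a hypothetical stand-in, not the actual one in compress-inl.h):

#include <cstdarg>
#include <cstdio>
#include <string>

// Hypothetical stand-in for the stats object; the std::string member
// makes the type non-trivial.
struct Stats {
  std::string name;
  double sum = 0.0;
};

void Log(const char* fmt, ...) {
  va_list args;
  va_start(args, fmt);
  std::vprintf(fmt, args);
  va_end(args);
}

int main() {
  Stats stats{"distortion", 1.0};
  // clang rejects this: cannot pass object of non-trivial type 'Stats'
  // through variadic function [-Wnon-pod-varargs]
  // Log("%f\n", stats);
  Log("%f\n", stats.sum);  // passing a trivial member is fine
  return 0;
}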

jonpsy avatar Oct 19 '24 08:10 jonpsy

Nit: I noticed that when I ran clang-format on gemma-inl.h, some already-committed lines were re-formatted. For now I've kept the committed lines as-is and only formatted my own code.

I suppose the team has some internal flow for running clang-format on the entire repo before merging that I'm not aware of, so I'll leave that decision to you.

jonpsy avatar Nov 01 '24 14:11 jonpsy

No worries, either the result of clang-format or leaving it unchanged is fine. Our IDEs indeed auto-format on saving.

jan-wassenberg avatar Nov 01 '24 14:11 jan-wassenberg

Hm, looks like a static_cast is failing on GCC/Ubuntu.

jonpsy avatar Nov 01 '24 18:11 jonpsy

Hm, the build error I'm currently seeing seems to be due to an extra/unnecessary & in line 309, "const hwy::Divisor&". We want to just construct one instance, not a reference.

jan-wassenberg avatar Nov 04 '24 09:11 jan-wassenberg

EDIT (IGNORE BELOW): This took longer than expected, but here's a tech doc; it will be easier to collaborate there. I pasted the same comment into it. Let me know if I got your email wrong?

@jan-wassenberg Let me highlight an issue here:

Summary: I want to mutate ModelConfig at runtime.

Background: currently, run.cc matches the input model against the predefined models (kModelFlags) defined in common.cc. There's also a comment about loading ModelConfig from the model itself in the future. This makes the config highly static.

Let's say I want to run Gemma 2B with self-extend and some group size. I should be able to configure this at runtime, without hard-coding anything in the config file; that's the entire point of the paper (being able to increase the context window at inference time).

My proposal:

Approach 1: Modification of ModelConfig on runtime

A basic approach would be to allow modification of ModelConfig via RuntimeConfig, consuming only the necessary parameters from it. Then we could define:

class ModelConfig {
 public:
  // Validate and consume the self-extend params, modifying the model config.
  void MutateModelConfig(const RuntimeConfig& runtime_config) {
    // ... some validation ...
    this->layer_configs = runtime_config.layer_configs;
  }
};

Pros:

  • No changes on the Gemma side, and the definition of "ModelConfig" remains intact (i.e. it defines model behaviour).
  • Leaves leeway for future runtime changes to ModelConfig behaviour, which I don't think is far-fetched.

Cons:

  • Work done vs Reward ratio seems weak

Approach 2: Create member variables for Gemma

  • Basically, allow Gemma to consume specific values from runtime_config and store them in member variables, i.e. self_extend_, ngb_size_, grp_size_ (see the sketch after this list).

Pros:

  • Simple to implement
  • Keeps the sanctity of "runtime" behaviour

Cons:

  • Additional member variables (Should we create a class to hold generic behaviour altering variables like these?)
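A rough sketch of Approach 2 (the names and the RuntimeConfig fields here are illustrative stand-ins, not the actual gemma.cpp API):

#include <cstddef>

// Hypothetical stand-in: only the fields this sketch needs.
struct RuntimeConfig {
  bool self_extend = false;
  size_t se_neighbor_size = 0;
  size_t se_group_size = 1;
};

class Gemma {
 public:
  // Cache the self-extend knobs instead of touching ModelConfig.
  void SetSelfExtend(const RuntimeConfig& runtime_config) {
    self_extend_ = runtime_config.self_extend;
    ngb_size_ = runtime_config.se_neighbor_size;
    grp_size_ = runtime_config.se_group_size;
  }

 private:
  bool self_extend_ = false;
  size_t ngb_size_ = 0;
  size_t grp_size_ = 1;
};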

jonpsy avatar Nov 05 '24 17:11 jonpsy

hm, I understand. FYI ModelConfig has some larger changes coming up in order to allow serializing it to disk. Let's therefore minimize the number of changes to ModelConfig itself. How about in ModelWeightsStorage and Gemma we add a MutableModelConfig() accessor function that returns a non-const reference that we can modify?

jan-wassenberg avatar Nov 06 '24 14:11 jan-wassenberg

Hi @jan-wassenberg, thanks for the prompt reply. I suppose you mean we do something similar to how config is being accessed currently?

// gemma.h
ModelConfig& GetMutableConfig() { return model_.MutableConfig(); }
// weights.h
ModelConfig& MutableConfig() { return config_; }

In LayerConfig, define it as

// configs.h
class LayerConfig {
 public:
  /**
   * Self-extend
   * Jin, Hongye, et al. "LLM Maybe LongLM: Self-Extend LLM Context Window
   * Without Tuning." arXiv preprint arXiv:2401.01325 (2024).
   */
  bool self_extend = false;
  // Self-extend neighbor size
  size_t se_neighbor_size = std::numeric_limits<size_t>::max();
  // Self-extend group window size
  size_t se_group_size = 1;
};
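For context, these two knobs drive the paper's position remapping. A sketch of my understanding, which the attention code would apply to relative positions (the actual implementation in this PR may differ):

#include <cstddef>

// Self-Extend (Jin et al. 2024): relative positions within the neighbor
// window are kept as-is; farther positions are merged into groups so the
// effective range still fits inside the trained context window.
size_t SelfExtendPos(size_t pos, size_t se_neighbor_size,
                     size_t se_group_size) {
  if (pos < se_neighbor_size) return pos;  // normal attention
  // Grouped attention, shifted so the two regions join up continuously.
  return pos / se_group_size + se_neighbor_size -
         se_neighbor_size / se_group_size;
}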

In our args we should allow storing the self-extend params, something like this:

struct LoaderArgs {
  // Self-extend
  Tristate self_extend;
  size_t se_group_size;
  size_t se_neighbor_size;
};

Finally, in run.cc we can access the mutable config and fill in the parameters from LoaderArgs:

// run.cc

void ApplySelfExtendIfGiven(Gemma& model, const LoaderArgs& loader) {
  if (loader.self_extend != Tristate::kTrue) {
    return;
  }
  ModelConfig& config = model.GetMutableConfig();

  // Modify the layer configs in-place.
  auto& layer_configs = config.layer_configs;
  std::transform(layer_configs.begin(), layer_configs.end(),
                 layer_configs.begin(), []() {
    layer_config.self_extend = true;
    layer_config.se_group_size = loader.se_group_size;
    layer_config.se_neighbor_size = loader.se_neighbor_size;
  });
}

void Run(LoaderArgs& loader, InferenceArgs& inference, AppArgs& app) {
  // post CreateGemma
  Gemma model = CreateGemma(loader, pools);
  ApplySelfExtendIfGiven(model, loader);
  // ...
}

⚠️ There's a minor issue here: LayerWeightsPtrs holds a const ref to LayerConfig, so it will hold on to the previous version. I don't see it being used currently, though.

jonpsy avatar Nov 10 '24 15:11 jonpsy

Nice, this looks good to me, except that the lambda's layer_config argument seems to have been omitted?
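Something like this would complete it (just a sketch; the loader also needs capturing):

std::transform(layer_configs.begin(), layer_configs.end(),
               layer_configs.begin(), [&loader](LayerConfig layer_config) {
                 layer_config.self_extend = true;
                 layer_config.se_group_size = loader.se_group_size;
                 layer_config.se_neighbor_size = loader.se_neighbor_size;
                 return layer_config;
               });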

LayerWeightsPtrs.layer_config is used by Reshape in weights.h. I believe this is fine: the LayerConfigs are stored in a vector owned by ModelConfig, and we can modify them there. Any existing const reference to them can be thought of as a pointer, so it will see any updates to the underlying storage made via your new Mutable() accessor function. Does that make sense?
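In other words, as a self-contained illustration:

#include <cassert>
#include <cstddef>
#include <vector>

struct LayerConfig { size_t se_group_size = 1; };  // stand-in

int main() {
  std::vector<LayerConfig> layer_configs(1);   // owned by ModelConfig
  const LayerConfig& view = layer_configs[0];  // like LayerWeightsPtrs' ref
  layer_configs[0].se_group_size = 2;          // mutated via the owner
  assert(view.se_group_size == 2);             // the const ref sees the update
  return 0;
}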

jan-wassenberg avatar Nov 14 '24 16:11 jan-wassenberg

Woah, that's an interesting insight :) and yes I see the use of layer_config as well. Let me spin this up real quick!

jonpsy avatar Nov 15 '24 14:11 jonpsy

Okay! Done with the changes, and it's working both with and without the runtime config 🍾

Tried it on a sample, and it completely fails 😄. I highly doubt it has to do with the pos value I'm modifying.

Moving on: unfortunately I'm unable to run LongLM because I'm on a Mac and it has some issues with the flash_attn module. If I could compare against it, that'd be great.

jonpsy avatar Nov 19 '24 18:11 jonpsy

Example: Input prompt

Here are a major wars have a global warming temperatures could lead to the environment, the environment, the global warming temperatures could lead to the threat, the likelihood of a lack of a lack of a lack of a threat from the lack of a lack of the lack of the of the of a lack of war, the of a threat, the lack of the lack of the global conflicts could lead to the lack of the presence of a lack of a lack of an increase in order and that a threat, the lack of a lack of the global instability and that a global instability and that a threat, the of past conflicts are the of the of war could lead to war could be high likelihood of instability and that a threat, the of war could lead to be high.

Output prompt:

Here are the exact details of a global governance, the exact details of the likelihood of a global governance, the likelihood of a comprehensive and the potential for example of the lack of a country could lead to the lack of a nation's, the lack of a global warming temperatures could lead to the threat.

**Here are a major wars have a global warming temperatures could lead to the environment, the environment, the global warming temperatures could lead to the threat, the likelihood of a lack of a lack of a lack of a threat from the lack of a lack of the lack of the of the of a lack of war, the of a threat, the lack of the lack of the global conflicts could lead to the lack of the presence of a lack of a lack of an increase in order and that a threat, the lack of a lack of the global instability and that a global instability and that a threat, the of past conflicts are the of the of war could lead to war could be high likelihood of instability and that a threat, the of war could lead to be high.

self_extend: true, se_group_size: 2, se_neighbor_size: 4

jonpsy avatar Nov 19 '24 20:11 jonpsy

Nice, your code looks good to me! Hm, how should we understand the example? What are the input and output prompts? It does look like we're losing coherency :/ That's also more likely with smaller models; is it the 2B?

jan-wassenberg avatar Nov 20 '24 18:11 jan-wassenberg