
[hybrid dev] HF GDN and Kimi Linear layers for vllm benchmarking

oleksost opened this pull request 3 months ago

✨ Description

Added Kimi Linear and gated DeltaNet (GDN) layers in order to create checkpoints for vLLM throughput benchmarking. This required updating transformers to a newer version (4.57.1).

Inference with the newly added layers using the transformers backend has not been tested.
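As a quick sanity check before loading the new layers, the transformers version requirement mentioned above can be verified at runtime. This is a minimal sketch, not part of the PR; the version-parsing helper is hypothetical and only handles plain numeric versions:

```python
# Minimal sketch: check that the installed transformers version meets
# the 4.57.1 requirement before attempting to use the new
# Kimi Linear / gated DeltaNet layers.
from importlib.metadata import version, PackageNotFoundError

REQUIRED = (4, 57, 1)

def parse_version(v: str) -> tuple:
    # Keep only the leading numeric components,
    # e.g. "4.57.1.dev0" -> (4, 57, 1).
    parts = []
    for piece in v.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)

try:
    installed = parse_version(version("transformers"))
    if installed < REQUIRED:
        print(f"transformers {installed} is older than required {REQUIRED}")
    else:
        print(f"transformers {installed} OK")
except PackageNotFoundError:
    print("transformers is not installed")
```

Tuple comparison handles the ordering correctly here (e.g. `(4, 56, 0) < (4, 57, 1)`), so no extra dependency like `packaging` is needed for this simple check.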

🔍 Type of change

Select all that apply:

  • [ ] 🐛 Bug fix (non-breaking change that addresses a specific issue)
  • [x] 🚀 New feature (non-breaking change that adds functionality)
  • [ ] ⚠️ Breaking change (a change that could affect existing functionality)
  • [ ] 📈 Performance improvement/optimization (improves speed, memory usage, or efficiency)
  • [ ] 🛠️ Code refactor (non-functional changes that improve code readability, structure, etc.)
  • [ ] 📦 Dependency bump (updates dependencies, including Dockerfile or package changes)
  • [ ] 📝 Documentation change (updates documentation, including new content or typo fixes)
  • [ ] 🔧 Infrastructure/Build change (affects build process, CI/CD, or dependencies)

📝 Changes

List the key changes introduced in this PR:

  1. Added Kimi Linear and gated DeltaNet (GDN) layers for Hugging Face checkpoint export.
  2. Updated transformers to version 4.57.1.

✅ Checklist

Make sure the following tasks are completed before submitting the PR:

General

  • [ ] 📜 I have read and followed the contributing guidelines.
  • [ ] 🏷️ I am using a clear and descriptive PR title that summarizes the key change or feature introduced.
  • [ ] 🎉 The functionality is complete, and I have tested the changes.
  • [ ] 📝 I have updated the documentation if needed.
  • [ ] ⚠️ The change does not introduce any new issues (e.g., runtime warnings, type checker errors, linting problems, unhandled edge cases).
  • [ ] 🧩 I have commented my code, especially in hard-to-understand areas.

Dependencies and Configuration

  • [ ] 🐋 I have updated the Docker configuration or dependencies, if applicable.
  • [ ] 🔄 I have ensured compatibility with the existing setup after dependency changes.

Testing

  • [ ] 🧪 I have added or updated tests to cover my changes.
  • [ ] ✔️ New and existing tests pass locally with my changes.
  • [ ] 🚦 I have tested these changes on GPUs and verified training stability.
  • [ ] 🏋️ I have tested the changes on realistic training workloads, if applicable.

Performance Impact

  • [ ] 📊 I have run benchmarks where applicable to evaluate the performance impact.
  • [ ] ✅ The benchmarks show no performance regression.
  • [ ] 🚀 The benchmarks indicate a potential performance improvement.
  • [ ] ⚠️ The benchmarks indicate a potential performance degradation.
  • [ ] 📈 I have provided benchmark results and detailed any performance impact below, if applicable.

📊 Performance Impact Details

If there is any impact on performance, describe it and provide benchmark results, if applicable:


🗒️ Additional Notes

Include any additional context, information, or considerations here, such as known issues, follow-up tasks, or backward compatibility concerns.

oleksost · Nov 17 '25 19:11