Prabod Rathnayaka

Results: 11 issues by Prabod Rathnayaka

I got the following error while running shap_interaction_values, but shap_values runs fine. I tried both lgb 2.2.2 and 2.2.3 with shap 0.28.5, and it returns the same error. ----> 1...

todo

Introducing MistralAI LLM models ## Description Mistral 7B is a 7.3-billion-parameter model that stands out for its efficient and effective performance in natural language processing, surpassing Llama 2 13B across...

new-feature
new model
DON'T MERGE

## Description OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all...

This PR introduces nomic embeddings to Spark NLP ## Description nomic-embed-text-v1 is an 8192-context-length text encoder that surpasses OpenAI text-embedding-ada-002 and text-embedding-3-small in performance on short- and long-context tasks....

new-feature
new model

## Description MiniCPM is a series of edge-side large language models, with the base model, MiniCPM-2B, having 2.4B non-embedding parameters. It ranks closely with Mistral-7B on comprehensive benchmarks (with better...

new-feature
new model
DON'T MERGE

## Description Meta AI has built a single AI model, [NLLB-200](https://ai.facebook.com/research/no-language-left-behind/), that is the first to translate across 200 different languages with state-of-the-art quality that has been validated through extensive...

new-feature
new model
DON'T MERGE

## Description This PR introduces the Qwen family of LLMs. Qwen is a comprehensive language model series; Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a...

new-feature
new model
DON'T MERGE

## Description Introducing Phi-2 Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists...

new-feature
new model
DON'T MERGE

This PR integrates the Llama 3 family of models into Spark NLP ## Description The Llama 3 release introduces a new family of pretrained and fine-tuned LLMs, ranging in scale from 8B to...

This PR introduces Phi-3 ## Description This pull request integrates the Phi-3 model into the existing suite of NLP tools. Phi-3 is a Transformer-based model with 3 billion parameters, optimized for...