Sathya R

Results: 4 issues by Sathya R

I can successfully export the seq2seq library based model and use it in TensorFlow Serving. Note that when it works successfully, I have **turned off** beam search by supplying `FLAGS_model_params["inference.beam_search.beam_width"] = 0`...
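
For context, here is a minimal sketch of the override described above, assuming the export script exposes the google/seq2seq model params as a plain dict (as the `FLAGS_model_params` name suggests); the surrounding flag parsing and export plumbing are omitted:

```python
# Sketch only: this dict stands in for whatever the export script parses the
# model-params flag into. A beam width of 0 turns beam search off, so the
# exported inference graph uses greedy decoding, which is the configuration
# the issue reports working under TensorFlow Serving.
model_params = {
    "inference.beam_search.beam_width": 0,  # 0 => beam search disabled
}
# model_params would then be merged into the model's hyperparameters before
# building and exporting the inference graph.
```
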

Feature description: Proposal to get TP, FP, FN for all targets. At a [certain point](https://github.com/chakki-works/seqeval/blob/2921931184a98aff0dbbda5ff943214fe50a7847/seqeval/metrics/v1.py#L134), we have three different arrays covering all the targets: `tp_sum`, `pred_sum`, `true_sum`, **e.g.,** ```...
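
As a rough illustration of the proposal (not seqeval's actual API), the per-target counts could be derived from those three arrays, assuming they follow the usual convention where `pred_sum = TP + FP` and `true_sum = TP + FN`:

```python
import numpy as np

def tp_fp_fn(tp_sum: np.ndarray, pred_sum: np.ndarray, true_sum: np.ndarray):
    """Hypothetical helper: derive per-target TP/FP/FN from the arrays
    that seqeval already computes internally at the linked location."""
    tp = tp_sum               # correctly predicted entities per target
    fp = pred_sum - tp_sum    # predicted entities not present in the gold data
    fn = true_sum - tp_sum    # gold entities that were not predicted
    return tp, fp, fn
```
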

To do `store.search()`, an embedding is needed. For that, embedding configs need to be passed here: https://github.com/langchain-ai/langgraph/blob/d73902ae76f106525ef33aa8bfd66b65a93496b3/libs/checkpoint-postgres/langgraph/store/postgres/aio.py#L184

* postgres may accept a `PostgresIndexConfig`, which is a subclass of `IndexConfig` (see the sketch below)
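
A minimal sketch of what passing the embedding config could look like. The `index=` keyword, the exact config keys, and the embedding model used here are assumptions for illustration (this wiring is exactly what the issue asks for), not a confirmed API:

```python
# Assumed usage sketch; the index/embedding plumbing is the point of the issue,
# so the parameter names below are hypothetical.
import asyncio
from langchain_openai import OpenAIEmbeddings
from langgraph.store.postgres.aio import AsyncPostgresStore

index_config = {
    "dims": 1536,                                        # embedding size (assumed)
    "embed": OpenAIEmbeddings(model="text-embedding-3-small"),
    "fields": ["text"],                                  # fields to embed (assumed)
}

async def main() -> None:
    async with AsyncPostgresStore.from_conn_string(
        "postgresql://user:pass@localhost:5432/db",
        index=index_config,   # assumed hook for PostgresIndexConfig
    ) as store:
        await store.setup()
        # Without an embedding config, a semantic query here would have
        # nothing to embed the query text with.
        results = await store.asearch(("docs",), query="how do I configure search?")
        print(results)

asyncio.run(main())
```
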