Bao Nguyen

Results: 24 comments by Bao Nguyen

Hi all, will this fix be available soon?

I think this is because of this line: https://github.com/Leaflet/Leaflet.heat/blob/gh-pages/src/HeatLayer.js#L141. Leaflet.heat recalculates the heat value based on the zoom level, which I don't think is the right behavior.

I found the issue: if we don't supply an SSL context, it will use the default SSL context, which enables h2 by default: https://github.com/python-hyper/hyper/blob/bc0738bdf7afdc1236e8154868352b58850fdf1a/hyper/tls.py#L114 The only way to force HTTP/1.1...
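
For what it's worth, here is a rough sketch of the workaround I mean: build your own `ssl.SSLContext`, advertise only `http/1.1` via ALPN, and pass it to the connection so the server never negotiates h2. The host is just a placeholder, and this assumes hyper's `HTTPConnection` honors a user-supplied `ssl_context`:

```
import ssl
from hyper import HTTPConnection

# Build our own TLS context instead of letting hyper create the default one,
# and advertise only HTTP/1.1 via ALPN so h2 is never negotiated.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(['http/1.1'])

# example.com is a placeholder host; pass the custom context explicitly.
conn = HTTPConnection('example.com', 443, secure=True, ssl_context=ctx)
conn.request('GET', '/')
resp = conn.get_response()
print(resp.status)
```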

There is an attempt to port it to React here: https://www.npmjs.com/package/react-peity

6 years later... I guess it's never coming.

I have the same problem with `load_qa_chain`:

```
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
# chain({"input_documents": docs, "question": query})
chain.run(input_documents=docs, question=query, token_max=2000)
```

Where does the limit come from?

It is because of Japanese: Japanese text consumes noticeably more tokens per character, so the context window fills up faster. You can set `max_tokens` on the `LLM` object like below:

```
llm = OpenAI(
    temperature=0,
    max_tokens=self.max_tokens,
    openai_api_key=self.openai_api_key,
    batch_size=2,
)
```
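
For completeness, here is a minimal end-to-end sketch that puts the two snippets above together. The 256-token cap, the sample document, and the question are placeholders; it assumes the legacy `langchain` imports and an `OPENAI_API_KEY` in the environment:

```
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document

# Placeholder documents and question; replace with your own data.
docs = [Document(page_content="LangChain splits long inputs across map-reduce calls.")]
query = "How does the map_reduce chain handle long inputs?"

# Cap the completion length on the LLM itself so that prompt + completion
# stays inside the model's context window.
llm = OpenAI(temperature=0, max_tokens=256)

chain = load_qa_chain(llm, chain_type="map_reduce")
answer = chain.run(input_documents=docs, question=query)
print(answer)
```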

Do you know if there is a page somewhere I can add this documentation to, so that someone can find the answer next time?