Feature Request: Consistent Markdown Rendering for LLM Responses
First off—this package is fantastic. The output quality is impressive, and the integration feels smooth. However, I’ve noticed some inconsistencies when trying to render responses in Markdown format, especially compared to how Meta AI handles it.
🧩 Problem
When the model is explicitly prompted to respond in Markdown (e.g., “Need the complete answer as <Markdown-code>”), it works well for some queries. But for more complex questions, like:
> Explain the transformer architecture and the equations involved in each step.
…the output tends to be partially formatted. Some sections are correctly rendered in Markdown, while others are lumped into a single code block, making it harder to parse or render cleanly.
Equations, in particular, are often returned in raw Markdown or LaTeX-like syntax, but not consistently wrapped or structured for proper rendering.
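For what it's worth, post-processing on the caller's side is possible but fragile. Here is a minimal sketch, assuming bare equation lines can be detected by the presence of a LaTeX command (the heuristic is illustrative, not part of the package):

```python
import re

# Heuristic: a line containing a LaTeX command (e.g. \frac, \sum)
# that is not already inside $$ fences is treated as a bare equation.
LATEX_CMD = re.compile(r"\\[a-zA-Z]+")

def wrap_bare_latex(markdown: str) -> str:
    out, in_math = [], False
    for line in markdown.splitlines():
        stripped = line.strip()
        if stripped == "$$":
            # Track existing math fences so we don't double-wrap.
            in_math = not in_math
            out.append(line)
        elif not in_math and LATEX_CMD.search(stripped):
            out.append(f"$$\n{stripped}\n$$")
        else:
            out.append(line)
    return "\n".join(out)
```

Built-in support for this would avoid that kind of guesswork entirely.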
📷 Example
Here’s a screenshot illustrating the formatting issue:
✅ Feature Request
It would be incredibly helpful if the package offered an option to:
- Enforce consistent Markdown formatting across the entire response
- Properly segment code blocks, equations, and text
- Optionally return structured formats (e.g., Markdown, HTML, JSON) for easier rendering downstream
This would make it much easier to integrate the output into front-end components or documentation pipelines.
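To make the structured-format idea concrete, here is a rough sketch of what the return type could look like (all names are hypothetical, purely to illustrate the request):

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical segment model (illustrative only, not an existing API):
# the response is split into typed segments that a front end can
# render individually instead of parsing one mixed string.
@dataclass
class Segment:
    kind: Literal["text", "code", "equation"]
    content: str
    language: str | None = None  # only set for kind == "code"

@dataclass
class StructuredResponse:
    format: Literal["markdown", "html", "json"]
    segments: list[Segment]

    def to_markdown(self) -> str:
        parts = []
        for seg in self.segments:
            if seg.kind == "code":
                parts.append(f"```{seg.language or ''}\n{seg.content}\n```")
            elif seg.kind == "equation":
                parts.append(f"$$\n{seg.content}\n$$")
            else:
                parts.append(seg.content)
        return "\n\n".join(parts)
```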
Thanks again for the great work :)
The response actually received is a long chain of JSON data, i.e. JSON chunks from a streamed response. The payload of each chunk is plain text containing newline characters, so that text has to be reassembled on the client side. I have tried to format the chunks internally, but the result is still just text with newlines. Let me check whether an endpoint exists that returns output already formatted as Markdown, as when using llama3 or 4.
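In the meantime, here is a minimal sketch of reassembling the streamed chunks on the client side, assuming each line of the stream is a JSON object with a text field (the URL, payload, and "response" key are placeholders, not the package's actual API):

```python
import json
import requests

# Minimal sketch of reassembling a streamed JSON-chunk response.
# The URL, payload shape, and the "response" field name are
# placeholders; adjust them to whatever the endpoint actually returns.
def collect_stream(url: str, payload: dict) -> str:
    pieces = []
    with requests.post(url, json=payload, stream=True) as r:
        r.raise_for_status()
        for line in r.iter_lines(decode_unicode=True):
            if not line:
                continue  # skip keep-alive blank lines
            chunk = json.loads(line)
            pieces.append(chunk.get("response", ""))
    # The chunks already contain the newline characters, so a plain
    # join reproduces the full text the model produced.
    return "".join(pieces)
```

Even with this, the joined text still has the mixed-formatting problem described above, which is why an endpoint that guarantees Markdown output would help.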