xiasummer
I'm a Discourse user, and while using it I've found that many words cannot be found by search. These words all appear to be politically sensitive — in other words, "sensitive words". I asked which word tokenizer Discourse actually uses, and someone on the forum answered that it should be the jieba tool. I don't know whether it really is our tool, but if it is, I think it's worth saying: what we build is, after all, a foundational tool, and filtering sensitive words is not something we should be doing — downstream users can decide whether or not to filter, but our "foundational" tokenizer should provide complete functionality. ref https://meta.discourse.org/t/whats-the-word-tokenizer-for-different-languages-in-discourse/152893/2
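Tokenizer design aside, the point can be sketched: a dictionary-based forward-maximum-matching segmenter (the general family jieba belongs to) simply emits every word its dictionary recognizes, leaving any filtering decision to downstream code. This is a hypothetical illustration — the tiny word list below is an assumption for the demo, not jieba's real dictionary or algorithm:

```python
# Hypothetical sketch of dictionary-based forward maximum matching.
# The word list is an illustrative assumption, not jieba's dictionary.
DICTIONARY = {"基础", "分词", "工具", "过滤", "敏感", "敏感词"}
MAX_WORD_LEN = max(len(w) for w in DICTIONARY)

def forward_max_match(text):
    """Greedily take the longest dictionary word at each position;
    fall back to a single character when nothing matches."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(min(MAX_WORD_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in DICTIONARY:
                tokens.append(candidate)
                i += length
                break
    return tokens

print(forward_max_match("基础分词工具不过滤敏感词"))
# The segmenter outputs "敏感词" like any other word —
# it neither hides nor drops it.
```

Whether the indexer then drops certain tokens is a separate, downstream choice; the segmenter itself should stay neutral.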
I think you should email them to discuss such an API. Nowadays fewer and fewer web users use big desktop computers; they use mobile devices like the...
When I used `tinytex::xelatex('./my_test.tex')` to compile my test file, I found that everything worked except the bibtex part. It seems a file does not exist, but using other tools...
I'm not a LaTeX developer, but I can confirm that this template currently cannot be compiled with one click in TeXstudio. With help from other forum users, I later found that the .sty file was updated in TeX Live 2019, so there is nothing to be done about it.
I usually write a lot of formulas in Discourse, but I find it very slow once I have added too many formulas. If I could choose, I'd like to stop...
1. I saw this in your post on CommonMark. I find it very powerful, and it supports align and other important LaTeX constructs. https://talk.commonmark.org/t/mathjax-extension-for-latex-equations/698 I'm now using `rux-pizza/discourse-mathjax` and it does...