What setup is everyone currently using? #934
Replies: 2 comments 5 replies
-
I usually translate R18 content, and Grok's filtering of generations is almost zero if you do a couple of things. I also train and test my own detector, because I'm usually not satisfied with the results of the existing ones. Google still has the smartest OCR. If there is no network, I use …
-
I have the main program running on a laptop, with a customized JSON module sending translation requests over LAN to another computer that runs the Sugoi 14B LLM with LangChain.
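For anyone curious what the laptop side of a setup like that can look like: a minimal sketch of the request/response handling, assuming a simple JSON-over-HTTP scheme. The field names (`text`, `source_lang`, `target_lang`, `translation`), the endpoint URL, and the port are all hypothetical, not the actual schema of the setup above.

```python
import json
import urllib.request

# Hypothetical payload a laptop-side client might send over LAN to a
# LangChain server wrapping the translation LLM. Field names are an
# assumption, not the real protocol of the setup described above.
def build_request(text: str, target_lang: str = "en") -> bytes:
    payload = {"text": text, "source_lang": "ja", "target_lang": target_lang}
    return json.dumps(payload, ensure_ascii=False).encode("utf-8")

def parse_response(raw: bytes) -> str:
    # Expect a JSON body like {"translation": "..."} from the server.
    return json.loads(raw.decode("utf-8"))["translation"]

def translate_over_lan(text: str, url: str = "http://192.168.1.50:8000/translate") -> str:
    # Fires the actual LAN request; the URL/port are placeholders.
    req = urllib.request.Request(
        url, data=build_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return parse_response(resp.read())
```

Keeping the OCR/detection pipeline and the LLM on separate machines like this means the GPU box only ever sees plain text requests, so either side can be swapped out independently.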
-
I've recently started experimenting with the translator again, now that I have access to a CUDA-capable GPU. I noticed several new features have been added since I last used the tool.
Right now, my current setup is:
- ysg
- yolo
- one_ocr
- lama_large_512px

I've seen some paid services delivering notably decent output, and was wondering if there is a more optimal configuration that produces similar results with minimal post-editing.
Any recommendations on models or pipelines that are working well for you?