I'm using Ollama to run my models. I want to use the Mistral model but create a LoRA so it acts as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios.
https://www.reddit.com/r/LocalLLaMA/comments/18mxuq0/training_a_model_with_my_own_data/
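
Ollama itself doesn't train adapters, so one plausible route for the step described above is Hugging Face's peft library; the resulting adapter can later be converted to GGUF and attached through a Modelfile ADAPTER line. A minimal sketch, assuming a Mistral base checkpoint and a hypothetical procedures.jsonl file of {"text": ...} records; every hyperparameter here is illustrative, not prescriptive:

    # Hypothetical LoRA fine-tune of a Mistral base model on your own
    # procedure/diagnostics data (procedures.jsonl is a made-up file name).
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "mistralai/Mistral-7B-v0.1"
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

    # Attach low-rank adapters to the attention projections only.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

    data = load_dataset("json", data_files="procedures.jsonl", split="train")
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                    remove_columns=data.column_names)

    Trainer(model=model,
            args=TrainingArguments("mistral-lora", num_train_epochs=3,
                                   per_device_train_batch_size=1,
                                   gradient_accumulation_steps=8,
                                   learning_rate=2e-4, fp16=True),
            train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()

    model.save_pretrained("mistral-lora")  # adapter weights only, not the base

Targeting only the attention projections keeps the adapter small, which suits an assistant that mainly needs to recall the supplied procedures.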


For me, Ollama provides basically three benefits. Working with sensitive data: I work at a bank, and being able to use an LLM for data processing without exposing the data to any third parties is the only way to do it. Ollama (and basically any other locally run LLM) doesn't let the data I'm processing leave my computer. Censorship …
https://www.reddit.com/r/ollama/comments/1alu7p3/why_should_i_use_ollama_when_there_is_chatgpt_and/
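
The "nothing leaves my computer" property is easy to see in practice: Ollama's API is just a listener on localhost. A minimal sketch against the documented /api/generate endpoint; the model name and prompt are placeholders:

    # Send a prompt to the local Ollama server; the request never
    # leaves the machine.
    import requests

    resp = requests.post("http://localhost:11434/api/generate",
                         json={"model": "mistral",
                               "prompt": "Summarize this internal report: ...",
                               "stream": False})
    resp.raise_for_status()
    print(resp.json()["response"])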


Hey guys, I mainly use my models through Ollama, and I am looking for suggestions when it comes to uncensored models I can use with it. Since there are a lot already, I feel a bit overwhelmed. For me, the perfect model would have the following properties …
https://www.reddit.com/r/LocalLLaMA/comments/1d9amxf/what_is_the_best_small_4b14b_uncensored_model_you/


I'm currently downloading Mixtral 8x22B via torrent. Until now I've always run "ollama run somemodel:xb" (or pull). So once those >200 GB of glorious …
https://www.reddit.com/r/ollama/comments/1c18kr2/how_to_manually_install_a_model/
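
For weights downloaded outside the registry (such as a GGUF file from a torrent), the usual route is a one-line Modelfile plus "ollama create". A sketch scripting both steps; the file and model names are placeholders:

    # Register a locally downloaded GGUF file with Ollama.
    import pathlib
    import subprocess

    gguf = pathlib.Path("mixtral-8x22b.Q4_K_M.gguf").resolve()
    pathlib.Path("Modelfile").write_text(f"FROM {gguf}\n")

    # 'ollama create' imports the weights into Ollama's own store under a
    # name that 'ollama run' understands.
    subprocess.run(["ollama", "create", "mixtral:8x22b", "-f", "Modelfile"],
                   check=True)
    subprocess.run(["ollama", "run", "mixtral:8x22b", "Hello!"], check=True)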


OK, so Ollama doesn't have a stop or exit command. We have to manually kill the process, which is not very useful, especially because the server respawns immediately. So there should be a stop command as well. Edit: yes, I know and use these commands. But these are all system commands, which vary from OS to OS. I am talking about a single command.
https://www.reddit.com/r/ollama/comments/1arbbe0/request_for_stop_command_for_ollama_server/
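
Until a built-in command exists, those per-OS kills are the only option; a sketch that wraps the common cases behind one entry point. It assumes the default install layout (a systemd unit named "ollama" on Linux, the menu-bar app on macOS, a process called ollama.exe on Windows), any of which may differ on a given setup:

    # Best-effort, OS-specific 'ollama stop' replacement.
    import platform
    import subprocess

    system = platform.system()
    if system == "Linux":
        # The Linux installer registers a systemd unit, which is why a
        # plain kill respawns the server immediately.
        subprocess.run(["sudo", "systemctl", "stop", "ollama"], check=True)
    elif system == "Darwin":
        # On macOS, quitting the menu-bar app stops the bundled server.
        subprocess.run(["osascript", "-e", 'quit app "Ollama"'], check=True)
    else:
        # Windows (process name is an assumption; the tray app may restart it).
        subprocess.run(["taskkill", "/IM", "ollama.exe", "/F"], check=True)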


I've just installed Ollama on my system and chatted with it a little. Unfortunately, the response time is very slow even for lightweight models like …
https://www.reddit.com/r/ollama/comments/1b35im0/ollama_gpu_support/
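
One way to put a number on "very slow" is the timing metadata that /api/generate returns, which makes it obvious whether the GPU is doing any work. A sketch; the model name is a placeholder:

    # Compute tokens/second from Ollama's own timing metadata.
    import requests

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "mistral",
                            "prompt": "Why is the sky blue?",
                            "stream": False}).json()

    # eval_count = tokens generated; eval_duration is in nanoseconds.
    print(f"{r['eval_count'] / (r['eval_duration'] / 1e9):.1f} tokens/s")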


How to make Ollama faster with an integrated GPU? I decided to try out Ollama after watching a YouTube video. The ability to run LLMs locally, with fast output, appealed to me. But after setting it up on my Debian machine I was pretty disappointed. I downloaded the codellama model to test. I asked it to write a C++ function to find prime ...
https://www.reddit.com/r/ollama/comments/1b9hx3w/how_to_make_ollama_faster_with_an_integrated_gpu/
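
Most integrated GPUs aren't supported by Ollama at all, but when a GPU is detected and the bottleneck is too few offloaded layers, the per-request num_gpu option controls the split. A sketch; the layer count is illustrative, and the right value depends on the model and available VRAM:

    # Ask Ollama to offload more layers to the GPU for this request.
    import requests

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "codellama",
                            "prompt": "Write a C++ function that tests primality.",
                            "stream": False,
                            "options": {"num_gpu": 20}})  # layers to offload
    print(r.json()["response"])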


Yes, I was able to run it on an RPi. Ollama works great. Mistral and some of the smaller models work; LLaVA takes a bit of time, but works. For text-to-speech you'll have to run an API from ElevenLabs, for example. I haven't found a fast text-to-speech / speech-to-text pair that's fully open source yet. If you find one, please keep us in the loop.
https://www.reddit.com/r/robotics/comments/1byzeie/local_ollama_text_to_speech/
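
A fully local (if robotic-sounding) stopgap is to pipe Ollama's reply through an offline engine such as pyttsx3, which wraps eSpeak on Linux and so also runs on a Pi. A sketch, with the model and prompt as placeholders:

    # Speak Ollama's answer with an offline TTS engine; nothing leaves
    # the machine.
    import pyttsx3
    import requests

    reply = requests.post("http://localhost:11434/api/generate",
                          json={"model": "mistral",
                                "prompt": "Say hello in one sentence.",
                                "stream": False}).json()["response"]

    engine = pyttsx3.init()
    engine.say(reply)
    engine.runAndWait()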


To get rid of the model, I needed to install Ollama again and then run "ollama rm llama2". It should be transparent where it installs models, so I can remove them later.
https://www.reddit.com/r/ollama/comments/193kscz/how_to_uninstall_models/
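
Reinstalling shouldn't be necessary: the same management is exposed over the local REST API, which also answers the transparency complaint (on Linux and macOS the blobs live under ~/.ollama/models by default). A sketch that lists installed models and removes one:

    # List installed models, then delete one, via the local REST API.
    import requests

    tags = requests.get("http://localhost:11434/api/tags").json()
    for m in tags["models"]:
        print(m["name"], m["size"])

    requests.delete("http://localhost:11434/api/delete",
                    json={"name": "llama2"}).raise_for_status()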


Stop Ollama from running on the GPU: I need to run Ollama and Whisper simultaneously. As I have only 4 GB of VRAM, I am thinking of running Whisper on the GPU and Ollama on the CPU. How do I force Ollama to stop using the GPU and only use the CPU? Alternatively, is there any way to force Ollama not to use VRAM?
https://www.reddit.com/r/ollama/
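
Per request, setting the num_gpu option to 0 keeps every layer on the CPU, leaving the VRAM free for Whisper. A sketch; the model and prompt are placeholders:

    # Keep Ollama entirely on the CPU so Whisper can have the 4 GB of VRAM.
    import requests

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "mistral",
                            "prompt": "Summarize this transcript: ...",
                            "stream": False,
                            "options": {"num_gpu": 0}})  # 0 = no GPU offload
    print(r.json()["response"])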