Wednesday, 27 August 2025

Llamacpp AI Chatbot WebUI

Posted by satria at 03:30 | No comments


🧠 Offline AI Chat & Coding GUI (Rakyat Edition)

🏆 "LLaMA.CPP Rakyat Edition: the ChatGPT experience in kilobytes." 🏆 "Your Own GPT Pro: Local, Open-Source & Fully Customizable" 💻 "Offline AI Chatbot for Everyone: LlamaCPP GUI Rakyat Edition" "LlamaGPT WebUI Rakyat Deluxe"

🇮🇩 This project is primarily documented in Indonesian. 🇬🇧 An English overview is provided below. This project is based on the original LLaMA GUI by Satria Novian.

📌 Links to buy the Llamacpp AI Chatbot WebUI are below:
💸 Price: only Rp10,000 (about $2)

Lynk.id: http://lynk.id/satrianovian20/vz921epvxqex

Gumroad: https://satrianovian.gumroad.com/l/bpzai


🇬🇧 English Version:
Opening:
“Not everyone can afford $20/month for ChatGPT Pro.
We offer a low-cost, one-time payment solution that's offline, lightweight, runs on mid-range PCs, and guarantees 100% privacy.”

Feature highlights:
📌 Single Chat WebUI

✨ ChatGPT Pro-like experience, but offline & affordable!

📌 Chat like ChatGPT – familiar UI, clear user/AI roles, edit, export, and resend options.
⚙️ Full control – adjust tokens, temperature, top_p, context size, and AI roles (teacher, writer, coder).
🎨 Customizable – GPT UI themes, clear chat history, export chat to txt and docx.
📦 Easy model management – load or switch LLaMA.CPP models anytime.
🔒 100% private – all data stays on your PC, no internet needed.
💸 One-time payment → lifetime access (no subscription).
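Internally, a GUI like this typically drives the llama.cpp command-line binary with the sampling settings listed above. The post does not show the product's actual wiring, so the sketch below is hypothetical; the flag names follow the upstream `llama-cli` options (`-m`, `-p`, `-n`, `-c`, `--temp`, `--top-p`), while the executable path and function names are illustrative assumptions:

```python
import subprocess

def build_llama_command(model_path, prompt,
                        max_tokens=256, temperature=0.7,
                        top_p=0.9, ctx_size=2048):
    """Assemble a llama.cpp CLI invocation from GUI sampling settings.

    Flag names match upstream llama-cli; the executable location
    ("./llama-cli") is an assumption for illustration.
    """
    return [
        "./llama-cli",
        "-m", model_path,          # GGUF model file
        "-p", prompt,              # user prompt (with role/system text)
        "-n", str(max_tokens),     # max tokens to generate
        "-c", str(ctx_size),       # context window size
        "--temp", str(temperature),
        "--top-p", str(top_p),
    ]

def run_chat(model_path, prompt, **sampling):
    """Run one generation and return the raw text reply."""
    cmd = build_llama_command(model_path, prompt, **sampling)
    # capture_output keeps stdout so the reply can be shown in the chat UI
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout
```

Because every sampling knob is just a CLI flag, exposing "full control" in a GUI reduces to mapping sliders and text fields onto this argument list.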


📌 Dependency links needed to run the Llamacpp AI Chatbot GUI:
Internal File / File Internal: https://drive.google.com/file/d/1v0itlp6pbwyARNsdVnk5TxULhPpFF4RI/view?usp=drive_link
Llamacpp Build Releases: https://github.com/ggml-org/llama.cpp/releases
GGUF models via Hugging Face

📌 Installation Guide:
- Place the exe and internal files inside the llama.cpp build release folder (the same folder that contains the llama.cpp binaries).
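After copying, the layout can be sanity-checked with a short script. The file names below are placeholders; substitute the actual exe and internal file names from your download:

```python
from pathlib import Path

def check_install(release_dir, required=("llama-cli.exe",)):
    """Return the required files missing from the llama.cpp release
    folder; an empty list means the layout looks correct.

    The default file name is a placeholder, not the product's
    actual file name.
    """
    root = Path(release_dir)
    return [name for name in required if not (root / name).exists()]
```

Running it against the release folder before first launch avoids the most common setup mistake: the GUI sitting outside the folder that holds the llama.cpp binaries.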

📸 Screenshots:

🎥 Video:


🇬🇧 English Version:

✅ FAQ — Frequently Asked Questions (Trust Booster Edition) 

❓ Can this GUI really run a 13B model without a GPU? ✅ Yes! Successfully tested with llama-2-13b-chat.Q4_K_M.gguf on:

💻 CPU: Intel i5-9400F (no iGPU)

🧠 RAM: 16GB DDR4

⚙️ Backend: llama.cpp

📦 GUI: Llamacpp AI Chatbot GUI

❓ Where’s the real proof? 📸 Screenshots during model load and idle are uploaded to docs/screenshots/ 📄 Complete 13B model session logs available in docs/session-logs/ ✅ No errors, no crashes. Just slight delay under heavy processing — perfectly normal.

❓ Is this GUI heavy? ❌ Not at all. It’s just 10KB. No bloated dependencies like Gradio or Electron. ✔️ No random ports. No tracking. ✔️ 100% offline and local. ✔️ Based purely on Tkinter.

❓ Can I use other 7B, 8B, or 13B models? ✅ Absolutely! Already tested with:

Mistral 7B

DeepSeek Coder 6.7B

DeepSeek Coder 7B

Nous Hermes 13B (Q4_K_M)

LLaMA 13B (Q4_K_M)

❓ I only have 8GB RAM, will it work? ✅ Yes, just use smaller models like:

TinyLlama 1.1B Q8_0

DeepSeek Coder 1.3B Q8_0

Mistral 4B Q8_0

Open Hermes 7B Q4_K_M

🛠️ In the GUI settings, keep max_tokens and context size low enough to fit your available RAM.
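A rough way to reason about the advice above: resident memory is roughly the GGUF file size (the weights are memory-mapped) plus a KV-cache term that grows with context length, plus OS headroom. The constants in this back-of-the-envelope helper are coarse rules of thumb, not measurements from this project:

```python
def fits_in_ram(model_file_gb, ram_gb, ctx_size=2048,
                overhead_gb_per_4k_ctx=1.0, system_reserve_gb=2.0):
    """Rough check whether a GGUF model is likely to fit in RAM.

    model_file_gb  -- size of the .gguf file on disk, in GB
    ram_gb         -- total system RAM, in GB
    All overhead constants are assumed rules of thumb.
    """
    kv_cache_gb = overhead_gb_per_4k_ctx * (ctx_size / 4096)
    needed = model_file_gb + kv_cache_gb + system_reserve_gb
    return needed <= ram_gb

# e.g. a ~7.9 GB 13B Q4_K_M file on a 16 GB machine:
# fits_in_ram(7.9, 16)  -> True
# the same file on an 8 GB machine:
# fits_in_ram(7.9, 8)   -> False
```

This is why the 8GB advice points at 1-2B models at Q8_0 or 7B models at Q4_K_M: their file sizes leave room for the KV cache and the OS.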

❓ “I still don’t believe this GUI can run 13B on just 16GB RAM. Really?” 💬 “Try it yourself, bro. 😎”

❓ “Is the GUI really that lightweight?” ✅ Yep. The .py file is only about 10KB. No web servers, no complex backends, no heavy libraries.

❓ “Will it crash when loading large models?” 🚫 As long as your system is stable and a swap file is active, crashes are extremely rare. 📊 Even with RAM usage above 15GB during the initial model load, the logs show stable performance.

❓ “Is there actual proof?” 📸 Yes. Screenshots and logs are available in the docs/session-logs/ and docs/screenshots/ folders.

❓ “What if I still don’t believe?” 😎 Feel free to test it yourself. 


 
