Posts

Showing posts from July, 2025

Asked Chat GPT - What is next for AI LLMs?

 I asked this:

1. Better Search and Retrieval (RAG)
Holy Grail: Turn every piece of company knowledge into something easily searchable, understandable, and actionable.
LLMs are increasingly being integrated with retrieval-augmented generation (RAG) systems, which combine a company's internal documents (PDFs, wiki pages, logs) with conversational LLMs.
Goal: Create “company copilots” — tools that answer employee/customer questions using internal knowledge.

🧠 2. Enterprise-Grade Agents and Automation
Holy Grail: Use AI agents to autonomously take action on behalf of employees or customers.
Example use cases:
- Automate Tier-1 and Tier-2 support tickets
- Handle complex workflows in finance, HR, legal, IT
Companies want agents that are safe, traceable, and reliable, not just “smart.”

💬 3. Natural Language Interfaces to Software
Holy Grail: Let employees use natural language to interact with business systems (SQL databases, CRMs, analytics, etc.). Think: ...
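The RAG idea in item 1 above boils down to: index internal documents, retrieve the ones most relevant to a question, and hand them to the model as context. Here is a minimal sketch of that flow, assuming a toy bag-of-words similarity in place of real vector embeddings and a prompt that would be sent to whatever LLM you use; the documents and function names are made up for illustration.

```python
from collections import Counter
import math

# Toy "embedding": bag-of-words term counts (real systems use learned vector embeddings).
def embed(text):
    return Counter(text.lower().split())

# Cosine similarity between two term-count vectors.
def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-ins for the internal documents a "company copilot" might index (wiki pages, PDFs, logs).
docs = [
    "Expense reports are submitted through the finance portal by the 5th of each month.",
    "Tier-1 support tickets are triaged in the IT helpdesk queue within 4 hours.",
    "The CRM export job runs nightly and writes CSVs to the analytics bucket.",
]

# Retrieve the k documents most similar to the question.
def retrieve(question, k=2):
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Build the augmented prompt: retrieved context first, then the user's question.
def build_prompt(question):
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are expense reports due?"))
```

The design point is that the model never has to memorize company data: retrieval picks the relevant snippets at question time, and the LLM only has to read and summarize them.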

Went down the AI LLM rabbit hole, asking questions. All responses from Chat GPT

 Here are responses from Chat GPT. Just random questions and responses from chatgpt.com.

What's Already in the Model?
When you ask me a question, I first try to answer it based on what I already "know." I was trained on a huge amount of data up until mid-2024, which includes:
- Wikipedia
- Gaming wikis (like Fextralife and others)
- Walkthroughs
- Forums (like Reddit, GameFAQs)
- Guides and FAQs

So for a lot of questions—especially about older games like Dark Souls 1 (2011)—the answer is already in the model. That means I can often recall details like:
- Boss names
- Soul drops
- Recommended level ranges
- Item and spell descriptions
- Lore connections

If that had been enough, I would’ve just given you the info directly.

🌐 Step 2: Using the Web (Real-Time Search)
In your case, the question was very detailed—you wanted:
- All bosses & mini-bosses who give 4,000+ souls
- Exact soul amounts
- Recommended player level to fight them

Even though I remembered some of that from training, it wasn’t en...
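The two-step flow described above (try the model's built-in knowledge first, fall back to a live web search when the question needs fresher or more exact detail) can be sketched roughly as below. Everything here is a hypothetical placeholder, not ChatGPT's actual internals: the confidence score, the threshold, and the `answer_from_model` / `web_search` functions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # 0.0 to 1.0: how sure the model is from training data alone

def answer_from_model(question: str) -> Draft:
    # Placeholder: pretend the model recalls partial info from training (cutoff mid-2024).
    if "Dark Souls" in question:
        return Draft("Partial boss and soul-drop details recalled from training.", 0.55)
    return Draft("No reliable recall.", 0.10)

def web_search(question: str) -> str:
    # Placeholder for a real-time search step that fills in exact, current figures.
    return "Fresh results: exact soul amounts and recommended levels per boss."

def answer(question: str, threshold: float = 0.8) -> str:
    draft = answer_from_model(question)
    if draft.confidence >= threshold:
        # Step 1: training data alone was enough.
        return draft.text
    # Step 2: augment the partial recall with a live search.
    return f"{draft.text}\n{web_search(question)}"

print(answer("List all Dark Souls 1 bosses that give 4,000+ souls, with recommended levels."))
```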