Think about how you use LLMs. Yesterday, today, tomorrow.
How you use LLMs will change radically in the coming months. In designing your strategy, "shoot ahead of the clay."
Think about how you use LLMs, today.
The first time you used an LLM, you wrote a prompt and got a response. And you accepted that as the output. A monologue.
Then, you got used to the idea that this wasn't a search results page. That whilst, on the face of it, "the answer" had arrived, it hadn't. It wasn't quite perfect. So you went back to Google, asked some more questions, got a specific part rewritten, and rewrote the final answer outside of the LLM. Iteration.
From what we've seen, most people's use of LLMs is at this point right now.
So what do you think we might see develop in 2024?
Here's one idea of evolution. Please debate and add your ideas in the comments.
Self-reflection. Your LLM looks back at your other recent prompts and iterations, and places your most recent request within the context of your workflow. It might ask you, "Does this prompt relate to this one from Monday?" and take your answer into account.
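To make that concrete, here's a minimal sketch of the self-reflection loop. Everything in it is a hypothetical placeholder (the call_llm helper stands in for whatever chat API you actually use); the point is just the pattern of checking a new prompt against your recent history before answering.

```python
from datetime import datetime

# Stand-in for whatever chat API you actually call; this stub is hypothetical.
def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:60]}...]"

class ReflectiveAssistant:
    """Keeps a short history of prompts and checks whether a new one
    relates to an earlier request before answering."""

    def __init__(self, history_size: int = 10):
        self.history: list[tuple[datetime, str]] = []
        self.history_size = history_size

    def ask(self, prompt: str) -> str:
        # Ask the model to place the new prompt in the context of recent ones.
        recent = "\n".join(f"- {p}" for _, p in self.history[-self.history_size:])
        if recent:
            relation = call_llm(
                "Does the new request below relate to any of these earlier "
                f"requests?\nEarlier:\n{recent}\nNew: {prompt}\n"
                "If yes, summarise the connection; if no, say 'unrelated'."
            )
        else:
            relation = "unrelated"

        # Fold any detected connection back into the working prompt.
        context = "" if "unrelated" in relation else f"Context from earlier work: {relation}\n"
        answer = call_llm(context + prompt)

        self.history.append((datetime.now(), prompt))
        return answer

assistant = ReflectiveAssistant()
print(assistant.ask("Draft a launch email for the Q3 report."))
print(assistant.ask("Shorten that email to three sentences."))
```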
Two LLMs are better than one. So you prompt one, which then sets up a second LLM it feels it can debate your prompt with (because they are smarter than you). Who says a Gemini needs another Gemini (apart from, by definition). Maybe your LLM realises that Claude or Cohere can help. Maybe it has beta access to GPT-5. These LLMs take different points of view, and collab. Maybe a better result than you iterating with one LLM, but there's no reason why you can't join the twosome.
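Here's one rough sketch of what that two-LLM debate could look like. The call_llm helper, the model names, and the prompts are all invented for illustration, not a real API: one model drafts, the other critiques, and the first revises.

```python
# Stand-ins for two different model endpoints; the helper and names are hypothetical.
def call_llm(model: str, prompt: str) -> str:
    return f"[{model}'s take on: {prompt[:50]}...]"

def debate(user_prompt: str, model_a: str = "model-a", model_b: str = "model-b",
           rounds: int = 2) -> str:
    """Model A drafts an answer, model B critiques it, A revises, and so on."""
    draft = call_llm(model_a, user_prompt)
    for _ in range(rounds):
        critique = call_llm(
            model_b,
            f"Critique this answer to '{user_prompt}'. Point out errors, gaps, "
            f"and a better structure:\n{draft}"
        )
        draft = call_llm(
            model_a,
            f"Revise your answer to '{user_prompt}' using this critique:\n{critique}"
        )
    return draft

print(debate("Explain the trade-offs of fine-tuning vs. RAG for a support bot."))
```

Nothing stops you from being a third voice in the loop: the same pattern works if the "critique" step is a human reading the draft.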
Of course this learning style sets computers apart from humans. We say we learn from our mistakes. A networked "brain" can learn from the mistakes of its network. Connect 500 million computers, learn 500 million times, in the one brain.
If you didn't already get that, re-read the last para...
Mentioning Cohere leads into the next option. Cohere specialises in using your company's data as the source of truth. So when you ask your generic LLM a specialist question, maybe it relays parts of your prompt to a specialist AI tool with deep domain expertise. Now your LLM is "prompting" other LLMs. Multi-iterative.
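A minimal sketch of that routing idea, again with a hypothetical call_llm stand-in and made-up specialist names: the generic model classifies the request, then relays it to a domain specialist (which in practice might be a model grounded in your company's data).

```python
# Sketch of a router; call_llm and all model names are placeholders, not a real API.
def call_llm(model: str, prompt: str) -> str:
    return f"[{model} answers: {prompt[:50]}...]"

SPECIALISTS = {
    "finance": "finance-specialist",  # e.g. a model grounded in your company's financial data
    "legal": "legal-specialist",
    "general": "generalist",
}

def route(prompt: str) -> str:
    # Ask the generic model to classify the request into one of the known domains.
    domain = call_llm(
        "generalist",
        f"Classify this request as finance, legal, or general (one word only): {prompt}"
    ).strip().lower()
    model = SPECIALISTS.get(domain, SPECIALISTS["general"])
    # Relay the original prompt to the chosen specialist.
    return call_llm(model, prompt)

print(route("Summarise our Q2 revenue recognition policy."))
```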
Let's take that further. Your LLM becomes Your personal agent, "just for You" in that monogamous-but-not-monogamous kind of way. It is primarily responsible for learning You. How You like Your information - depth, accuracy, appendices, visual or text (in a multi-modal world). Tailoring for what You need it for, and how You need it. But Your LLM isn't primarily responsible for accessing the data required. It outsources ALL the tasks to other specialist agents. It gets other agents to assess the responses for accuracy and fit. The LLM workflow becomes a "Conversation of Agents". And then Your LLM translates that into the answer it knows You'll find most useful.
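One possible shape for that "Conversation of Agents", sketched with hypothetical agent names and the same placeholder call_llm helper: the personal agent delegates the work, has a reviewer agent assess it, and only then reshapes the result to Your preferences.

```python
# A "conversation of agents" sketch; the agent names and call_llm helper are hypothetical.
def call_llm(agent: str, prompt: str) -> str:
    return f"[{agent}: {prompt[:50]}...]"

class PersonalAgent:
    def __init__(self, preferences: dict):
        # What the agent has learned about You: depth, format, tone, etc.
        self.preferences = preferences

    def answer(self, request: str) -> str:
        # 1. Outsource the actual work to specialist agents.
        research = call_llm("research-agent", request)
        analysis = call_llm("analysis-agent", f"Analyse these findings:\n{research}")

        # 2. Have another agent assess the response for accuracy and fit.
        review = call_llm("review-agent",
                          f"Check this analysis for errors and gaps:\n{analysis}")

        # 3. Translate the result into the form You find most useful.
        style = ", ".join(f"{k}={v}" for k, v in self.preferences.items())
        return call_llm("personal-agent",
                        f"Rewrite for the user ({style}):\n{analysis}\nReview notes:\n{review}")

agent = PersonalAgent({"depth": "brief", "format": "bullets", "visuals": "yes"})
print(agent.answer("What changed in the EU AI Act this quarter?"))
```

In a real system each of those calls would go to a separate model or tool; the stub only shows the flow of the conversation.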
At this point, You could speak (typing a prompt was 'so last month') and over 10 million specialist AI agents could instantly work on, debate, iterate, and present the most optimised tailored answer back to You, in real time.
Hollywood makes us think of AGI as one computer "brain", but will it be?