Context is everything with LLMs and AI assistants
This might sound obvious to some, and others might be a bit late to the party, but it's worth saying plainly: context changes everything when working with LLMs and AI assistants.
There is a massive difference between using ChatGPT, Gemini, Copilot, or any other LLM in isolation and using them on top of the actual context of the task you are working on.
The friction of the "blank page"
When you open ChatGPT on a blank page, you have to explain everything. What project you’re working on. What the task is about. What decisions were already made. What constraints exist. That takes time, and it’s easy to miss details.
Now compare that with using an AI assistant embedded directly where the work happens.
Moving AI to where the work happens
For example, if you open ChatGPT from inside a Linear or Jira issue, the model already has all the relevant context. It knows the title, the description, the comments, the status, and the discussion so far. If the issue is about developing a new feature, you don’t need to explain what you’re working on. You can jump straight to asking useful questions or generating something concrete.
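Under the hood there is no magic here: the embedded assistant simply receives the issue serialised into the prompt ahead of your question. Here is a minimal sketch of that idea, using a hypothetical Issue structure and the OpenAI Python client purely for illustration; the embedded assistants do this plumbing for you behind the scenes.

```python
# Minimal sketch: an assistant "embedded" in an issue just gets the issue
# serialised into the prompt before your question. The Issue dataclass and
# its field names are illustrative, not any particular tracker's API.
from dataclasses import dataclass, field

from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment


@dataclass
class Issue:
    title: str
    description: str
    status: str
    comments: list[str] = field(default_factory=list)

    def as_context(self) -> str:
        """Flatten the issue into plain text the model can read."""
        formatted_comments = "\n".join(f"- {c}" for c in self.comments)
        return (
            f"Title: {self.title}\n"
            f"Status: {self.status}\n"
            f"Description: {self.description}\n"
            f"Comments:\n{formatted_comments}"
        )


def ask_about_issue(issue: Issue, question: str) -> str:
    """Send the serialised issue plus the user's question to the model."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are assisting with the issue described below."},
            {"role": "user", "content": f"{issue.as_context()}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```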
The same applies to email. If you open an AI assistant with the full email thread already loaded, including the original message and all replies, you can ask it to draft a response or simply ask questions about what’s being discussed.
A real-world example: translating complexity
The other day I received an email from our accountant explaining a legal requirement we needed to comply with. I’m not an accountant and I’m not a finance person, so I barely understood what he was talking about.
Instead of replying with more questions and probably getting another answer I wouldn’t fully understand either, I opened Gemini with the email as context and started asking questions there. Gemini explained, step by step and in plain language, what the requirement was and what it meant for us.
The key point: I didn’t have to copy-paste anything or explain the situation. The context was already there, so the answers were immediately useful.
Turning static information into live context
This is where things get really interesting for companies adopting AI. If your information is properly organised and accessible (in Google Drive, Notion, Jira, Linear, Confluence, whatever), then sooner rather than later that information becomes live context for AI assistants. And once that happens, the value you get from AI increases dramatically.
We are already seeing this with Google Workspace. Because we use Google Drive, all our documents can be used as context by Gemini. We’ve tested it, and it works surprisingly well. You can ask things like:
- Which other projects have used a specific technology?
- What do company policies say about a certain topic?
- Have we written a similar sales proposal in the past?
If the information exists and is well organised, the answers are there.
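The pattern is simpler than it sounds: search the organised documents first, then hand the best matches to the model as context. The sketch below is deliberately naive, with an in-memory dictionary standing in for Drive or Notion and crude keyword overlap instead of a real search index or embeddings; the document names and contents are made up for illustration.

```python
# Naive sketch: organised documents become "live context" by retrieving the
# most relevant ones and prepending them to the question. The DOCUMENTS dict
# stands in for Drive/Notion/Confluence; a real setup would use a proper
# search index or embeddings rather than keyword overlap.
DOCUMENTS = {
    "project-alpha-notes.md": "Project Alpha used PostgreSQL and Terraform for the data layer.",
    "remote-work-policy.md": "Employees may work remotely up to three days per week.",
    "acme-sales-proposal.md": "Proposal for Acme Corp covering a data migration project.",
}


def top_matches(question: str, k: int = 3) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(question: str) -> str:
    """Prepend the best-matching documents so the model answers from them."""
    context = "\n\n---\n\n".join(top_matches(question))
    return f"Documents:\n{context}\n\nQuestion: {question}"


print(build_prompt("Which other projects have used PostgreSQL?"))
```

The resulting prompt would then be sent to the model exactly as in the earlier issue sketch. All the leverage comes from the information being written down and findable in the first place.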
The value for engineering teams
The same logic applies to engineering workflows. If engineers document what they are doing in issues and comments, explaining blockers, decisions, and progress as they go, that information becomes extremely valuable. Not just for humans like project managers or other engineers, but also for AI assistants sitting on top of that data.
Conclusion: documentation is the new AI strategy
The conclusion is simple. Context is everything. But context only exists if the information is written down, organised, and kept up to date. Companies that understand this and invest in documentation and organisation are the ones that will get the most out of AI.