Skills vs Tools. Is it just packaging?
Every few months, a new keyword starts buzzing around AI town. It started with LLMs, then came prompts, evals, agents, MCP, A2A, and most recently the new entrant: SKILLS.
I first heard the term floating around in October, when I was deep in the pit of legal work for my business, without bandwidth for the leisure of curiosity, and I dismissed it as another hot word for tools: an external tool-calling interface to extend model capabilities.
The slowness of the New Year gifted me the privilege of reading, and as I dug into announcements, repos and projects using Skills, I was pleasantly surprised to find they are more than a rebrand of MCP tools.
Yes, tools have been inherently flawed: bloated contexts, models getting confused, a wholly separate scaffolding, from auth to orchestration, just to make them WORK. All that effort, only to safely expose at most 4-5 tools to a model at a time, lest we risk confusing it!
We are yet to see the true protocol of communication for agentic systems. It most likely won't even be English.
But Skills do take a good bite out of the flaws of the simple tooling design introduced by MCP.
When a new member joins any organisation, we have DOCUMENTATION prepped for their onboarding. The more, the merrier; the less, the more hallucinatory. It is the same with artificially intelligent team members.
But we have hands; they do not. Along with a brain, you need execution, and the two need to share brainspace: a brain tuned for performing actions, not two halves working independently of each other.
Thus the introduction of skills.
Alongside the brainpower of a model, you have now extended it with the power of true execution, driven by instruction.
Provide instructions, and the model layer between the user and the model can figure out what to do, and do it with a deterministic execution runtime in a sandboxed environment.
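To make the execution half concrete, here is a minimal sketch of what such a runtime could look like: a skill's bundled script run in a separate process with captured output and a timeout. This is an illustration under my own assumptions, not the actual implementation of any vendor's runtime, and a real one would add filesystem and network sandboxing; the function name is hypothetical.

```python
import subprocess
import sys

def run_skill_script(script_path: str, args: list[str], timeout: int = 30) -> str:
    """Run a skill's bundled script in a separate process.

    Minimal sketch: the only guard rails here are a timeout and
    captured output; a production runtime would sandbox far harder.
    """
    result = subprocess.run(
        [sys.executable, script_path, *args],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(f"skill script failed: {result.stderr.strip()}")
    return result.stdout
```

The point is that the result flows straight back into the model's context, with no human in the loop to copy-paste outputs.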
With tools, models would ask the user to execute things for them and wait on the results. Now, with skills, the model layer ships as a package with execution built in. For the model layer to execute effectively, it must first be able to discover the possibilities of its execution, instead of being bombarded with an arsenal of tools it could use without fully understanding their relations or boundaries.
Thus the skills folder: a tree-like organisation with exploratory prompt formation, coupled with a skill execution runtime.
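The discovery side can be sketched too. Assuming a layout of one folder per skill with a `SKILL.md` whose frontmatter carries a name and description (roughly the shape Anthropic has published), only those two fields enter the prompt up front; the full instructions load on demand once a skill is chosen. The function names are my own, and the frontmatter parsing is deliberately naive.

```python
from pathlib import Path

def index_skills(skills_root: str) -> str:
    """Build the compact skill index a model sees up front.

    Only each skill's name and description enter the prompt; the
    frontmatter parser here is a naive key: value scan, for sketch
    purposes only.
    """
    entries = []
    for skill_md in sorted(Path(skills_root).glob("*/SKILL.md")):
        meta = {}
        for line in skill_md.read_text().splitlines():
            if line.strip() == "---" and meta:
                break  # closing frontmatter fence: stop before the body
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        entries.append(
            f"- {meta.get('name', skill_md.parent.name)}: {meta.get('description', '')}"
        )
    return "\n".join(entries)

def load_skill(skills_root: str, name: str) -> str:
    """Fetch the full instructions only once the model picks a skill."""
    return (Path(skills_root) / name / "SKILL.md").read_text()
```

This is the lazy-loading trick that keeps the context lean: dozens of skills cost a few lines each until one is actually needed.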
Is this just packaging or functionality? I would say the latter.
It lets me be lazier: create onboarding docs/scripts and start asking for things to be done, instead of plumbing out the runtime and thinking thrice before exposing capabilities.
With Codex recently pushing Skills into the hat of "languages" understood by the OpenAI ecosystem, I see a possible new pattern for achieving the meta-goal of non-deterministically reliable systems: known working patterns are baked in, while reasoning and execution sit entirely within the purview of a model that understands the task end to end, instead of a single question/answer.
Bake in memory and self-learning, and you are way closer to a system that mimics how humans attempt work.
What do you think?
What would be a good design to take this to production from local?
What about open-source models? Without such platform benefits, are they missing out?
Are there any differences at all between skills and tools?


