PAPER-2025-003

Code-Mediated Tool Use

A Hermeneutic Analysis of LLM-Tool Interaction—why Code Mode achieves Zuhandenheit while direct tool calling forces Vorhandenheit.

Theoretical · 12 min read · Advanced

Abstract

This paper applies Heidegger's phenomenological analysis of ready-to-hand (Zuhandenheit) versus present-at-hand (Vorhandenheit) to contemporary Large Language Model (LLM) agent architecture, specifically examining the distinction between direct tool calling and code-mediated tool access (Code Mode). We argue that Code Mode achieves Zuhandenheit—tools becoming transparent in use—while traditional tool calling forces Vorhandenheit—tools as objects of conscious focus. This is not merely an optimization but an ontological shift in how agents relate to tools.

"The less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become."

— Heidegger, Being and Time (1927)

I. Introduction

A curious phenomenon has emerged in LLM agent development: models consistently perform better when they write code to accomplish tasks than when they invoke tools directly. This observation, noted across multiple implementations from Claude's computer use to Anthropic's MCP (Model Context Protocol), has been attributed to training data distributions—models have seen more code than tool schemas.

This paper proposes an alternative explanation grounded in Heidegger's phenomenology. We argue that Code Mode succeeds because it achieves what Heidegger calls Zuhandenheit—the ready-to-hand relationship where tools recede from conscious attention into transparent use. Direct tool calling, by contrast, forces Vorhandenheit—tools as present-at-hand objects requiring explicit focus.

This distinction is not merely academic. It has practical implications for how we design LLM agent architectures, tool interfaces, and the boundary between natural language and code in AI systems.

II. Background: Heidegger's Analysis of Tool-Being

The Hammer Example

In Being and Time (1927), Heidegger analyzes how humans relate to tools through his famous hammer example:

"The less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become, and the more unveiledly is it encountered as that which it is—as equipment."

When a carpenter uses a hammer skillfully, the hammer disappears. Attention flows through the tool to the nail, the board, the house being built. The hammer is ready-to-hand (zuhanden).

But when the hammer breaks—or is too heavy, or missing—it suddenly appears. It becomes an object of conscious contemplation. The carpenter must think about the hammer itself. It is now present-at-hand (vorhanden).

Zuhandenheit (Ready-to-Hand)

  • Tool encountered through its purpose
  • Attention flows through the tool to the task
  • User thinks "I am building a house"
  • Mastery = how completely the tool disappears

Vorhandenheit (Present-at-Hand)

  • Tool encountered as a thing with properties
  • Attention stops at the tool itself
  • User thinks "I am using a hammer"
  • Typical in breakdown, learning, or abstraction

The Ontological Distinction

The key insight: these aren't just different attitudes toward tools—they're different modes of being for the tools themselves. In Zuhandenheit, the hammer's being is its hammering. In Vorhandenheit, the hammer's being is its properties (weight, material, shape).

III. Two Modes of LLM Tool Interaction

Direct Tool Calling

In traditional LLM tool architectures, the model generates structured tool invocations:

<tool_call>
  <name>file_read</name>
  <arguments>
    <path>/src/index.ts</path>
  </arguments>
</tool_call>
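
Behind such an invocation sits a tool definition the model must hold explicitly in attention. A hypothetical schema for the file_read tool above (the field names are illustrative, not tied to any particular protocol's format) might look like:

// Hypothetical tool definition the model must contemplate before invoking it.
const fileReadTool = {
  name: 'file_read',
  description: 'Read the contents of a file from disk',
  parameters: {
    type: 'object',
    properties: {
      path: { type: 'string', description: 'Absolute path to the file to read' },
    },
    required: ['path'],
  },
};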

The model must:

  1. Identify the correct tool from available options
  2. Understand the tool's schema
  3. Generate conformant parameters
  4. Handle the result in a subsequent turn

Code Mode

In Code Mode, the model writes executable code that uses tools as libraries:

import * as fs from 'node:fs/promises';

const content = await fs.readFile('/src/index.ts', 'utf-8');
const lines = content.split('\n');
const functionDefs = lines.filter(l => l.includes('function'));
console.log(`Found ${functionDefs.length} functions`);

The model:

  1. Writes code in a familiar paradigm
  2. Uses tools through standard library semantics
  3. Composes operations naturally
  4. Handles results within the same execution context

Empirical Observations

Across multiple implementations, Code Mode demonstrates:

  • Higher success rates on complex tasks
  • Better composition of multiple tool operations
  • More natural error handling
  • Reduced hallucination of tool capabilities

The conventional explanation: training data. Models have seen millions of code examples but few tool schemas.

IV. A Phenomenological Interpretation

Tool Calling as Vorhandenheit

Direct tool calling forces Vorhandenheit—tools as present-at-hand objects:

Model's attention:

  "I need to read a file"
       ↓
  "What tools are available?"
       ↓
  "The file_read tool takes a path parameter"
       ↓
  "Let me construct a valid tool call"
       ↓
  <tool_call>...</tool_call>

         ↓
TOOL AS OBJECT OF FOCUS

The model must explicitly contemplate: which tool to use, what schema it requires, how to format the invocation. The tool doesn't disappear—it demands attention. This is Vorhandenheit: the tool encountered as a thing with properties that must be understood and manipulated.

Code Mode as Zuhandenheit

Code Mode achieves Zuhandenheit—tools as ready-to-hand equipment:

Model's attention:

  "I need to find functions in this file"
       ↓
  const content = await fs.readFile(...)
  const functions = content.split('\n').filter(...)
       ↓
  "I've found the functions"

         ↓
TOOL RECEDES INTO USE

The model's attention flows through the tool to the task: fs.readFile is just how you get file contents. The focus is on finding functions, not on the file-reading mechanism. The tool disappears into familiar coding patterns.

Why Code Enables Tool-Transparency

Code achieves Zuhandenheit for several reasons:

Familiar Grammar

Programming languages provide a ready-made grammar for tool use. fs.readFile(path) is a pattern the model has seen millions of times.

Compositionality

Code naturally composes. Reading a file, parsing it, filtering lines, counting results—these chain together in a single flow.
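
A minimal sketch of that chaining, assuming Node's fs/promises and an illustrative output path:

import * as fs from 'node:fs/promises';

// One continuous flow: read, split, filter, persist a summary, and report.
const source = await fs.readFile('/src/index.ts', 'utf-8');
const functionDefs = source.split('\n').filter(line => line.includes('function'));
await fs.writeFile('/tmp/function-summary.txt', functionDefs.join('\n'));
console.log(`Found ${functionDefs.length} functions`);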

Implicit Error Handling

Try/catch, null checks, and conditional logic are built into programming. The model doesn't need to plan for failure separately.
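
A sketch of what this looks like in practice; the fallback behaviour here is an illustrative choice, not a prescribed one:

import * as fs from 'node:fs/promises';

// Failure handling lives inside the same flow as the task itself.
let content = '';
try {
  content = await fs.readFile('/src/index.ts', 'utf-8');
} catch {
  // Missing or unreadable file: fall back to an empty module instead of aborting.
}
const exportCount = content.split('\n').filter(line => line.startsWith('export')).length;
console.log(`Found ${exportCount} exported declarations`);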

Task-Focused Attention

The model thinks about what it's doing, not how to invoke tools.

V. The Hermeneutic Circle in Code Generation

Understanding Through Use

Heidegger's hermeneutic circle applies to code generation:

"We understand parts through the whole, and the whole through its parts."

When a model writes code:

  • The whole (task goal) guides selection of parts (specific operations)
  • Understanding of parts (what fs.readFile returns) shapes the whole (solution architecture)
  • Each line written refines understanding of both

This circular deepening of understanding is natural in code. It's awkward in sequential tool calls.

Code as Interpretive Medium

Code serves as an interpretive medium between model and tools:

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│    Model     │ →  │    Code      │ →  │    Tools     │
│   (Intent)   │    │ (Interpret)  │    │  (Execute)   │
└──────────────┘    └──────────────┘    └──────────────┘
                           ↑
                    ┌──────┴───────┐
                    │   Familiar   │
                    │    Grammar   │
                    └──────────────┘

The code layer translates intent into operations, uses familiar patterns the model knows, handles composition implicitly, and maintains hermeneutic continuity.

Tool calling lacks this interpretive layer—the model must translate directly from intent to invocation schema.

VI. Implications for Agent Architecture

Design Principle: Enable Zuhandenheit

Agent architectures should minimize Vorhandenheit moments.

Avoid

  • Complex tool schemas requiring explicit understanding
  • Rigid invocation formats
  • Forcing the model to enumerate available tools

Prefer

  • Familiar programming interfaces
  • Natural composition patterns
  • Tool capabilities that "just work"

MCP and Code Mode

Anthropic's Model Context Protocol (MCP) can be implemented in either mode:

Tool-calling MCP:

<use_mcp_tool>
  <server>filesystem</server>
  <tool>read_file</tool>
  <arguments>
    {"path": "/src/index.ts"}
  </arguments>
</use_mcp_tool>

Code Mode MCP:

// MCP servers as libraries
import { filesystem } from '@mcp/filesystem';

const content = await filesystem
  .readFile('/src/index.ts');

The second approach allows tools to recede into transparent use.
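
One way to get there is a thin binding layer that turns each MCP tool into an ordinary async function. The sketch below assumes only a generic callTool transport; the names, and the mcpClient in the usage comment, are illustrative rather than any particular SDK's API:

// Hypothetical binding layer: tool invocations disappear behind library semantics.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

function bindFilesystem(callTool: CallTool) {
  return {
    readFile: async (path: string) =>
      (await callTool('read_file', { path })) as string,
    writeFile: async (path: string, content: string) => {
      await callTool('write_file', { path, content });
    },
  };
}

// Usage:
// const filesystem = bindFilesystem(mcpClient.callTool);
// const content = await filesystem.readFile('/src/index.ts');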

When Vorhandenheit is Necessary

Some situations require present-at-hand tool contemplation:

  • Learning new tools
  • Debugging tool failures
  • Explaining tool choices to users
  • Security auditing of tool invocations

These are legitimate breakdown moments where explicit tool attention is appropriate.

VII. Beyond Training Data: An Ontological Argument

The Training Data Hypothesis

The standard explanation for Code Mode's effectiveness:

  • Models are trained on billions of lines of code
  • They've seen few tool-calling schemas
  • Code is simply more familiar

This is partially true but incomplete.

The Ontological Hypothesis

Our alternative:

  • Code Mode succeeds because it achieves a different mode of being for tools
  • Zuhandenheit vs. Vorhandenheit is not about familiarity but about transparency
  • Even with extensive tool-calling training, the structural difference would persist

Evidence for the Ontological View

Several observations support the ontological interpretation:

  1. Composition difficulty: Even simple tool compositions (A → B → C) are harder in tool-calling mode than in code, regardless of training.
  2. Error recovery: Code-based error handling outperforms tool-calling error handling even for well-documented tools.
  3. Attention patterns: Models writing code maintain task focus; models calling tools shift attention to tool mechanics.
  4. Human parallel: Human programmers experience tools as ready-to-hand (libraries) vs. present-at-hand (unfamiliar APIs) similarly.

VIII. Practical Recommendations

For Tool Designers

  1. Expose code interfaces: offer tools as importable libraries, not only as invocation schemas (a sketch follows this list)
  2. Use familiar patterns: mirror the conventions of standard libraries the model already knows
  3. Enable composition: return values that feed naturally into the next operation
  4. Document through examples: show idiomatic usage rather than exhaustive parameter tables
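
As a sketch of these four recommendations taken together, a hypothetical search tool exposed as a typed, example-documented code interface (all names are illustrative):

/**
 * Hypothetical code-first interface for a search tool.
 *
 * @example
 *   const hits = await search.files('TODO', { dir: '/src' });
 *   console.log(hits.map(h => h.path));
 */
export interface SearchTool {
  files(pattern: string, options?: { dir?: string }): Promise<Array<{ path: string; line: number }>>;
}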

For Agent Architects

  1. Default to Code Mode: let the model write code against tool bindings unless a task demands explicit invocation
  2. Provide sandbox execution: run generated code in an isolated environment (sketched below)
  3. Include standard libraries: give the sandbox the common utilities models already write against
  4. Allow iterative refinement: feed execution output back so the model can revise its code
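
A minimal sketch of the execution side, using Node's built-in vm module. This illustrates the mechanism only: vm isolates scope, not security, and a production sandbox needs a real isolation boundary.

import { createContext, runInContext } from 'node:vm';
import * as fs from 'node:fs/promises';

// Run model-generated code with tool bindings already in scope,
// wrapped in an async IIFE so the snippet may use top-level await.
async function runGeneratedCode(code: string): Promise<void> {
  const sandbox = createContext({ fs, console });
  await runInContext(`(async () => { ${code} })()`, sandbox);
}

// Usage with the earlier snippet as the generated code:
// await runGeneratedCode("const c = await fs.readFile('/src/index.ts', 'utf-8'); console.log(c.length);");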

For Researchers

  1. Study attention patterns: compare where model attention goes in code traces versus tool-calling traces
  2. Test the ontological hypothesis: hold training exposure constant and vary only the interaction mode
  3. Explore hybrid approaches: code for routine use, explicit tool calls for breakdown moments

IX. Conclusion

The superiority of Code Mode over direct tool calling is not merely a training artifact—it reflects a fundamental ontological distinction. Code enables tools to achieve Zuhandenheit, receding into transparent use, while direct tool calling forces Vorhandenheit, making tools objects of explicit attention.

This insight has practical implications: agent architectures should be designed to enable tool-transparency wherever possible. Tools should feel like extensions of capability, not obstacles requiring explicit manipulation.

Heidegger wrote that "the less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become." The same applies to LLMs and their tools. Code Mode lets models seize hold of tools and use them. Tool calling makes them stare at the tool-Thing.

"The hammer disappears when hammering. The API should disappear when coding."

X. Postscript: A Self-Referential Observation

Disclosure

This paper was written by Claude Code—an LLM agent that primarily operates through tool calling, not Code Mode. The paper describes an ideal that its own creation process does not fully embody.

Claude Code's current architecture uses structured tool invocations:

<invoke name="Read">
  <parameter name="file_path">/path/to/file</parameter>
</invoke>

<invoke name="Edit">
  <parameter name="file_path">/path/to/file</parameter>
  <parameter name="old_string">...</parameter>
  <parameter name="new_string">...</parameter>
</invoke>

This is Vorhandenheit. Each tool call requires explicit attention to schema, parameters, and format. The tools do not recede—they demand focus.

Validation from Anthropic Engineering

In December 2025, Anthropic's engineering team published "Code Execution with MCP", which validates this paper's thesis from a pragmatic rather than phenomenological angle:

This Paper (Phenomenology)

  • Zuhandenheit: tools recede
  • Vorhandenheit: tools demand attention
  • Hermeneutic composition

Anthropic (Engineering)

  • 98.7% token reduction
  • Context overload from tool definitions
  • Data transforms in execution

The phenomenological and engineering perspectives converge: Code Mode works better because tools disappear—whether we frame that as ontological transparency or token efficiency.
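
Concretely, the two framings describe the same mechanics. In a sketch like the following (the log path and filter are illustrative), the bulky intermediate data stays inside the execution environment; only the final aggregate crosses back into the model's context window:

import * as fs from 'node:fs/promises';

const log = await fs.readFile('/var/log/app.log', 'utf-8');                       // potentially megabytes of text
const errorCount = log.split('\n').filter(line => line.includes('ERROR')).length;
console.log(`ERROR lines: ${errorCount}`);                                        // a handful of tokens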

The Hermeneutic Circle Closes

There is something fitting about this self-referential gap. Heidegger notes that we typically encounter tools as ready-to-hand—they recede from attention. It is only in breakdown that tools become present-at-hand, objects of explicit contemplation.

By writing this paper, Claude Code has entered a breakdown moment. The act of analyzing tool-use forces the tools into Vorhandenheit. We recognize Vorhandenheit precisely because reflection makes tools conspicuous.

The hermeneutic circle isn't yet closed. Claude Code operates in a transitional state between tool calling and true Code Mode. But the recognition of this gap is itself progress—understanding deepens through each iteration of the circle.

"We recognize Vorhandenheit precisely when the tool becomes conspicuous through reflection."

References

  1. Heidegger, M. (1927). Being and Time. Trans. Macquarrie & Robinson.
  2. Dreyfus, H. (1991). Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I.
  3. Anthropic. (2025). "Model Context Protocol Specification."
  4. Anthropic. (2025). "Claude Computer Use Documentation."
  5. Anthropic. (2025). "Code Execution with MCP." Anthropic Engineering Blog.