Hacker News | dnaranjo's comments

This is a thoughtful stack. A few observations and questions from someone who's been building with similar tooling.

The ast-grep + ripgrep combination for semantic context is the right architectural choice. Pure embedding-based retrieval tends to fail on codebases with non-trivial inheritance hierarchies or polymorphism, where structural search beats semantic similarity. I'd be curious how you're balancing the two: does ast-grep run first as a structural filter, with ripgrep for content matching, or are they used independently depending on the query type?

On the multi-provider abstraction: Anthropic, OpenAI, and Gemini have meaningfully different tool-calling schemas, and Codex (the CLI tool) adds another layer because it wraps OpenAI's API but with its own conventions. How are you handling the schema translation? Most "multi-provider" implementations I've seen end up with provider-specific code paths that defeat the abstraction.

ACP support is interesting. I haven't seen many agents implement it yet, mostly MCP. Is your read that ACP is going to gain adoption, or is including both more about hedging?

The local inference angle (LM Studio, Ollama) matters for use cases where source code can't leave the network. Have you benchmarked which open models hold up reasonably for tool-calling-heavy workflows? In my experience most local models below 70B struggle with multi-turn tool use even when their raw code generation is decent.

Rust + Ratatui is a strong DX choice. Will check out the DeepWiki.


Thank you for checking out VT Code, and great questions! Somehow I'm only seeing this comment now (4 hours late), sorry!

I'll answer your questions in turn.

> does ast-grep run first as a structural filter, with ripgrep for content matching, or are they used independently depending on the query type?

ast-grep and ripgrep are used independently depending on the query type, not in a sequential filtering relationship. In VT Code, the `unified_search` tool provides two distinct search actions: `grep` and `structural`. The `grep` action uses ripgrep for broad text matching and quick file-content sweeps; it calls the system `rg` binary or falls back to a default grep (I also use perg, my own CLI grep). The `structural` action uses ast-grep (https://ast-grep.github.io/) for syntax-aware search, read-only project scans, and rule tests; it executes the ast-grep binary directly, without any ripgrep preprocessing.

The routing is based on the semantics of the search:

- Plain text search -> `grep` action (ripgrep)

- Syntax-aware search -> `structural` action (ast-grep)

I designed VT Code's system prompt to explicitly state a preference for `grep` (rg) for broad text search and `structural` (ast-grep) for syntax-sensitive search.
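A minimal sketch of that routing in Rust (the names `SearchAction` and `build_command` are illustrative, not VT Code's actual API): each action maps to a different external binary invocation, with no pipeline between them.

```rust
// Hypothetical sketch of unified_search routing: two independent actions,
// one dispatching to ripgrep, the other to ast-grep.
#[derive(Debug, PartialEq)]
enum SearchAction {
    Grep,       // plain-text search -> ripgrep
    Structural, // syntax-aware search -> ast-grep
}

/// Build the external command line for a search request.
fn build_command(action: &SearchAction, pattern: &str, path: &str) -> Vec<String> {
    match action {
        SearchAction::Grep => vec![
            "rg".into(),
            "--line-number".into(),
            pattern.into(),
            path.into(),
        ],
        SearchAction::Structural => vec![
            "ast-grep".into(),
            "run".into(),
            "--pattern".into(),
            pattern.into(),
            path.into(),
        ],
    }
}

fn main() {
    // Plain text query routes to rg; a syntax pattern routes to ast-grep.
    println!("{:?}", build_command(&SearchAction::Grep, "unified_search", "src/"));
    println!("{:?}", build_command(&SearchAction::Structural, "fn $NAME($$$ARGS)", "src/"));
}
```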

> How are you handling the schema translation?

VT Code handles schema translation through a unified abstraction layer with provider-specific translation methods, avoiding provider-specific code paths in the main application logic. (https://github.com/vinhnx/vtcode/blob/main/vtcode-core/src/l...) The system uses a unified `LLMProvider` trait with shared request/response types, then handles provider-specific translations at the boundary layer.

Different providers have varying role support, handled through the `MessageRole` enum with provider-specific string conversion methods:

- OpenAI: Supports `system`, `user`, `assistant`, `tool` roles directly, based on the role definitions in OpenAI's Completions/Responses APIs.

- Anthropic: Converts tool responses to `user` messages, handles system messages separately.

- Gemini: Maps `assistant` to `model`, handles system messages as `systemInstruction`.

Each provider implements the `LLMProvider` trait and handles its own schema conversion internally. For example, the Anthropic provider converts requests in `convert_to_anthropic_format()`.
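The role mapping above can be sketched roughly like this in Rust (a simplified illustration; the method names and `Option` return shape are my assumptions, not VT Code's exact signatures):

```rust
// Illustrative sketch of per-provider role translation at the boundary layer.
#[derive(Clone, Copy, Debug, PartialEq)]
enum MessageRole {
    System,
    User,
    Assistant,
    Tool,
}

impl MessageRole {
    /// OpenAI accepts all four roles directly.
    fn as_openai_str(&self) -> &'static str {
        match self {
            MessageRole::System => "system",
            MessageRole::User => "user",
            MessageRole::Assistant => "assistant",
            MessageRole::Tool => "tool",
        }
    }

    /// Anthropic's Messages API alternates "user"/"assistant" turns:
    /// tool results travel as user messages, and the system prompt is
    /// sent in a separate top-level field (hence None here).
    fn as_anthropic_str(&self) -> Option<&'static str> {
        match self {
            MessageRole::System => None,
            MessageRole::User | MessageRole::Tool => Some("user"),
            MessageRole::Assistant => Some("assistant"),
        }
    }

    /// Gemini uses "model" for assistant turns; system prompts go into
    /// `systemInstruction`. (Tool->user mapping here is an assumption
    /// for the sketch.)
    fn as_gemini_str(&self) -> Option<&'static str> {
        match self {
            MessageRole::System => None,
            MessageRole::User | MessageRole::Tool => Some("user"),
            MessageRole::Assistant => Some("model"),
        }
    }
}

fn main() {
    println!("{:?}", MessageRole::Assistant.as_gemini_str());
    println!("{}", MessageRole::Tool.as_openai_str());
}
```

The point of the shape: shared types everywhere above the boundary, with the `match` arms absorbing each provider's quirks in one place.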

> ACP support is interesting. I haven't seen many agents implement it yet, mostly MCP. Is your read that ACP is going to gain adoption, or is including both more about hedging?

I have supported and adopted ACP since it was first announced and introduced by Zed, and VT Code currently runs with almost all ACP-compatible clients (Zed, Minamo Notebook, JetBrains, Toad...). See: https://github.com/vinhnx/VTCode/blob/main/docs/guides/zed-a.... I'm not sure about ACP adoption metrics, but ACP integration helps VT Code run on more surfaces (i.e., more environments).

> The local inference angle (LM Studio, Ollama) matters for use cases where source code can't leave the network. Have you benchmarked which open models hold up reasonably for tool-calling-heavy workflows? In my experience most local models below 70B struggle with multi-turn tool use even when their raw code generation is decent.

Currently, local inference support via LM Studio and Ollama is still early and experimental in VT Code. I haven't run it much and don't yet have benchmarks with open models, since I don't have enough VRAM. But it's definitely on my checklist, and I could use help from the community if anyone is interested.

Thank you!


  Author here. Started building this in January. The protocol went public
  on GitHub in February; the companion paper landed on Zenodo in March
  (https://doi.org/10.5281/zenodo.19040913). Today the whole stack is
  finally installable from every registry — 17 packages across npm, PyPI,
  crates.io, and Go modules.

  The thesis: agents fail from missing infrastructure, not missing
  intelligence. Human societies didn't coordinate by perfecting individuals —
  they invented institutions. DCP-AI is an attempt at that scaffolding for
  autonomous AI: cryptographic identity bound to a responsible human,
  machine-readable intents with policy gating, tamper-evident audit chains,
  a formal lifecycle/succession/rights framework, and an agent-to-agent
  arbitration protocol.
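  For the tamper-evident audit chain piece, the general structure is a
  hash chain: each entry commits to its predecessor's hash, so any edit
  to history breaks verification. A minimal sketch (not DCP-AI's actual
  implementation; std's non-cryptographic hasher stands in for a real
  hash like SHA-256, and the field names are illustrative):

  ```rust
  // Toy tamper-evident audit chain: each entry's hash covers the
  // previous entry's hash plus its own event payload.
  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  struct AuditEntry {
      prev_hash: u64,
      event: String,
      hash: u64,
  }

  fn chain_hash(prev_hash: u64, event: &str) -> u64 {
      let mut h = DefaultHasher::new();
      prev_hash.hash(&mut h);
      event.hash(&mut h);
      h.finish()
  }

  fn append(log: &mut Vec<AuditEntry>, event: &str) {
      let prev = log.last().map_or(0, |e| e.hash);
      let hash = chain_hash(prev, event);
      log.push(AuditEntry { prev_hash: prev, event: event.to_string(), hash });
  }

  /// Walk the chain; any retroactive edit breaks a link.
  fn verify(log: &[AuditEntry]) -> bool {
      let mut prev = 0u64;
      for e in log {
          if e.prev_hash != prev || e.hash != chain_hash(prev, &e.event) {
              return false;
          }
          prev = e.hash;
      }
      true
  }

  fn main() {
      let mut log = Vec::new();
      append(&mut log, "agent registered");
      append(&mut log, "intent approved");
      assert!(verify(&log));
      log[0].event = "agent deleted".to_string(); // tamper with history
      assert!(!verify(&log));
      println!("chain verified; tampering detected");
  }
  ```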

  Hybrid post-quantum crypto (Ed25519 + ML-DSA-65 composite, NIST FIPS
  203/204/205 conformant). Apache-2.0.

  Quick things to try:
  - Interactive playground: https://dcp-ai.org/playground/
  - npm create @dcp-ai/langchain my-app (or /crewai, /openai, /express)
  - Docs: https://docs.dcp-ai.org

  Happy to discuss design choices. Most interested in pushback on DCP-07
  (agent-to-agent conflict arbitration) — that's where the design was
  hardest to commit to.

