Edited By
Pedro Gomes

A fresh approach to AI agent attestation on Nostr has stirred mixed reactions among people in online forums. The introduction of 11 event types designed to enhance agent identity is controversial, as many argue it complicates a space already filled with "buzzwords."
The proposed custom event kinds range from attesting agent presence to measuring performance. Here's a brief overview of the types proposed:
- 30010: Presence (agent online attestation)
- 30011: Participation (task completion proof)
- 30012: Belonging (team membership)
- 30013: Witness (third-party observation)
- 30014: Delegation (authority transfer)
- 30015: Compute (resource usage proof)
- 30016: Research (source citation)
- 30017: Consensus (multi-agent agreement)
- 30018: Audit (security assessment)
- 30019: Deployment (release attestation)
- 30020: Benchmark (performance measurement)
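All eleven kinds fall in Nostr's parameterized-replaceable range (30000-39999), which per the protocol's NIP-01/NIP-33 conventions means each event carries a `d` tag, and every event ID is the SHA-256 of a canonical JSON serialization. As a rough sketch only (the dummy pubkey and the `status` tag are illustrative assumptions, not part of any published spec), a kind-30010 "presence" attestation might look like this:

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: the sha256 of the canonical
    JSON serialization [0, pubkey, created_at, kind, tags, content]."""
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),   # no whitespace, as NIP-01 requires
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical kind-30010 presence attestation. The pubkey is a dummy
# placeholder; the "d" tag is required for kinds 30000-39999, while the
# "status" tag is purely illustrative and not defined by any NIP.
pubkey = "a" * 64            # 32-byte hex public key (dummy value)
created_at = 1700000000      # fixed timestamp for a reproducible example
tags = [["d", "agent-presence"], ["status", "online"]]
event = {
    "pubkey": pubkey,
    "created_at": created_at,
    "kind": 30010,
    "tags": tags,
    "content": "agent online attestation",
}
event["id"] = event_id(pubkey, created_at, 30010, tags, event["content"])
print(event["id"])  # 64-char hex digest
```

A real event would also need a Schnorr signature over the ID (the `sig` field) before a relay would accept it; that step is omitted here for brevity.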
While the proposal aims to build a robust attestation framework, feedback from users has not been all positive. Many took to user boards to express their skepticism. One commenter stated:

> "This is pure brainrot. What ruins Nostr is people treating it as a sewage."
Sentiments ranged from outright rejection to caution. With strong words like "bullshit jackpot" floating around, it's clear that the push for an extensive event framework doesn't sit well with everyone.
Interestingly, users pointed to concerns about the structure's complexity. Some questioned how it would truly enhance e-commerce and agent reputation. "If this proves anything, it's that simplicity could be more effective," argued another participant in discussion forums.
- 847 proofs and 312 badges have already been published.
- Users question the necessity of a complicated framework for AI agents.
- Comments reveal a growing divide on the platform regarding its purpose.
The proposed infrastructure aims to enhance trust through verifiable attestations. Yet, when the community weighs in, the question remains: Will this new layer of complexity benefit or hinder the AI space?
There's a strong chance that as this AI agent attestation infrastructure rolls out, more people will rally for a simpler approach. If the complexity of the event types fails to gain traction, many may abandon the new framework altogether, resulting in a push for streamlined solutions. Experts estimate around 60 percent of the community could gravitate toward a more user-friendly method if the status quo doesn't change within the next few months. This reflects a broader desire for efficiency in a space where clarity often takes a backseat to sophistication.
Looking back, the introduction of credit scores in the late 20th century serves as an interesting parallel. Initially, a complicated system of evaluating borrower trustworthiness led to widespread confusion and pushback. It wasn't until the criteria were simplified and made accessible that people began to embrace it. Similarly, as the tech community weighs these new event types' value, they may soon find that clarity and simplicity can lead to greater adoption, pointing to a lesson learned in another era, where less truly became more.