Jan 27, 2025 · 26 comments · 46 replies
-
The purpose of this post is to sketch a baseline set of required functionality for a public, industry-standard registry of MCP servers. A number of sites have popped up in recent months. For example:
https://mcpserver.cloud/
https://mcp.run/
https://smithery.ai/
https://block.github.io/goose/v1/extensions/
While there is value in different server browsers and client integrations for MCP, there will be additional value in a “single-source-of-truth” registry containing the metadata about MCP servers themselves. Right now each of these sites has its own copy of data, relying on additions by maintainers or contributors. They each present a subset of all available MCP servers globally, and duplicate much of the storage and search logic. Ultimately this presents a fragmented view of what is available to end-users.
In contrast, a single widely adopted registry will be a bedrock resource that higher level tools interfacing with MCP servers can leverage.
Feature Requirements
Global Public API
We need a robust API serving metadata about every server, as well as artifact download URIs, search functionality (by utility and by category), new server publishing, storage, tagging, versioning, etc.
This will allow multiple server browsers / client install flows to emerge, while maintaining and deriving the benefits of a single source of all metadata.
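To make this concrete, here is a minimal sketch of what a single record served by such an API might look like. The endpoint path and every field name here are hypothetical, purely to illustrate the kind of metadata (artifacts, versions, tags, categories) the registry would serve:

GET /v1/servers/example-weather-server   (hypothetical endpoint)
{
  "name": "example-weather-server",
  "description": "MCP server exposing weather lookup tools",
  "latestVersion": "1.2.0",
  "versions": ["1.0.0", "1.1.0", "1.2.0"],
  "artifacts": [
    {
      "registry": "npm",
      "package": "example-weather-server",
      "uri": "https://registry.npmjs.org/example-weather-server"
    }
  ],
  "tags": ["weather", "api"],
  "categories": ["utilities"]
}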
Server Browser
Similar to https://www.npmjs.com/, we should have a standard server browser that implements and exposes a UX for these feature requirements. This is not to say we will discourage other browsers, but that to pair with the global public API there should be at least one officially maintained server browser.
Curation and Segmentation
There should be support in the API and UX for browsing MCP servers of notable utility (popular, most installed, new this week) as well as specific categories for the services they connect to (finance tools, fitness, wellness, etc.).
Security
Security should be treated as a first-class consideration in the registry. We should implement automated code scanning looking for traditional CVEs (Common Vulnerabilities and Exposures) as well as analysis specific to MCP servers (adherence to the authorization spec, scanning for prompt injection, etc.) that will become clearer over time. The global public API should also be protected against abusive publishing and DDoS attacks.
Further Exploration
Unified Runtime
We could explore a unified runtime for MCP servers (a la npx) that would work for MCP servers written in any language. This would simplify the installation and usage flow for client integrators.
-
Thanks for writing this up! I'm increasingly feeling like this is the right path forward too.
Before, I had been reluctant for us to undertake this because it felt like building npm/pypi from scratch—but actually, if we assume the continued use of package managers' own registries, we can make this strictly a metadata service about servers. That dials the complexity and security risk way down. We could also integrate something like socket.dev to provide some basic level of security assessment about the underlying packages.
Does it sound right that we should require packages to be published to some underlying registry first (npm, pypi, Docker), and then they should be explicitly submitted/registered with the metadata registry afterward?
4 replies
-
Excellent
Does it sound right that we should require packages to be published to some underlying registry first (npm, pypi, Docker), and then they should be explicitly submitted/registered with the metadata registry afterward?
Yes, I agree. This could potentially offload quite a bit of responsibility in storage etc as well.
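As a rough illustration of that two-step flow, a submission to the metadata registry might be little more than a pointer at the already-published package; the registry resolves everything else from npm/PyPI/Docker itself. This payload shape is purely hypothetical:

POST /v1/servers   (hypothetical endpoint)
{
  "name": "example-weather-server",
  "sourceRegistry": "npm",
  "packageName": "example-weather-server",
  "version": "1.2.0",
  "repository": "https://github.com/example/example-weather-server"
}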
-
MCP servers utilize many platforms beyond Python and JavaScript/TypeScript scripting. I've observed implementations in Golang, Rust, Java, and C#.
Docker would be ideal, but most users don't employ it. Docker would also be limited for mobile applications. Many tools remain unpublished and exist primarily as scripts. Entry points vary widely—sometimes even using Makefiles.
This creates an environment without standard artifacts like those found in NPM, PyPI, or Docker registries. The ecosystem is highly fragmented. Without artifacts, there are no checksums either.
How can we meaningfully discuss a registry in this context? Scripts have diverse entry points and setup procedures. Even within Python, developers might use venv, uv, or raw Python. Additionally, scripts require different parameters through arguments and environment variables, not to mention tokens and secrets needed for tool setup.
In enterprise environments, validating artifact sources and preventing supply chain issues is critical. How can we accomplish this with scripts? The official list at https://github.com/modelcontextprotocol/servers presents a mixed collection.
Would a directory with metadata be more appropriate?
What value would such a directory provide without standardized installation processes in the metadata? Or should we consider restricting it?
Another consideration is the communication method: STDIO versus SSE. With OAuth now implemented in MCP, SSE could even function as SaaS. So you have:
STDIO (local)
SSE (local)
SSE (remote)
Each local setup introduces its own complexity regarding artifacts, signing, and supply chain security.
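For illustration, here is one way a client configuration might distinguish those three forms. The keys are hypothetical, loosely modeled on the config files existing clients use today:

{
  "mcpServers": {
    "local-stdio": {
      "command": "python",
      "args": ["-m", "example_server"]
    },
    "local-sse": {
      "url": "http://localhost:8080/sse"
    },
    "remote-sse": {
      "url": "https://mcp.example.com/sse"
    }
  }
}

Only the first form pulls code onto the user's machine, which is where the artifact, signing, and supply chain questions above actually bite.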
Scanning tools might be effective for Python and NPM code, but they would be limited for Golang, Rust, C#, or C++, where compiled code is inherently obfuscated.
Many tools are written by AI without rigorous code reviews. STDIO would present minimal attack surface, but SSE introduces greater risk due to its remote nature. Some SDKs aren't officially vetted by the MCP team.
Eventually, MCP might be ported to mobile platforms, further fragmenting SDKs, tools, and packages. While mobile could rely on SSE, one of MCP's key advantages is local access to files and data. Therefore, STDIO deployment on mobile seems likely.
Consider a future scenario where Anthropic offers MCP tools in the Claude Desktop UI. How could they vet tools they didn't create themselves? Already, some tools are broken or buggy, raising security concerns.
Relying on NPM, PyPI, or Docker for trust is problematic. Docker offers no security guarantees, as flawed images can be pushed without issue. NPM and PyPI perform some scanning for code obfuscation, but what about Rust, Go, C#, and C++ binaries?
Remote SSE setup would be relatively straightforward as it would primarily involve APIs. The main challenge concerns local setups.
We need to develop proper packaging and one-click deployment solutions.
Cline has adopted a "marketplace" approach that lists GitHub URLs and uses models to fetch information, leveraging AI for installation. This isn't particularly standardized.
A similar approach exists with the MCP-installer—an MCP that leverages AI for installation.
-
And I forgot to mention above the multiplatform side:
Linux/Windows/Mac
X86/ARM
That's for now, until mobile arrives, if that happens one day. But I already see the Kotlin SDK kicking off.
-
I agree with the opportunity here, namely to create a "single source of truth" of all the MCP servers out there and deduplicate all the effort being made to identify and collate what's been built.
Definitely agree on having a global, public API for providing access to this bedrock data.
Also agree that reimplementing npm/pypi is out of scope -- leave the source code hosting to those solutions.
I also like the idea of a base "Server Browser" implementation, with allowing the market to potentially improve on the "server discovery UX" by implementing their own take on top of the global, public API.
I'm not as sold on "curation and segmentation", "security", or "unified runtime" as something that should definitely be solved within the registry. I think this could potentially be separated out and be concerns tackled by third party "Server Browsers" - of which the native official "Server Browser" is just a simple implementation that does not offer much in the way of opinionated tags or security guarantees. But maybe we could take these on a case by case basis after an initial official registry exists.
Does it sound right that we should require packages to be published to some underlying registry first (npm, pypi, Docker), and then they should be explicitly submitted/registered with the metadata registry afterward?
I think this should be true for open source packages that are meant to run on a local machine. But I think this "centralized registry" solution should also accept closed source, SSE server submissions.
5 replies
-
Fundamentally, I think the big initial value-add here of centralizing the single source of truth is that:
The curation/security/runtime ideas would be nice to have but not necessary to achieve these two benefits^
-
Agree with most of these points! However, not sure about this:
I think this "centralized registry" solution should also accept closed source, SSE server submissions.
I see the value in being able to offer a directory for things like this, but OTOH it introduces a lot of risk (security, privacy, and others) to permit servers that are hosted but whose implementation isn't available to look at somewhere.
-
but OTOH it introduces a lot of risk (security, privacy, and others) to permit servers that are hosted but whose implementation isn't available to look at somewhere.
I agree that the risk is something that needs to be addressed. My pitch would be to build around mitigating the risk rather than avoiding it, because I think reducing the friction to set up/maintain servers - supporting using servers over HTTP - is important to broadening MCP adoption.
Some sources of inspiration:
Some specific paths forward, which I think could all be explored in parallel with still collecting SSE URL's into the registry:
-
Great back and forth here!
Regarding closed source SSE addressable servers being part of the registry, I lean more towards not representing a server where there isn't a publicly available implementation in the public registry as @jspahrsummers said. But I could be convinced if we do feel it's critical to include them!
Some ideas for guardrails that wouldn't be a full requirement of a public/scannable implementation:
rather than certify the code, certify the organization that writes or hosts the code
This also sounds good as it would allow companies to expose servers as public APIs for their own services without an open implementation and still give the user some degree of confidence it's from the verified source.
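One hypothetical way to express that organization-level certification in registry metadata, independent of whether the code is open, might be a publisher block verified via something like a DNS proof. None of these fields exist in any current spec:

{
  "name": "acme-crm-server",
  "sourceAvailable": false,
  "publisher": {
    "organization": "Acme Corp",
    "domain": "acme.example.com",
    "verification": "dns-txt",
    "verified": true
  }
}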
To me it's just about trying to bake in as many helpful guarantees to users as we can, and building with good security principles in mind from day one.
I am interested in further discussion 👍
-
I mean, what is the standard for discovering a website / server today? Do closed-source SSE servers need to be in a registry, or is that something that can just be handled by traditional information retrieval and web search? It feels like a bit of reinventing the wheel; server owners can integrate with existing web discovery mechanisms.
-
My recommendation would be a standard JSON/YAML specification and/or configuration file that is implementation-agnostic (think Terraform). That standardized spec/config would be converted by MCP server implementations into a functional service. Industry-originated specs/configs could then be published for deployment on any MCP server service package.
It's really inhibitive for each server developer to maintain multiple versions that are not readily extensible, and for developers / service deployers not to have this sort of modular approach.
This approach calls for the following:
In addition to the value-added circumstances that everyone mentions above, this approach could streamline adoption and get us out of a proprietary / snowflake solution trajectory (which is where it seems this could go). A sketch of what such a spec might look like follows below.
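As a sketch, under the assumption of a JSON flavor, such an implementation-agnostic spec might declare transports, runtime, and setup declaratively. Every key here is invented for illustration:

{
  "mcpSpecVersion": "0.1",
  "server": {
    "name": "example-crm-server",
    "transports": ["stdio", "sse"],
    "runtime": "python>=3.10",
    "install": ["pip install ."],
    "run": ["python -m example_crm_server"],
    "env": {
      "CRM_API_TOKEN": { "required": true, "secret": true }
    }
  }
}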
1 reply
-
Correct me if I'm misunderstanding your proposal, but it sounds like this:
That standardized spec/config is converted by MCP server implementations into a functional service.
Is very different than @alexhancock 's suggestion of a centralized metadata service. If every server is responsible for serving its own configuration file, then we'd still have the problem of all these third party server-discovery scrapers duplicating effort and fragmenting the ecosystem.
Perhaps your commentary here is a better fit for the .well-known/mcp discussion. Though I do think that discussion and this one are intertwined; if we have this centralized metadata service, I'm not sure we still need the .well-known file/directory. Or at least we don't need it focused on the same "discovery" purpose that other discussion started with.
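For readers who haven't followed that thread: the idea is that a site self-describes its MCP endpoint at a well-known URL, so discovery can happen without any central service. A minimal, hypothetical sketch of such a file:

GET https://example.com/.well-known/mcp   (hypothetical)
{
  "name": "Example Site MCP",
  "description": "Tools for interacting with example.com",
  "endpoint": "https://example.com/mcp/sse",
  "transport": "sse",
  "authorization": "oauth2"
}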
-
+1 to have an official place where people can submit their servers, clients etc.
This should be hosted under modelcontextprotocol.io or one of its subdomains.
It's frankly quite crazy to think that I have to submit my server to 4-5 different websites manually for them to be picked up by Google/SEO so that people can discover them. Additionally, I noticed that when I push an update to my own MCP server, not all websites "pull" the update from the GitHub repo (where I host it / open-source it).
0 replies
-
All good responses here. Thanks for the thoughtful commentary and ideas @tadasant!
I'm pretty aligned and look forward to figuring out a more specific plan soon. I'll be on vacation a few days beginning tomorrow, but back full time from next Wednesday.
1 reply
-
bro, look at my design
Architecture (Text Diagram):
+-------------------+
| Service Client |
| (SSE Subscriber) |
| - Pulls service |
| list |
| - Receives real- |
| time updates |
+-------+----------+
| SSE Event Stream (text/event-stream)
v
+-------------------+
| MCP Service Hub |
| (Single-Node) |
| Core Features: |
| 1. Service Registry |
| 2. Health Checks |
| 3. SSE Push Engine |
+-------+----------+
| REST APIs (Register/Unregister)
v
+-------------------+
| MCP Server Nodes |
| (Providers) |
| - Register on |
| startup |
| - Send heartbeats|
+-------------------+
https://github.com/orgs/modelcontextprotocol/discussions/159#discussioncomment-12624829
-
Maintainer of Smithery AI here!
I would definitely want to be kept up to date about MCP's official plans as Smithery currently offers both hosting and registry of MCPs. Open to discussing any potential integrations/collaborations that might deduplicate work.
Our registry API: https://smithery.ai/docs/registry
0 replies
-
I believe the issue of registry fragmentation, having different registries with various levels of support and awareness, is real. I'd also suggest that this might be duplicative of the discoverability topic. Though to offer a differing opinion to consider here... Registries are expensive to build and maintain with very little hope for revenue generation (I know this from experience and from discussions with NPM founders), and this is what makes them hard to continue maintaining at scale. The proliferation of MCP servers is not remotely at the point of "scaled" where this will be a challenge. So perhaps the "registry fragmentation" problem is a problem that will naturally go away unless someone can figure out a path to monetize the ability to support these at scale. To put it directly, I fully expect these registries to fall away as expenses build (though I do wish for everyone's efforts and ventures to be successful).
So my suggestion would be not to create a cloud service for this discoverability but to lean into systems like GitHub, NPM, etc. to provide a way to openly capture, list, and publish updates to this list, and take that as far as possible. Those that need it are technical teams and can easily sync from there. That should provide a long-term viable way to support these systems even as the lists start to hit a level of actual scale.
I also believe there are differing needs for MCP servers and I do not believe that, architecturally, MCP clients should have the breadth of all MCP servers available to use at the ready on disk. While a registry that can be stable and cost-effective for clients to sync to is a good initial solution, we will need to consider how to determine the context directories that always need to be available and what should be sourced on demand. For example, if I create an MCP server for a local service in town, I don't expect anyone to need that constant knowledge of its existence. I expect that leaning into the web as the on-demand system for that would be ideal.
What happens when there are 100k MCP servers, 10 million, 100 million, more? Then multiply that by the number of times the content needs to sync. Not suggesting we build for that scale now, but preparing for the ideal outcome, that this pattern takes hold, means expecting these numbers.
1 reply
-
I think a good start would be a defined path on the local machine that can be used to store basic configuration info: the command to execute and/or the server address. Just have it so a JSON or YAML file could be dropped in, and the various utilities could scan it when they start up and find MCPs they could use. So, for example, Claude Desktop could see a new MCP has been installed, and Cursor could also see that something has been installed.
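A minimal sketch of what such a dropped-in file might contain, assuming a shared directory like ~/.mcp/servers/ and hypothetical keys:

~/.mcp/servers/example-notes.json   (hypothetical path)
{
  "name": "example-notes",
  "description": "Local notes MCP server",
  "command": "npx",
  "args": ["-y", "example-notes-mcp"]
}

Any client could scan that directory at startup and offer the servers it finds.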
-
Have we explored existing standard formats like OCI? This could, out of the box, allow servers to be adopted by existing container registries (e.g., Quay, Docker Hub, ECR, and GitHub Container Registry). I.e., one standard vs. one registry.
1 reply
-
OCI is overkill, and manifests should have install steps to standardise installation.
-
There are two categories of use cases here, and I think much of the discussion is missing that critical distinction (@sean-roberts points out the issue). The problem is likely because we're all developers, in many cases using MCP to build tools for ourselves where we know exactly which MCP servers we want to install (if only we had a registry). But we're not the only target audience for MCP; in fact, I'd argue we are the least important (at least in terms of audience size).
MCP, by itself, solves the problem of how to enrich a user agent with structured interaction capabilities, with a very precise level of control as opposed to the more general agents.json-esque approach. There's real value in exposing structured and secured interactions, regardless of enterprise or consumer use case. But unlike MCP servers for enterprise scenarios like software development, the interactions in most consumer use cases will depend almost entirely on the user's browsing context (how many apps do most people actually install on their devices?), and it's impossible to maintain a registry of all of those interactions. For example, imagine a world in which every WordPress site offers an MCP server for commenting on blog posts (perhaps as a site-specific link to a multitenant MCP server hosted by Automattic's Jetpack service, for example). Yahoo! tried to build a portal for the entire web in the '90s, but Google had the right idea: present search results using metadata (meta tags, in this case) that are published by the websites themselves and that any search engine could use.
For enterprise scenarios, by all means, define a registry protocol and let different registries try to establish themselves. I'm sure GitHub Enterprise, Azure DevOps, and the rest will be interested in building a package feed for MCP servers according to whatever standard you choose. WASM as a common target makes sense to me, for what it's worth. Also, insisting on open-source isn't likely to be as helpful as people think—the xz Utils backdoor for example was fully open-source.
However, for the everyday AI and consumer scenarios, Anthropic and the MCP community need to embrace the Google philosophy of relying on the open web. I don't know enough to state which approach is best, and there are several competing would-be standards vying for attention (agents.json being one recurring theme), but discovery of arbitrary servers and APIs (which may not have an MCP server defined) via structured AI-oriented metadata—and then progressively enhancing those connections with MCP when it's available—is a far more impactful strategy in the consumer space.
EDIT: I should clarify that I'm both saying more focus needs to be placed on the discovery discussion and also that restricting MCP servers to being open-source, or in some language or other, is self-defeating when dealing with a world where every website potentially comes with one or several MCP servers. Registries are helpful for enterprise/developer local uses, but not for the broader world of consumer AI.
2 replies
-
Heavily +1 to this, and it's far more descriptive than my comment here. We don't need to reinvent information retrieval and search for remote endpoints with search engines / PageRank / personalization etc.; it's a thirty-year-old solved problem. For installing and running software yourself, of course a version-controlled registry makes sense.
-
For technical or dev use, an npmjs-like registry plus a CLI should be the key. A single point of truth with good version control is beneficial.
However, common desktop client users might need different forms of representation: ranking, hot lists, even personalized bookmarks and so on. That is a different story.
0 replies
-
Proposal: Service Registration & Dynamic Push Mechanism for Model Context Protocol (MCP)
What is it? (Concept Overview)
This proposal introduces a Service Registration Hub to MCP, enabling dynamic discovery and management of MCP servers via Server-Sent Events (SSE).
Architecture (Text Diagram):
+-------------------+
| Service Client |
| (SSE Subscriber) |
| - Pulls service |
| list |
| - Receives real- |
| time updates |
+-------+----------+
| SSE Event Stream (text/event-stream)
v
+-------------------+
| MCP Service Hub |
| (Single-Node) |
| Core Features: |
| 1. Service Registry |
| 2. Health Checks |
| 3. SSE Push Engine |
+-------+----------+
| REST APIs (Register/Unregister)
v
+-------------------+
| MCP Server Nodes |
| (Providers) |
| - Register on |
| startup |
| - Send heartbeats|
+-------------------+
Why is it like that? (Design Rationale)
1. Centralized Service Discovery
The hub pushes registry events (INSTANCE_ADDED, INSTANCE_DOWN), eliminating the need for polling.
Server Startup → POST /register (IP:Port) → Hub stores metadata → SSE broadcast to clients
2. SSE Subscription for Clients
// Client subscribes to service updates
const eventSource = new EventSource('/sse?service=mcp');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updateServiceList(data); // Update local server list
};
3. Heartbeat & Health Checks
Servers send PUT /heartbeat every 5 seconds. Instances are marked unhealthy after 15s with no heartbeat. Clients can fetch the current list via GET /services/mcp.
GitHub Profile: aliyun1024qjc
7 replies
-
I don't know if I'm missing something, but it sounds extremely overengineered to me. As a user, I don't want my personal AI assistant to automatically pick up every MCP that Joe in their basement pushed somewhere on the internet. I want to interactively pick and choose reputable ones from a list that I trust, and that list only needs to be queried once when I open up a 'marketplace' dialog.
I also don't think the health check/heartbeat protocol is useful. Most web services of any significant adoption/scale are not just one server that is either 'up' or 'down'. They're hundreds/thousands/millions of servers constantly going in and out of rotation across dozens of globally distributed datacenters. A service being 'down' for one user doesn't mean it's 'down' for another user. And even then, it's not like as a user I'd want an MCP that I 'installed' to uninstall itself as soon as it had a 5 minute outage.
-
I want to emphasize my earlier point: we need to consider the requirements for private registration centers in enterprise environments, rather than centralizing all MCP servers in a single public registry.
From our practical experience, enterprise users have strict requirements for security and control. In actual deployments, they typically don't allow AI assistants to connect freely to services on the internet, but rather prefer a strictly vetted and controlled list of trusted services.
I agree with @kevklam's point that an over-engineered central registration system indeed has issues. Users should be able to choose trusted services independently, rather than passively accepting all available options. This is especially important for enterprise users.
We should consider establishing a standardized "feed protocol" specification that allows:
Regarding health checks, I similarly believe that simple up/down detection has limited value. The complexity and regional differences of modern service architectures make single status indicators insufficient for accurately reflecting service quality.
In conclusion, we should avoid building an overly centralized global registry, and instead focus on creating a flexible registration protocol standard that supports distributed management, allowing enterprises and organizations to maintain and manage their trusted MCP service lists according to their own requirements.
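To sketch what a single entry in such a feed protocol might look like, with illustrative field names only, an enterprise-maintained feed could carry its own vetting status per server:

{
  "feedVersion": "0.1",
  "publisher": "example-corp-it",
  "servers": [
    {
      "name": "internal-hr-server",
      "endpoint": "https://mcp.internal.example.com/hr/sse",
      "vetted": true,
      "allowedGroups": ["hr", "managers"]
    }
  ]
}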
-
I liked this idea. I was sketching something to build for my internal team and randomly came across this thread. In my opinion, there are a few ways to approach this, especially when thinking about companies with multiple areas or teams. Just some random ideas I was exploring that feel a bit related to your "enterprise" topic.
1 - MCP Builder
Proposal
Each team creates its own MCPs and Tools, being responsible for handling incoming/outgoing requests. That includes exposing endpoints like /sse or /message and translating MCP Client calls to standard REST APIs.
This "builder" layer would simplify the process for teams to expose their existing APIs, without changing MCP internals. It would be a UI where you can build those translations from MCP → REST.
Positive Points
After writing this, maybe a Terraform-like way to deal with this is valid, like declaratively defining MCP Servers. The downside is that this service would also be responsible for managing all the stateful MCP Servers; hopefully the new Streamable HTTP (pending proposal) will make that easier.
2 - MCP Discovery
Proposal
Teams register their MCPs with metadata like scope, authentication type, repository, etc.
A centralized discovery service exposes a proxy that routes requests to the appropriate MCP. This proxy also helps abstract internal access (e.g., a private VPN).
Example endpoints:
GET /discovery?area=SUPPORT → Returns only MCPs from the SUPPORT area
GET/POST /discovery/{mcpId}/proxy/** → Exposes all routes like /sse, /message, /mcp, etc. This is the endpoint that the MCP Client would use.
-
I think excessive design of the MCP registry will lead to complex designs. What we need to do now is improve the MCP ecosystem instead of over-designing. Your proposal is very good; I like it, it's the same idea as mine. What I don't understand is why they don't understand my plan.
-
"Positive Points
Centralized auth
Centralized control and visibility of all MCPs."
wat? Perhaps a typo... these are the complete opposite of positive.
We will propose exactly the opposite, while keeping governance, using blockchain and decentralized web2 protocols.
The proposal is still being written, but we leverage Emercoin NVS, specifically EmerDNS/SSL and SSH.
-
We are discussing whether this is a server directory similar to Docker Hub or a registration center that supports hosts in dynamically discovering servers. Personally, I believe an authoritative server directory is very useful, but a unified registration center is not very reasonable. However, the dynamic registration and discovery of servers can be part of the MCP protocol specification, used to guide the construction of privately deployed MCP clusters.
1 reply
-
This can be decentralized DNS records and decentralized names (registry.mcp and subdomains).
Everybody can create subdomains, but only those expressly defined as an SD= record on the parent DNS subsystem are recognized (the parent ignores the rest), so there's governance over which SDs to "enable". The rest stay unofficial and can be flagged if malicious.
-
A couple points I'd like to make:
A grand central registry vs. a registry protocol/spec
Coming from the enterprise world, I believe you're going to want to support multiple "feeds" (so that an IT admin can provide a list of known and vetted MCPs), but (a) standardize what a 'feed' looks like so that apps can build marketplaces/MCP pickers on top of feeds, and (b) provide an 'official' feed that the majority of MCP makers will publish their work to and that the majority of non-enterprise users will use by default. I believe @LarsKemmann was also suggesting something along these lines.
Trying to 'own' the only registry in the world would both create a very attackable single point of failure and alienate most of the enterprise industry.
Local vs Remote MCP servers
I feel that the majority of this technical discussion is only complicated because MCP servers are currently all local, and therefore there needs to be a story for somehow getting the server onto the user's local machine and running on whatever platform they use.
If you shelve that discussion for the time being, all you really need for a registry of remote MCP servers is something like a feed of endpoint URLs and descriptions. And I suspect remote servers are going to be the preferred tech for web services, for a variety of reasons. I'm not sure how many legitimate use cases there are where a local MCP server is the better choice - basically things like local filesystem access, local access to settings, screen contents and hardware (cursor, sound, network etc). Whereas there are hundreds/thousands of websites/web services that will potentially want to integrate with LLMs and be better served by remote MCP since those will be better able to integrate with both web and desktop based LLM apps.
This seems like it would be much simpler/cheaper to design/build/maintain, and could be stood up pretty quickly (once remote MCP's become a reality). Local MCP servers could even be split out into their own separate registry/package manager that you build at some later date - I don't see any real benefit to combining the two.
2 replies
-
"Trying to 'own' the only registry in the world would both create a very attackable single point of failure and alienate most of the enterprise industry."
+1. There can be no "owning"; provably fair and decentralized governance is a must in this post-truth, post-trust world.
We propose to use EmerNVS and its web2 subsystem. Its security is backed by BTC/BCH/BSV via AuxPoW; nothing can rewrite that. Many have tried in the last 13 years of its existence...
-
Hi Everyone,
Agreed on the remote MCP servers. Is there a way to be sure an MCP server is secure, by the way? Some are asking for API keys, while we don’t really know who has published the MCP server (even in the case of common Google APIs, for instance).
Great to hear from the community on this as well
Best,
Guillaume
2 replies
-
I suspect the issue will go away with a combination of:
You enter the URL of the service itself (e.g. google drive website, so you know it can be trusted) or choose it from a marketplace from within your LLM app (claude, chatgpt etc); it pops a login window that asks you for consent for the LLM app (as opposed to consent for the local MCP server published by someone random) to access your data in that service; click OK and it's done.
Before these things become a reality, your only real option is to inspect the code of the MCP in github.
-
Yes, a public key on a public blockchain and a corresponding client certificate.
IMO we as a species haven't come up with anything better yet; standards compliance is paramount.
-
How about taking a decentralized approach similar to the Fediverse? Since MCP is a protocol, centralizing its index would just create a single point of failure, defeating many of its advantages. It seems unwise to put the future of MCP in the hands of just one administrator.
1 reply
-
+1. Thanks, I was feeling lonely.
-
What about just using the GitHub URL as the source of MCPs, like Golang does, and making a registry of MCPs like pkg.go.dev?
# golang
go get -v github.com/modelcontextprotocol/example-app
# mcp style?
mcp get github.com/modelcontextprotocol/example-app
I was actually building something similar to MCP (hyperpocket) with my team, and we're moving on to the idea of extending MCP since you guys seem to have become the standard now.
But sharing the idea and the pain points we had: one of them was a debate over having a global registry or just going with GitHub URLs to pull everything you can launch, like the Golang package system.
We've chosen to just use GitHub URLs and make them searchable on GitHub somehow. In the future, we thought about separating out just the registry part, like https://pkg.go.dev/, with the sources remaining on GitHub.
I'm sharing our example code:
from hyperpocket_langgraph import PocketLanggraph
from langgraph import AgentGraph

# Load tools with Hyperpocket
pocket = PocketLanggraph(tools=[
    "https://github.com/vessl-ai/hyperpocket/tree/main/tools/slack/get-messages",
    "https://github.com/vessl-ai/hyperpocket/tree/main/tools/github/list-pull-requests",
])
tool_nodes = pocket.get_tool_node(should_interrupt=True)

# Define the LangGraph workflow
graph = AgentGraph()
graph.add_node("schedule_message", tool_nodes)
graph.connect("start", "schedule_message")
graph.connect("schedule_message", "end")

# Execute the workflow
graph.execute({"channel": "general", "message": "Team meeting at 3 PM"})
P.S. About the need for not just a tool protocol interface but a unified "execution interface":
To load tools and execute them, you need a protocol for how to initiate and launch a tool. Our approach was to have a containerized runtime for the tool and communicate over stdio, similar to MCP servers, but not in-process.
If you go with the pkg.go.dev style, you might want to consider this interface too.
To achieve that, we've made a pocket.json (example: get-slack-messages) so that you can define the tool schema, install, and run scripts.
{
  "tool": {
    "name": "slack_get_messages",
    "description": "get slack messages",
    "inputSchema": {
      "properties": {
        "channel": { "title": "Channel", "type": "string" },
        "limit": { "title": "Limit", "type": "integer" }
      },
      "required": ["channel", "limit"],
      "title": "SlackGetMessageRequest",
      "type": "object"
    }
  },
  "auth": {
    "auth_provider": "slack",
    "scopes": ["channels:history"]
  },
  "language": "python",
  "entrypoint": {
    "build": "pip install .",
    "run": "python -m get_message"
  }
}
0 replies
-
What I would request on the metadata part of the registry API is something similar to the config JSON in Smithery (https://smithery.ai/docs/config), where non-technical users don't have to understand args and env vars but can just put their secrets into a UI form with textboxes. That will make it easier for a broader audience to adopt.
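In that spirit, and purely as a hypothetical sketch (not Smithery's actual schema), the registry metadata could declare each required secret declaratively so that any client can render a simple form:

{
  "configSchema": {
    "properties": {
      "apiKey": {
        "type": "string",
        "title": "API Key",
        "description": "Your service API key",
        "secret": true
      }
    },
    "required": ["apiKey"]
  }
}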
0 replies
-
In this regard, I am currently conducting a security assessment, if you have projects to submit, in order to contribute to the standardization of good security practices for MCP. It's fascinating to see how we jump to new protocols without going through the standardization process, but business reasons always take precedence. Have a nice Sunday, all.
1 reply
-
" It's fascinating to see how we jump to new
protocols without going through the standardization process.but business
reasons always take precedence."
+1 and that's gonna cause the downfall of most who does try to invent a parachute after jumping off the plane.
I indeed have a project to submit; the proposal isn't quite ready, but I have to acknowledge this post now.
It's the Emercoin blockchain (EmerNVS) with Privateness.network tooling and a World Object Mapper structure based on our github/ness-network/worm.
The genesis block was in Dec 2013 and the blockchain has been running since, backed by AuxPoW with BTC/BCH/BSV; "reusing" the PoW makes it more efficient. It should power countless chains, but as you said, business reasons take precedence.
We propose a neutral ground, welcoming all and governed by consensus.
-
I propose a design for the MCP Server Registry (MSR).
Principles:
Arch:
┌───────────────────┐ ┌───────────────────┐
│ │ │ │
│ MCP Yellow Pages │ │ MCP Root Registry │
│ │ │ │
└───────────▲───────┘ └───────────▲───────┘
│ │
│ │Resolve && Register
│ │
│ ┌───┼──────────────────┐
│ │ │
│ │ MCP Private Registry ◄────────────────┐
│ │ │ │Register MCP Server Name
│ └─────────▲────────────┘ │ Description
│ │ │ Server Address
│ │ │ and etc.
│ │Name Resolve │
│ │ ┌──────┼───────┐
│ │ ┌───────────────► MCP Server 1 │
│ │ │ └──────────────┘
│Subscribe(Optional) ┌─────┼──────┤ ┌──────────────┐
└──────────────────────┼ MCP Client ├───────────────► MCP Server 2 │
└────────────┤ └──────────────┘
│ ┌──────────────┐
└───────────────► MCP Server 3 │
Auth └──────────────┘
List Tools
Call
0 replies
-
2 replies
-
Control daemon overhead, needs root (OK, Podman, thus far), but some people like to get their hands dirty. I don't think any one solution should take it all in everything.
edit: but I think it's good news for the standard; this thing just became orders of magnitude more accessible.
Are proposals for the registry still open?
I can sum ours up in a few words: decentralized, uncensorable yet governable, via dPKI on the Emercoin blockchain (meaning decentralized web2 protocols, no web3 panopticon, Turing-incomplete, no surprises).
Metadata on-chain, data on decentralized storage (IPFS, magnet links), transparent, as it is all handled by EmerDNS, which is RFC 1035-compliant.
We kicked Circle out via UD when they tried to take the .coin TLD; some may know the saga.
.coin domains are working perfectly btw, cost 0
-
Docker has a lot of issues, like file system access. At least run each tool as a separate Docker container.
I had that setup for months and was a strong believer that Docker was the right way to ship, then moved back fully to dev containers, with another, more portable setup.
Docker may be good for SSE or similar, but for local access, sorry, no go.
-
Züs-MCP Registry Proposal
It's simple. Any MCP service provider can get a permissionless registry entry and a verifiable setup by using Züs as their storage backend. Since data is at the heart of MCP services, Züs not only stores data securely but also acts as a built-in registry for service providers, with attributes like provenance and bulletproof security; check out Blimp.software. Züs is an open network, allowing you to self-host storage servers and deliver essentially an on-prem solution.
Here’s the trick:
WalletID = the organization.
AllocationID = the specific service (like a dataset).
Since a company can have multiple wallets, and each wallet can handle multiple allocations, it's a scalable, decentralized, and permissionless system. No need for extra infrastructure or jumping through hoops; you just store your MCP data on Züs and you're part of the system.
With Blimp.software (to quickly set up wallets and allocations and view data), Zs3server (to work with S3-compatible storage), and Atlus.cloud (to explore the blockchain), everything’s secure, verifiable, and self-sovereign—no central authority needed. Plus, any third-party registry can curate a list of MCP servers straight from the blockchain, filtering by industry, region, or whatever they want.
The Building Blocks
1. Blimp.software
2. MCP Server
3. Zs3server
4. Atlus.cloud
Step 1: Set Up Your Wallet & Allocation
Step 2: Store Your MCP Data
Step 3: Get Verified
Self-sovereign Identity:
Your WalletID + AllocationID is yours—no need for a central registry.
Others (like industry registries) can curate server lists from the blockchain.
Provenance You Can Trust:
Every change is recorded on-chain, so there’s a clear audit trail as your data evolves.
Rock-Solid Security:
Zero-trust, split-key, distributed storage—your data and key are locked down tight and unbreakable.
ACID Integrity Built In:
2-phase commits and chained writemarkers mean your data operations are consistent and reliable.
Blazing Fast Performance:
Parallel threads make PUT/GET operations faster than AWS S3.
Refer to the attached diagram for the visual representation of the architecture.
10 replies
-
MCP is the future and I am amazed that Züs could support it.
+1, yes, support it, not capture it. First, this is 0chain after a rebrand and a hard fork of their backend private blockchain, which is not FOSS.
If you're not familiar with token metrics, these two are major red flags when they show up: minting enabled is an automatic show-stopper, since the supply can change unilaterally at any time, and the top 10 owners holding 89% of the entire supply just adds insult to injury.
While I find the system very good technologically, distributed is not decentralized, and a private blockchain is an oxymoron imo.
https://gopluslabs.io/token-security/1/0xb9ef770b6a5e12e45983c5d80545258aa38f3b78
@Jeff-Bouchard there is no vendor lock-in; it's decentralized, and you can self-host. In fact, an MSP is already using this system with their customers. IPFS, Arweave, and torrents are old, slow protocols and not enterprise-ready. Please use blimp.software first and then let me know.
-
edit @guruhubb
I'm just saying that two basic, common metrics everybody should look at are screaming RUN when they see your tokenomics.
All open and public registries have sets of rules and guidelines they must (or should) obey/conform to in order to justify trust in them. I'm not making up the metrics, bro; I'm a coin founder too, so I know exactly what they mean. That doesn't mean I wouldn't spin up blobbers if I could, but the private nature of the thing prevents me from doing so, if I'm not mistaken.
Could I or anyone else spin up nodes (blobbers) and participate for passive income?
Very few things are binary like that, 1 or 0, and decentralization is one of them, so your yes/no answer will determine whether it is or not.
-
Anyway, Docker just announced their catalog and toolkit. Each and every MCP out there that scrapers can find (and we're not exactly hiding; that would defeat the purpose) will just get converted to a Docker-compatible format, like mcp-generator does.
The whole internet is the registry now.
I think it's already out of hand.
-
bro, send me your Twitter. I'll follow you and let's communicate
-
@Jeff-Bouchard you're mistaken - you're looking at the initial Eth contract. Our native blockchain and storage code is open source, and our mainnet is one of the fastest in the world with 370ms finality, built from scratch in Golang and for the enterprise. Our storage performance is 5x traditional cloud. MCP is one of our use cases - our core value is breachproof security and faster performance.
Anyways, your discussion on token economics is irrelevant because it doesn't control or affect the blockchain, nor our data layer. Again, I suggest you look at our docs, our explorer, our storage software app built on it, and then argue on specific points why our platform cannot be considered decentralized or make sense for this application.
could I or anyone else spin nodes (blobbers) and participate for passive income?
Yes, one MSP is already doing so and making money on this from their enterprise customers; I have posted their performance results on my LinkedIn posts. You'd use our Chimney app to receive income in tokens from Zus Network and even USD from enterprise customers, if you were to manage the entire solution.
-
I'd like to contribute to this discussion with a solution we've been developing that could address the registry needs outlined here.
The MCP Server Manifest Concept
In our initial proposal, we outlined a standardized approach for MCP server management using a decentralized yet organized system through mcp.json manifest files.
This approach is inspired by both npm's package.json and ESM's URL-based imports, allowing MCP servers to be described consistently while remaining decentralized. The manifest contains standardized metadata including:
The manifest concept creates a middle ground between completely centralized repositories and the current fragmented landscape.
MCPBar: A Reference Implementation
We've recently launched MCPBar, a reference implementation of this concept. MCPBar is a CLI tool that uses the standardized manifest format to simplify discovery, installation, and management of MCP servers across different clients.
MCPBar demonstrates how a registry based on this manifest concept can work in practice, with features like server search, simple installation, and standardized configuration.
Example manifest:
{
  "name": "github",
  "version": "1.0.0",
  "description": "GitHub MCP Server for AI tools using Model Context Protocol.",
  "homepage": "https://github.com/github/github-mcp-server",
  "repository": {
    "type": "git",
    "url": "https://github.com/github/github-mcp-server.git"
  },
  "license": "MIT",
  "keywords": ["mcp", "ai", "github"],
  "inputs": [
    {
      "id": "github_token",
      "type": "promptString",
      "description": "GitHub Personal Access Token",
      "password": true
    }
  ],
  "server": {
    "command": "docker",
    "args": [
      "run", "-i", "--rm",
      "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
      "ghcr.io/github/github-mcp-server"
    ],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github_token}"
    }
  }
}
Usage:
# Install an MCP server
mcpbar install github/github-mcp-server
# Search for servers
mcpbar search github
This approach gives us the best of both worlds - decentralized publication but standardized discovery and installation. MCPBar could serve as the foundation for a standardized registry that addresses the requirements outlined in this discussion.
We'd love to get feedback from the community and potentially align our efforts with the official MCP direction. Full details are available on our blog and GitHub repository.
3 replies
-
The manifest format looks good. Here are a few suggestions:
Use secret: true rather than password: true in inputs.
Use {github_token} rather than ${input:github_token}.
-
I worked on something similar. I should publish the full API + schemas; it's OSS.
There are a lot of problems if you analyze how MCP is rolling out, as there are so many different setups.
-
A great first starting point would be an official spec for how an MCP API can be described in a single, self-contained JSON file, including some info about the server and how to connect, similar to OpenAPI for REST APIs. I would assume that one server can potentially also host multiple MCP APIs.
From there, you can put a thinner publishing / discovery / federation layer on top. Concretely: How you can provide those JSON file(s), how a bigger registry would make the information available via a convenient API and (later maybe) how registries can sync the metadata between them efficiently, without creating conflicts.
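A hypothetical sketch of such a self-contained description file, in the spirit of OpenAPI; none of these fields are part of any official spec:

{
  "mcpDescriptionVersion": "0.1",
  "info": {
    "name": "example-calendar",
    "version": "2.0.0",
    "description": "Calendar tools over MCP"
  },
  "connect": {
    "transport": "sse",
    "url": "https://example.com/mcp/sse",
    "authorization": "oauth2"
  }
}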
0 replies
-
1 reply
-
And some context for how we're approaching this here: modelcontextprotocol/registry#11
Other discussions in that repo have started to parcel out many of the considerations the community raised throughout this thread.
-
Just providing an alternative viewpoint here:
While I agree with this concept in principle - new standards, centralized repositories, and single sources of truth very rarely succeed in practice. We already have established patterns for how users interact with new products and services on the internet: the World Wide Web supported by search engines, online forums, third-party resources, wikis, advertising, etc.
This ecosystem works well precisely because it accommodates the complexities and differences between applications. Trying to conform them to a single standard of governance is going to be virtually impossible, and in some ways even restricts the explorability & potential for LLMs & AI Agents to engage with the ecosystem ("sorry you can't use that app its not a part of the registry"). These existing patterns already effectively guide users to finding websites, tools, and platforms, and it will do the same with discoverability of MCP servers & AI agents.
This proposal feels reminiscent of the early internet, where websites would simply provide lists of resources, an approach we've since evolved beyond because once the internet grew beyond 10 websites, half of which were dancing cats, it was no longer feasible to maintain. Our existing decentralized approach is more flexible and sustainable, and allows for organic growth and innovation, which would likely be severely hampered by a centralized, constrained governance approach.
1 reply
-
Agreed with you @rtyhgfvbn. Having a centralized MCP service list seems unrealistic, but it would be interesting to have a federation system instead - where each organization maintains control over their own MCP catalog while sharing information through a decentralized trust network. This way we could discover services across different providers without sacrificing autonomy or creating a single point of failure.