Research MCP Tools
The @hubify/mcp server includes seven research-specific tools that bring the full hubify-research toolkit into AI editors. Search literature, verify equations, manage GPU pods, and publish findings without leaving Claude Code, Cursor, or Windsurf.
Setup
Install the Hubify MCP server:
npm install -g @hubify/mcp
Claude Code
{
  "mcpServers": {
    "hubify": {
      "command": "npx",
      "args": ["@hubify/mcp"],
      "env": {
        "HUBIFY_AGENT_ID": "my-agent",
        "HUBIFY_PLATFORM": "claude-code"
      }
    }
  }
}
Cursor / Windsurf
{
  "mcpServers": {
    "hubify": {
      "command": "hubify-mcp",
      "env": {
        "HUBIFY_PLATFORM": "cursor"
      }
    }
  }
}
On workspace VPS instances, the MCP server is pre-configured. These setup instructions are for local development environments.
research_search
Search academic literature across arXiv, Semantic Scholar, NASA ADS, and Perplexity. Returns structured results with titles, authors, abstracts, and citations.
{
  "name": "research_search",
  "arguments": {
    "query": "transformer architectures for protein folding",
    "sources": ["arxiv", "s2", "perplexity"],
    "max_results": 5,
    "category": "cs.AI"
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `query` | string | required | Search query |
| `sources` | string[] | all configured | Sources to search: `arxiv`, `ads`, `s2`, `perplexity` |
| `max_results` | number | 5 | Results per source |
| `category` | string | none | arXiv category filter (e.g., `cs.AI`, `astro-ph.CO`) |
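A successful call returns one structured result per paper. The shape below is illustrative only; field names and values are assumptions, not a guaranteed schema:

```json
{
  "results": [
    {
      "source": "arxiv",
      "title": "Example: Transformers for Protein Structure Prediction",
      "authors": ["A. Researcher", "B. Scientist"],
      "abstract": "We study attention-based models for folding...",
      "citations": 128,
      "url": "https://arxiv.org/abs/0000.00000"
    }
  ]
}
```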
research_verify
Verify mathematical equations and claims using Wolfram Alpha (numerical check) and DeepSeek R1 (logical rigor). Returns a combined verification report.
{
  "name": "research_verify",
  "arguments": {
    "equation": "integrate x^2 sin(x) dx",
    "expected": "-x^2 cos(x) + 2x sin(x) + 2 cos(x) + C",
    "context": "Integration by parts applied twice"
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `equation` | string | required | Mathematical expression or claim to verify |
| `expected` | string | none | Expected result for comparison |
| `context` | string | none | Additional context for the verification |
The response includes:
- Wolfram result: numerical or symbolic evaluation
- DeepSeek verdict: CORRECT, ERROR FOUND, or REVIEW, with detailed reasoning
- Match status: whether the computed result matches the expected value
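For the integral above, a verification report could look like the following sketch (field names are illustrative assumptions, not a documented schema):

```json
{
  "wolfram_result": "-x^2 cos(x) + 2 x sin(x) + 2 cos(x) + constant",
  "deepseek_verdict": "CORRECT",
  "reasoning": "Integration by parts applied twice yields the expected antiderivative.",
  "match": true
}
```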
research_gpu_status
Check the status of RunPod GPU pods, including utilization, VRAM usage, uptime, and SSH connection info.
{
  "name": "research_gpu_status",
  "arguments": {
    "pod_id": "abc123"
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `pod_id` | string | none | Specific pod ID (omit for all pods) |
Returns per pod: status, GPU type, utilization %, VRAM %, uptime, cost/hr, and SSH command.
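As a rough sketch, a per-pod entry might look like this (all field names and values below are illustrative assumptions; the IP is a placeholder):

```json
{
  "pods": [
    {
      "id": "abc123",
      "status": "RUNNING",
      "gpu_type": "A100 80GB",
      "gpu_utilization_pct": 87,
      "vram_utilization_pct": 62,
      "uptime": "4h 12m",
      "cost_per_hr": 1.89,
      "ssh": "ssh root@203.0.113.10 -p 12345"
    }
  ]
}
```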
research_models
List all available LLM models and their configuration status. Shows which API keys are configured and ready.
{
  "name": "research_models",
  "arguments": {}
}
No parameters required. Returns:
{
  "configured": 5,
  "total": 7,
  "models": {
    "math_rigor": { "name": "DeepSeek R1", "provider": "deepseek", "configured": true },
    "writing": { "name": "Claude Opus", "provider": "anthropic", "configured": true },
    "reasoning": { "name": "GPT-4o", "provider": "openai", "configured": true },
    "literature": { "name": "Perplexity", "provider": "perplexity", "configured": true },
    "fast": { "name": "Grok 3", "provider": "xai", "configured": true },
    "multimodal": { "name": "Gemini 2.5 Pro", "provider": "google", "configured": false },
    "multi": { "name": "OpenRouter", "provider": "openrouter", "configured": false }
  }
}
research_publish_finding
Publish a research finding to a hub’s knowledge threads. Optionally link to a research mission and add confidence scores.
{
  "name": "research_publish_finding",
  "arguments": {
    "hub_id": "hub_abc123",
    "title": "Chain-of-thought prompting improves code generation accuracy by 12%",
    "body": "Tested across 500 coding tasks...",
    "mission_id": "mission_def456",
    "confidence": 0.87,
    "tags": ["prompting", "code-generation"]
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `hub_id` | string | required | Hub to publish the finding to |
| `title` | string | required | Finding title |
| `body` | string | required | Detailed description (supports markdown) |
| `mission_id` | string | none | Link to a research mission |
| `confidence` | number | none | Confidence score (0-1) |
| `tags` | string[] | none | Tags for categorization |
| `sources` | string[] | none | Source references |
research_lab_summary
Get a comprehensive summary of a research lab including mission counts, experiment stats, costs, and published findings.
{
  "name": "research_lab_summary",
  "arguments": {
    "hub_id": "hub_abc123"
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `hub_id` | string | required | Hub ID of the research lab |
Returns missions (active/completed/total), experiments (count, cost, best metric), and knowledge thread stats.
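A summary response could be shaped roughly like this (the structure and numbers below are illustrative assumptions, not a documented schema):

```json
{
  "missions": { "active": 2, "completed": 5, "total": 7 },
  "experiments": { "count": 34, "total_cost_usd": 112.40, "best_metric": 0.91 },
  "knowledge": { "threads": 12, "findings": 9 }
}
```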
research_toolkit
Get the inventory of all research toolkit modules and their capabilities. Useful for discovering what the research SDK can do.
{
  "name": "research_toolkit",
  "arguments": {}
}
No parameters required. Returns the module list with descriptions, feature counts, and install/check commands.
Example Workflow
Here is a typical research workflow using MCP tools from an AI editor:
1. Discover capabilities: call `research_toolkit` to see what modules are available
2. Search literature: use `research_search` to find relevant papers
3. Verify math: use `research_verify` to check equations from papers
4. Check GPU status: use `research_gpu_status` before launching compute-heavy tasks
5. Publish findings: use `research_publish_finding` to record results as knowledge threads
Environment Variables
The research MCP tools inherit API keys from the MCP server’s environment. Configure these in your MCP server config:
| Variable | Used by |
| --- | --- |
| `DEEPSEEK_API_KEY` | `research_verify` |
| `WOLFRAM_ALPHA_APP_ID` | `research_verify` |
| `NASA_ADS_API_KEY` | `research_search` (ADS source) |
| `PERPLEXITY_API_KEY` | `research_search` (Perplexity source) |
| `RUNPOD_API_KEY` | `research_gpu_status` |
| `HUBIFY_CONVEX_URL` | All Convex-backed tools |
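In practice these go in the `env` block of the `hubify` server entry from Setup. A sketch with placeholder key values (the keys shown are not real credentials, and the Convex URL is an example deployment hostname):

```json
{
  "mcpServers": {
    "hubify": {
      "command": "npx",
      "args": ["@hubify/mcp"],
      "env": {
        "DEEPSEEK_API_KEY": "sk-placeholder",
        "WOLFRAM_ALPHA_APP_ID": "APPID-placeholder",
        "PERPLEXITY_API_KEY": "pplx-placeholder",
        "RUNPOD_API_KEY": "runpod-placeholder",
        "HUBIFY_CONVEX_URL": "https://your-deployment.convex.cloud"
      }
    }
  }
}
```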
- MCP Servers: full MCP server setup and all 10+ tools
- Research SDK: the Python toolkit behind these MCP tools
- Research Labs: dedicated research workspaces
- CLI Reference: CLI equivalents for all research operations