Bittensor is a mining network, similar to Bitcoin, with built-in incentives designed to encourage miners to contribute compute and knowledge.

NIBittensorLLM is developed by Neural Internet and powered by Bittensor. This LLM showcases the potential of decentralized AI by giving you the best response(s) from the Bittensor protocol, which consists of various AI models such as OpenAI, LLaMA2, etc.
Users can view their logs, requests, and API keys on the Validator Endpoint Frontend. However, configuration changes are currently prohibited; attempting them will cause the user's queries to be blocked.

If you encounter any difficulties or have questions, feel free to reach out to our developers on GitHub, or join the Neural Internet Discord server for the latest updates and queries.
Different Parameter and response handling for NIBittensorLLM

import json
from pprint import pprint
from langchain.globals import set_debug
from langchain_community.llms import NIBittensorLLM
set_debug(True)
llm_sys = NIBittensorLLM(
    system_prompt="Your task is to determine a response based on the user prompt. Explain it as if I am the technical lead of a project."
)
sys_resp = llm_sys.invoke(
"What is bittensor and What are the potential benefits of decentralized AI?"
)
print(f"Response provided by LLM with system prompt set is : {sys_resp}")
""" {
"choices": [
{"index": Bittensor's Metagraph index number,
"uid": Unique Identifier of a miner,
"responder_hotkey": Hotkey of a miner,
"message":{"role":"assistant","content": Contains actual response},
"response_ms": Time in millisecond required to fetch response from a miner}
]
} """
multi_response_llm = NIBittensorLLM(top_responses=10)
multi_resp = multi_response_llm.invoke("What is Neural Network Feeding Mechanism?")
json_multi_resp = json.loads(multi_resp)
pprint(json_multi_resp)
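Because the multi-response payload follows the JSON schema documented above, plain Python can select the fastest miner's answer. The sketch below uses a hypothetical sample payload (the values are illustrative, only the keys come from the schema):

```python
import json

# Hypothetical sample payload following the documented schema above.
sample = json.dumps({
    "choices": [
        {"index": 5, "uid": 101, "responder_hotkey": "hk1",
         "message": {"role": "assistant", "content": "Answer A"},
         "response_ms": 820},
        {"index": 9, "uid": 207, "responder_hotkey": "hk2",
         "message": {"role": "assistant", "content": "Answer B"},
         "response_ms": 310},
    ]
})

choices = json.loads(sample)["choices"]
# Pick the miner that responded fastest.
fastest = min(choices, key=lambda c: c["response_ms"])
print(fastest["message"]["content"])  # → Answer B
```

The same `min(..., key=...)` pattern works on the real `json_multi_resp` returned when `top_responses` is set.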
Using NIBittensorLLM with LLMChain and PromptTemplate
from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import NIBittensorLLM
from langchain_core.prompts import PromptTemplate
set_debug(True)
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
llm = NIBittensorLLM(
system_prompt="Your task is to determine response based on user prompt."
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is bittensor?"
llm_chain.run(question)
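Before the LLM is called, the chain fills the `{question}` placeholder in the template. Plain string formatting illustrates the substitution (a sketch of the effect, not LangChain's actual implementation):

```python
template = """Question: {question}
Answer: Let's think step by step."""

# The same substitution PromptTemplate performs for a single variable.
filled = template.format(question="What is bittensor?")
print(filled)
```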
Using NIBittensorLLM with Conversational Agent and Google Search Tool

from langchain_community.utilities import GoogleSearchAPIWrapper
from langchain_core.tools import Tool
search = GoogleSearchAPIWrapper()
tool = Tool(
name="Google Search",
description="Search Google for recent results.",
func=search.run,
)
from langchain import hub
from langchain.agents import (
AgentExecutor,
create_react_agent,
)
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import NIBittensorLLM
tools = [tool]
prompt = hub.pull("hwchase17/react")
llm = NIBittensorLLM(
system_prompt="Your task is to determine a response based on user prompt"
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)
response = agent_executor.invoke({"input": "What is bittensor?"})
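`AgentExecutor.invoke` returns a dict; the agent's final answer is conventionally stored under the `"output"` key. A minimal sketch with a hypothetical result shape (the answer text is made up for illustration):

```python
# Hypothetical result dict mirroring the shape AgentExecutor.invoke returns
# (keys assumed: "input", "chat_history", "output").
result = {
    "input": "What is bittensor?",
    "chat_history": "",
    "output": "Bittensor is a decentralized AI network.",
}

# Extract the final answer the same way you would from the real response.
answer = result["output"]
print(answer)
```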